Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.dataproc/v1.Cluster
Creates a cluster in a project. The returned Operation.metadata will be ClusterOperationMetadata (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#clusteroperationmetadata). Auto-naming is currently not supported for this resource.
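Because auto-naming is not supported, the clusterName argument must always be supplied. As a quick orientation before the full reference material below, here is a minimal TypeScript sketch of creating a small cluster; the project, region, zone, machine types, and instance counts are illustrative assumptions, not values taken from this page.

import * as google_native from "@pulumi/google-native";

// Minimal sketch of a small Dataproc cluster. Every concrete value below
// (project, region, zone, machine types, instance counts) is an assumption
// chosen for illustration.
const cluster = new google_native.dataproc.v1.Cluster("example-cluster", {
    clusterName: "example-cluster", // required: auto-naming is not supported
    project: "my-gcp-project",      // hypothetical project ID
    region: "us-central1",          // hypothetical region
    config: {
        gceClusterConfig: {
            zoneUri: "us-central1-a",
        },
        masterConfig: {
            numInstances: 1,
            machineTypeUri: "n1-standard-4",
        },
        workerConfig: {
            numInstances: 2,
            machineTypeUri: "n1-standard-4",
        },
    },
});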
Create Cluster Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
TypeScript

new Cluster(name: string, args: ClusterArgs, opts?: CustomResourceOptions);

Python

@overload
def Cluster(resource_name: str,
            args: ClusterArgs,
            opts: Optional[ResourceOptions] = None)
@overload
def Cluster(resource_name: str,
            opts: Optional[ResourceOptions] = None,
            cluster_name: Optional[str] = None,
            region: Optional[str] = None,
            action_on_failed_primary_workers: Optional[str] = None,
            config: Optional[ClusterConfigArgs] = None,
            labels: Optional[Mapping[str, str]] = None,
            project: Optional[str] = None,
            request_id: Optional[str] = None,
            virtual_cluster_config: Optional[VirtualClusterConfigArgs] = None)

Go

func NewCluster(ctx *Context, name string, args ClusterArgs, opts ...ResourceOption) (*Cluster, error)

C#

public Cluster(string name, ClusterArgs args, CustomResourceOptions? opts = null)

Java
public Cluster(String name, ClusterArgs args)
public Cluster(String name, ClusterArgs args, CustomResourceOptions options)
YAML

type: google-native:dataproc/v1:Cluster
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters

TypeScript
- name string
- The unique name of the resource.
- args ClusterArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control the resource's behavior.
Python
- resource_name str
- The unique name of the resource.
- args ClusterArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control the resource's behavior.
Go
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args ClusterArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control the resource's behavior.
C#
- name string
- The unique name of the resource.
- args ClusterArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control the resource's behavior.
Java
- name String
- The unique name of the resource.
- args ClusterArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control the resource's behavior.
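To make the name/args/opts triple concrete, the TypeScript sketch below passes explicit resource options; clusterArgs is a hypothetical ClusterArgs value defined elsewhere, and protect and dependsOn are standard CustomResourceOptions fields.

// Hedged sketch: `clusterArgs` and `network` are hypothetical values
// assumed to be defined earlier in the program.
const cluster = new google_native.dataproc.v1.Cluster("example-cluster", clusterArgs, {
    protect: true,        // refuse to delete the cluster unless unprotected first
    dependsOn: [network], // create the cluster only after `network` is ready
});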
Constructor example
The following reference example uses placeholder values for all input properties.

C#
var exampleclusterResourceResourceFromDataprocv1 = new GoogleNative.Dataproc.V1.Cluster("exampleclusterResourceResourceFromDataprocv1", new()
{
    ClusterName = "string",
    Region = "string",
    ActionOnFailedPrimaryWorkers = "string",
    Config = new GoogleNative.Dataproc.V1.Inputs.ClusterConfigArgs
    {
        AutoscalingConfig = new GoogleNative.Dataproc.V1.Inputs.AutoscalingConfigArgs
        {
            PolicyUri = "string",
        },
        AuxiliaryNodeGroups = new[]
        {
            new GoogleNative.Dataproc.V1.Inputs.AuxiliaryNodeGroupArgs
            {
                NodeGroup = new GoogleNative.Dataproc.V1.Inputs.NodeGroupArgs
                {
                    Roles = new[]
                    {
                        GoogleNative.Dataproc.V1.NodeGroupRolesItem.RoleUnspecified,
                    },
                    Labels = 
                    {
                        { "string", "string" },
                    },
                    Name = "string",
                    NodeGroupConfig = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
                    {
                        Accelerators = new[]
                        {
                            new GoogleNative.Dataproc.V1.Inputs.AcceleratorConfigArgs
                            {
                                AcceleratorCount = 0,
                                AcceleratorTypeUri = "string",
                            },
                        },
                        DiskConfig = new GoogleNative.Dataproc.V1.Inputs.DiskConfigArgs
                        {
                            BootDiskSizeGb = 0,
                            BootDiskType = "string",
                            LocalSsdInterface = "string",
                            NumLocalSsds = 0,
                        },
                        ImageUri = "string",
                        InstanceFlexibilityPolicy = new GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicyArgs
                        {
                            InstanceSelectionList = new[]
                            {
                                new GoogleNative.Dataproc.V1.Inputs.InstanceSelectionArgs
                                {
                                    MachineTypes = new[]
                                    {
                                        "string",
                                    },
                                    Rank = 0,
                                },
                            },
                        },
                        MachineTypeUri = "string",
                        MinCpuPlatform = "string",
                        MinNumInstances = 0,
                        NumInstances = 0,
                        Preemptibility = GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
                        StartupConfig = new GoogleNative.Dataproc.V1.Inputs.StartupConfigArgs
                        {
                            RequiredRegistrationFraction = 0,
                        },
                    },
                },
                NodeGroupId = "string",
            },
        },
        ConfigBucket = "string",
        DataprocMetricConfig = new GoogleNative.Dataproc.V1.Inputs.DataprocMetricConfigArgs
        {
            Metrics = new[]
            {
                new GoogleNative.Dataproc.V1.Inputs.MetricArgs
                {
                    MetricSource = GoogleNative.Dataproc.V1.MetricMetricSource.MetricSourceUnspecified,
                    MetricOverrides = new[]
                    {
                        "string",
                    },
                },
            },
        },
        EncryptionConfig = new GoogleNative.Dataproc.V1.Inputs.EncryptionConfigArgs
        {
            GcePdKmsKeyName = "string",
            KmsKey = "string",
        },
        EndpointConfig = new GoogleNative.Dataproc.V1.Inputs.EndpointConfigArgs
        {
            EnableHttpPortAccess = false,
        },
        GceClusterConfig = new GoogleNative.Dataproc.V1.Inputs.GceClusterConfigArgs
        {
            ConfidentialInstanceConfig = new GoogleNative.Dataproc.V1.Inputs.ConfidentialInstanceConfigArgs
            {
                EnableConfidentialCompute = false,
            },
            InternalIpOnly = false,
            Metadata = 
            {
                { "string", "string" },
            },
            NetworkUri = "string",
            NodeGroupAffinity = new GoogleNative.Dataproc.V1.Inputs.NodeGroupAffinityArgs
            {
                NodeGroupUri = "string",
            },
            PrivateIpv6GoogleAccess = GoogleNative.Dataproc.V1.GceClusterConfigPrivateIpv6GoogleAccess.PrivateIpv6GoogleAccessUnspecified,
            ReservationAffinity = new GoogleNative.Dataproc.V1.Inputs.ReservationAffinityArgs
            {
                ConsumeReservationType = GoogleNative.Dataproc.V1.ReservationAffinityConsumeReservationType.TypeUnspecified,
                Key = "string",
                Values = new[]
                {
                    "string",
                },
            },
            ServiceAccount = "string",
            ServiceAccountScopes = new[]
            {
                "string",
            },
            ShieldedInstanceConfig = new GoogleNative.Dataproc.V1.Inputs.ShieldedInstanceConfigArgs
            {
                EnableIntegrityMonitoring = false,
                EnableSecureBoot = false,
                EnableVtpm = false,
            },
            SubnetworkUri = "string",
            Tags = new[]
            {
                "string",
            },
            ZoneUri = "string",
        },
        GkeClusterConfig = new GoogleNative.Dataproc.V1.Inputs.GkeClusterConfigArgs
        {
            GkeClusterTarget = "string",
            NodePoolTarget = new[]
            {
                new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolTargetArgs
                {
                    NodePool = "string",
                    Roles = new[]
                    {
                        GoogleNative.Dataproc.V1.GkeNodePoolTargetRolesItem.RoleUnspecified,
                    },
                    NodePoolConfig = new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolConfigArgs
                    {
                        Autoscaling = new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAutoscalingConfigArgs
                        {
                            MaxNodeCount = 0,
                            MinNodeCount = 0,
                        },
                        Config = new GoogleNative.Dataproc.V1.Inputs.GkeNodeConfigArgs
                        {
                            Accelerators = new[]
                            {
                                new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAcceleratorConfigArgs
                                {
                                    AcceleratorCount = "string",
                                    AcceleratorType = "string",
                                    GpuPartitionSize = "string",
                                },
                            },
                            BootDiskKmsKey = "string",
                            LocalSsdCount = 0,
                            MachineType = "string",
                            MinCpuPlatform = "string",
                            Preemptible = false,
                            Spot = false,
                        },
                        Locations = new[]
                        {
                            "string",
                        },
                    },
                },
            },
        },
        InitializationActions = new[]
        {
            new GoogleNative.Dataproc.V1.Inputs.NodeInitializationActionArgs
            {
                ExecutableFile = "string",
                ExecutionTimeout = "string",
            },
        },
        LifecycleConfig = new GoogleNative.Dataproc.V1.Inputs.LifecycleConfigArgs
        {
            AutoDeleteTime = "string",
            AutoDeleteTtl = "string",
            IdleDeleteTtl = "string",
        },
        MasterConfig = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
        {
            Accelerators = new[]
            {
                new GoogleNative.Dataproc.V1.Inputs.AcceleratorConfigArgs
                {
                    AcceleratorCount = 0,
                    AcceleratorTypeUri = "string",
                },
            },
            DiskConfig = new GoogleNative.Dataproc.V1.Inputs.DiskConfigArgs
            {
                BootDiskSizeGb = 0,
                BootDiskType = "string",
                LocalSsdInterface = "string",
                NumLocalSsds = 0,
            },
            ImageUri = "string",
            InstanceFlexibilityPolicy = new GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicyArgs
            {
                InstanceSelectionList = new[]
                {
                    new GoogleNative.Dataproc.V1.Inputs.InstanceSelectionArgs
                    {
                        MachineTypes = new[]
                        {
                            "string",
                        },
                        Rank = 0,
                    },
                },
            },
            MachineTypeUri = "string",
            MinCpuPlatform = "string",
            MinNumInstances = 0,
            NumInstances = 0,
            Preemptibility = GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
            StartupConfig = new GoogleNative.Dataproc.V1.Inputs.StartupConfigArgs
            {
                RequiredRegistrationFraction = 0,
            },
        },
        MetastoreConfig = new GoogleNative.Dataproc.V1.Inputs.MetastoreConfigArgs
        {
            DataprocMetastoreService = "string",
        },
        SecondaryWorkerConfig = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
        {
            Accelerators = new[]
            {
                new GoogleNative.Dataproc.V1.Inputs.AcceleratorConfigArgs
                {
                    AcceleratorCount = 0,
                    AcceleratorTypeUri = "string",
                },
            },
            DiskConfig = new GoogleNative.Dataproc.V1.Inputs.DiskConfigArgs
            {
                BootDiskSizeGb = 0,
                BootDiskType = "string",
                LocalSsdInterface = "string",
                NumLocalSsds = 0,
            },
            ImageUri = "string",
            InstanceFlexibilityPolicy = new GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicyArgs
            {
                InstanceSelectionList = new[]
                {
                    new GoogleNative.Dataproc.V1.Inputs.InstanceSelectionArgs
                    {
                        MachineTypes = new[]
                        {
                            "string",
                        },
                        Rank = 0,
                    },
                },
            },
            MachineTypeUri = "string",
            MinCpuPlatform = "string",
            MinNumInstances = 0,
            NumInstances = 0,
            Preemptibility = GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
            StartupConfig = new GoogleNative.Dataproc.V1.Inputs.StartupConfigArgs
            {
                RequiredRegistrationFraction = 0,
            },
        },
        SecurityConfig = new GoogleNative.Dataproc.V1.Inputs.SecurityConfigArgs
        {
            IdentityConfig = new GoogleNative.Dataproc.V1.Inputs.IdentityConfigArgs
            {
                UserServiceAccountMapping = 
                {
                    { "string", "string" },
                },
            },
            KerberosConfig = new GoogleNative.Dataproc.V1.Inputs.KerberosConfigArgs
            {
                CrossRealmTrustAdminServer = "string",
                CrossRealmTrustKdc = "string",
                CrossRealmTrustRealm = "string",
                CrossRealmTrustSharedPasswordUri = "string",
                EnableKerberos = false,
                KdcDbKeyUri = "string",
                KeyPasswordUri = "string",
                KeystorePasswordUri = "string",
                KeystoreUri = "string",
                KmsKeyUri = "string",
                Realm = "string",
                RootPrincipalPasswordUri = "string",
                TgtLifetimeHours = 0,
                TruststorePasswordUri = "string",
                TruststoreUri = "string",
            },
        },
        SoftwareConfig = new GoogleNative.Dataproc.V1.Inputs.SoftwareConfigArgs
        {
            ImageVersion = "string",
            OptionalComponents = new[]
            {
                GoogleNative.Dataproc.V1.SoftwareConfigOptionalComponentsItem.ComponentUnspecified,
            },
            Properties = 
            {
                { "string", "string" },
            },
        },
        TempBucket = "string",
        WorkerConfig = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
        {
            Accelerators = new[]
            {
                new GoogleNative.Dataproc.V1.Inputs.AcceleratorConfigArgs
                {
                    AcceleratorCount = 0,
                    AcceleratorTypeUri = "string",
                },
            },
            DiskConfig = new GoogleNative.Dataproc.V1.Inputs.DiskConfigArgs
            {
                BootDiskSizeGb = 0,
                BootDiskType = "string",
                LocalSsdInterface = "string",
                NumLocalSsds = 0,
            },
            ImageUri = "string",
            InstanceFlexibilityPolicy = new GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicyArgs
            {
                InstanceSelectionList = new[]
                {
                    new GoogleNative.Dataproc.V1.Inputs.InstanceSelectionArgs
                    {
                        MachineTypes = new[]
                        {
                            "string",
                        },
                        Rank = 0,
                    },
                },
            },
            MachineTypeUri = "string",
            MinCpuPlatform = "string",
            MinNumInstances = 0,
            NumInstances = 0,
            Preemptibility = GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
            StartupConfig = new GoogleNative.Dataproc.V1.Inputs.StartupConfigArgs
            {
                RequiredRegistrationFraction = 0,
            },
        },
    },
    Labels = 
    {
        { "string", "string" },
    },
    Project = "string",
    RequestId = "string",
    VirtualClusterConfig = new GoogleNative.Dataproc.V1.Inputs.VirtualClusterConfigArgs
    {
        KubernetesClusterConfig = new GoogleNative.Dataproc.V1.Inputs.KubernetesClusterConfigArgs
        {
            GkeClusterConfig = new GoogleNative.Dataproc.V1.Inputs.GkeClusterConfigArgs
            {
                GkeClusterTarget = "string",
                NodePoolTarget = new[]
                {
                    new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolTargetArgs
                    {
                        NodePool = "string",
                        Roles = new[]
                        {
                            GoogleNative.Dataproc.V1.GkeNodePoolTargetRolesItem.RoleUnspecified,
                        },
                        NodePoolConfig = new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolConfigArgs
                        {
                            Autoscaling = new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAutoscalingConfigArgs
                            {
                                MaxNodeCount = 0,
                                MinNodeCount = 0,
                            },
                            Config = new GoogleNative.Dataproc.V1.Inputs.GkeNodeConfigArgs
                            {
                                Accelerators = new[]
                                {
                                    new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAcceleratorConfigArgs
                                    {
                                        AcceleratorCount = "string",
                                        AcceleratorType = "string",
                                        GpuPartitionSize = "string",
                                    },
                                },
                                BootDiskKmsKey = "string",
                                LocalSsdCount = 0,
                                MachineType = "string",
                                MinCpuPlatform = "string",
                                Preemptible = false,
                                Spot = false,
                            },
                            Locations = new[]
                            {
                                "string",
                            },
                        },
                    },
                },
            },
            KubernetesNamespace = "string",
            KubernetesSoftwareConfig = new GoogleNative.Dataproc.V1.Inputs.KubernetesSoftwareConfigArgs
            {
                ComponentVersion = 
                {
                    { "string", "string" },
                },
                Properties = 
                {
                    { "string", "string" },
                },
            },
        },
        AuxiliaryServicesConfig = new GoogleNative.Dataproc.V1.Inputs.AuxiliaryServicesConfigArgs
        {
            MetastoreConfig = new GoogleNative.Dataproc.V1.Inputs.MetastoreConfigArgs
            {
                DataprocMetastoreService = "string",
            },
            SparkHistoryServerConfig = new GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfigArgs
            {
                DataprocCluster = "string",
            },
        },
        StagingBucket = "string",
    },
});
Go

example, err := dataproc.NewCluster(ctx, "exampleclusterResourceResourceFromDataprocv1", &dataproc.ClusterArgs{
	ClusterName:                  pulumi.String("string"),
	Region:                       pulumi.String("string"),
	ActionOnFailedPrimaryWorkers: pulumi.String("string"),
	Config: &dataproc.ClusterConfigArgs{
		AutoscalingConfig: &dataproc.AutoscalingConfigArgs{
			PolicyUri: pulumi.String("string"),
		},
		AuxiliaryNodeGroups: dataproc.AuxiliaryNodeGroupArray{
			&dataproc.AuxiliaryNodeGroupArgs{
				NodeGroup: &dataproc.NodeGroupTypeArgs{
					Roles: dataproc.NodeGroupRolesItemArray{
						dataproc.NodeGroupRolesItemRoleUnspecified,
					},
					Labels: pulumi.StringMap{
						"string": pulumi.String("string"),
					},
					Name: pulumi.String("string"),
					NodeGroupConfig: &dataproc.InstanceGroupConfigArgs{
						Accelerators: dataproc.AcceleratorConfigArray{
							&dataproc.AcceleratorConfigArgs{
								AcceleratorCount:   pulumi.Int(0),
								AcceleratorTypeUri: pulumi.String("string"),
							},
						},
						DiskConfig: &dataproc.DiskConfigArgs{
							BootDiskSizeGb:    pulumi.Int(0),
							BootDiskType:      pulumi.String("string"),
							LocalSsdInterface: pulumi.String("string"),
							NumLocalSsds:      pulumi.Int(0),
						},
						ImageUri: pulumi.String("string"),
						InstanceFlexibilityPolicy: &dataproc.InstanceFlexibilityPolicyArgs{
							InstanceSelectionList: dataproc.InstanceSelectionArray{
								&dataproc.InstanceSelectionArgs{
									MachineTypes: pulumi.StringArray{
										pulumi.String("string"),
									},
									Rank: pulumi.Int(0),
								},
							},
						},
						MachineTypeUri:  pulumi.String("string"),
						MinCpuPlatform:  pulumi.String("string"),
						MinNumInstances: pulumi.Int(0),
						NumInstances:    pulumi.Int(0),
						Preemptibility:  dataproc.InstanceGroupConfigPreemptibilityPreemptibilityUnspecified,
						StartupConfig: &dataproc.StartupConfigArgs{
							RequiredRegistrationFraction: pulumi.Float64(0),
						},
					},
				},
				NodeGroupId: pulumi.String("string"),
			},
		},
		ConfigBucket: pulumi.String("string"),
		DataprocMetricConfig: &dataproc.DataprocMetricConfigArgs{
			Metrics: dataproc.MetricArray{
				&dataproc.MetricArgs{
					MetricSource: dataproc.MetricMetricSourceMetricSourceUnspecified,
					MetricOverrides: pulumi.StringArray{
						pulumi.String("string"),
					},
				},
			},
		},
		EncryptionConfig: &dataproc.EncryptionConfigArgs{
			GcePdKmsKeyName: pulumi.String("string"),
			KmsKey:          pulumi.String("string"),
		},
		EndpointConfig: &dataproc.EndpointConfigArgs{
			EnableHttpPortAccess: pulumi.Bool(false),
		},
		GceClusterConfig: &dataproc.GceClusterConfigArgs{
			ConfidentialInstanceConfig: &dataproc.ConfidentialInstanceConfigArgs{
				EnableConfidentialCompute: pulumi.Bool(false),
			},
			InternalIpOnly: pulumi.Bool(false),
			Metadata: pulumi.StringMap{
				"string": pulumi.String("string"),
			},
			NetworkUri: pulumi.String("string"),
			NodeGroupAffinity: &dataproc.NodeGroupAffinityArgs{
				NodeGroupUri: pulumi.String("string"),
			},
			PrivateIpv6GoogleAccess: dataproc.GceClusterConfigPrivateIpv6GoogleAccessPrivateIpv6GoogleAccessUnspecified,
			ReservationAffinity: &dataproc.ReservationAffinityArgs{
				ConsumeReservationType: dataproc.ReservationAffinityConsumeReservationTypeTypeUnspecified,
				Key:                    pulumi.String("string"),
				Values: pulumi.StringArray{
					pulumi.String("string"),
				},
			},
			ServiceAccount: pulumi.String("string"),
			ServiceAccountScopes: pulumi.StringArray{
				pulumi.String("string"),
			},
			ShieldedInstanceConfig: &dataproc.ShieldedInstanceConfigArgs{
				EnableIntegrityMonitoring: pulumi.Bool(false),
				EnableSecureBoot:          pulumi.Bool(false),
				EnableVtpm:                pulumi.Bool(false),
			},
			SubnetworkUri: pulumi.String("string"),
			Tags: pulumi.StringArray{
				pulumi.String("string"),
			},
			ZoneUri: pulumi.String("string"),
		},
		GkeClusterConfig: &dataproc.GkeClusterConfigArgs{
			GkeClusterTarget: pulumi.String("string"),
			NodePoolTarget: dataproc.GkeNodePoolTargetArray{
				&dataproc.GkeNodePoolTargetArgs{
					NodePool: pulumi.String("string"),
					Roles: dataproc.GkeNodePoolTargetRolesItemArray{
						dataproc.GkeNodePoolTargetRolesItemRoleUnspecified,
					},
					NodePoolConfig: &dataproc.GkeNodePoolConfigArgs{
						Autoscaling: &dataproc.GkeNodePoolAutoscalingConfigArgs{
							MaxNodeCount: pulumi.Int(0),
							MinNodeCount: pulumi.Int(0),
						},
						Config: &dataproc.GkeNodeConfigArgs{
							Accelerators: dataproc.GkeNodePoolAcceleratorConfigArray{
								&dataproc.GkeNodePoolAcceleratorConfigArgs{
									AcceleratorCount: pulumi.String("string"),
									AcceleratorType:  pulumi.String("string"),
									GpuPartitionSize: pulumi.String("string"),
								},
							},
							BootDiskKmsKey: pulumi.String("string"),
							LocalSsdCount:  pulumi.Int(0),
							MachineType:    pulumi.String("string"),
							MinCpuPlatform: pulumi.String("string"),
							Preemptible:    pulumi.Bool(false),
							Spot:           pulumi.Bool(false),
						},
						Locations: pulumi.StringArray{
							pulumi.String("string"),
						},
					},
				},
			},
		},
		InitializationActions: dataproc.NodeInitializationActionArray{
			&dataproc.NodeInitializationActionArgs{
				ExecutableFile:   pulumi.String("string"),
				ExecutionTimeout: pulumi.String("string"),
			},
		},
		LifecycleConfig: &dataproc.LifecycleConfigArgs{
			AutoDeleteTime: pulumi.String("string"),
			AutoDeleteTtl:  pulumi.String("string"),
			IdleDeleteTtl:  pulumi.String("string"),
		},
		MasterConfig: &dataproc.InstanceGroupConfigArgs{
			Accelerators: dataproc.AcceleratorConfigArray{
				&dataproc.AcceleratorConfigArgs{
					AcceleratorCount:   pulumi.Int(0),
					AcceleratorTypeUri: pulumi.String("string"),
				},
			},
			DiskConfig: &dataproc.DiskConfigArgs{
				BootDiskSizeGb:    pulumi.Int(0),
				BootDiskType:      pulumi.String("string"),
				LocalSsdInterface: pulumi.String("string"),
				NumLocalSsds:      pulumi.Int(0),
			},
			ImageUri: pulumi.String("string"),
			InstanceFlexibilityPolicy: &dataproc.InstanceFlexibilityPolicyArgs{
				InstanceSelectionList: dataproc.InstanceSelectionArray{
					&dataproc.InstanceSelectionArgs{
						MachineTypes: pulumi.StringArray{
							pulumi.String("string"),
						},
						Rank: pulumi.Int(0),
					},
				},
			},
			MachineTypeUri:  pulumi.String("string"),
			MinCpuPlatform:  pulumi.String("string"),
			MinNumInstances: pulumi.Int(0),
			NumInstances:    pulumi.Int(0),
			Preemptibility:  dataproc.InstanceGroupConfigPreemptibilityPreemptibilityUnspecified,
			StartupConfig: &dataproc.StartupConfigArgs{
				RequiredRegistrationFraction: pulumi.Float64(0),
			},
		},
		MetastoreConfig: &dataproc.MetastoreConfigArgs{
			DataprocMetastoreService: pulumi.String("string"),
		},
		SecondaryWorkerConfig: &dataproc.InstanceGroupConfigArgs{
			Accelerators: dataproc.AcceleratorConfigArray{
				&dataproc.AcceleratorConfigArgs{
					AcceleratorCount:   pulumi.Int(0),
					AcceleratorTypeUri: pulumi.String("string"),
				},
			},
			DiskConfig: &dataproc.DiskConfigArgs{
				BootDiskSizeGb:    pulumi.Int(0),
				BootDiskType:      pulumi.String("string"),
				LocalSsdInterface: pulumi.String("string"),
				NumLocalSsds:      pulumi.Int(0),
			},
			ImageUri: pulumi.String("string"),
			InstanceFlexibilityPolicy: &dataproc.InstanceFlexibilityPolicyArgs{
				InstanceSelectionList: dataproc.InstanceSelectionArray{
					&dataproc.InstanceSelectionArgs{
						MachineTypes: pulumi.StringArray{
							pulumi.String("string"),
						},
						Rank: pulumi.Int(0),
					},
				},
			},
			MachineTypeUri:  pulumi.String("string"),
			MinCpuPlatform:  pulumi.String("string"),
			MinNumInstances: pulumi.Int(0),
			NumInstances:    pulumi.Int(0),
			Preemptibility:  dataproc.InstanceGroupConfigPreemptibilityPreemptibilityUnspecified,
			StartupConfig: &dataproc.StartupConfigArgs{
				RequiredRegistrationFraction: pulumi.Float64(0),
			},
		},
		SecurityConfig: &dataproc.SecurityConfigArgs{
			IdentityConfig: &dataproc.IdentityConfigArgs{
				UserServiceAccountMapping: pulumi.StringMap{
					"string": pulumi.String("string"),
				},
			},
			KerberosConfig: &dataproc.KerberosConfigArgs{
				CrossRealmTrustAdminServer:       pulumi.String("string"),
				CrossRealmTrustKdc:               pulumi.String("string"),
				CrossRealmTrustRealm:             pulumi.String("string"),
				CrossRealmTrustSharedPasswordUri: pulumi.String("string"),
				EnableKerberos:                   pulumi.Bool(false),
				KdcDbKeyUri:                      pulumi.String("string"),
				KeyPasswordUri:                   pulumi.String("string"),
				KeystorePasswordUri:              pulumi.String("string"),
				KeystoreUri:                      pulumi.String("string"),
				KmsKeyUri:                        pulumi.String("string"),
				Realm:                            pulumi.String("string"),
				RootPrincipalPasswordUri:         pulumi.String("string"),
				TgtLifetimeHours:                 pulumi.Int(0),
				TruststorePasswordUri:            pulumi.String("string"),
				TruststoreUri:                    pulumi.String("string"),
			},
		},
		SoftwareConfig: &dataproc.SoftwareConfigArgs{
			ImageVersion: pulumi.String("string"),
			OptionalComponents: dataproc.SoftwareConfigOptionalComponentsItemArray{
				dataproc.SoftwareConfigOptionalComponentsItemComponentUnspecified,
			},
			Properties: pulumi.StringMap{
				"string": pulumi.String("string"),
			},
		},
		TempBucket: pulumi.String("string"),
		WorkerConfig: &dataproc.InstanceGroupConfigArgs{
			Accelerators: dataproc.AcceleratorConfigArray{
				&dataproc.AcceleratorConfigArgs{
					AcceleratorCount:   pulumi.Int(0),
					AcceleratorTypeUri: pulumi.String("string"),
				},
			},
			DiskConfig: &dataproc.DiskConfigArgs{
				BootDiskSizeGb:    pulumi.Int(0),
				BootDiskType:      pulumi.String("string"),
				LocalSsdInterface: pulumi.String("string"),
				NumLocalSsds:      pulumi.Int(0),
			},
			ImageUri: pulumi.String("string"),
			InstanceFlexibilityPolicy: &dataproc.InstanceFlexibilityPolicyArgs{
				InstanceSelectionList: dataproc.InstanceSelectionArray{
					&dataproc.InstanceSelectionArgs{
						MachineTypes: pulumi.StringArray{
							pulumi.String("string"),
						},
						Rank: pulumi.Int(0),
					},
				},
			},
			MachineTypeUri:  pulumi.String("string"),
			MinCpuPlatform:  pulumi.String("string"),
			MinNumInstances: pulumi.Int(0),
			NumInstances:    pulumi.Int(0),
			Preemptibility:  dataproc.InstanceGroupConfigPreemptibilityPreemptibilityUnspecified,
			StartupConfig: &dataproc.StartupConfigArgs{
				RequiredRegistrationFraction: pulumi.Float64(0),
			},
		},
	},
	Labels: pulumi.StringMap{
		"string": pulumi.String("string"),
	},
	Project:   pulumi.String("string"),
	RequestId: pulumi.String("string"),
	VirtualClusterConfig: &dataproc.VirtualClusterConfigArgs{
		KubernetesClusterConfig: &dataproc.KubernetesClusterConfigArgs{
			GkeClusterConfig: &dataproc.GkeClusterConfigArgs{
				GkeClusterTarget: pulumi.String("string"),
				NodePoolTarget: dataproc.GkeNodePoolTargetArray{
					&dataproc.GkeNodePoolTargetArgs{
						NodePool: pulumi.String("string"),
						Roles: dataproc.GkeNodePoolTargetRolesItemArray{
							dataproc.GkeNodePoolTargetRolesItemRoleUnspecified,
						},
						NodePoolConfig: &dataproc.GkeNodePoolConfigArgs{
							Autoscaling: &dataproc.GkeNodePoolAutoscalingConfigArgs{
								MaxNodeCount: pulumi.Int(0),
								MinNodeCount: pulumi.Int(0),
							},
							Config: &dataproc.GkeNodeConfigArgs{
								Accelerators: dataproc.GkeNodePoolAcceleratorConfigArray{
									&dataproc.GkeNodePoolAcceleratorConfigArgs{
										AcceleratorCount: pulumi.String("string"),
										AcceleratorType:  pulumi.String("string"),
										GpuPartitionSize: pulumi.String("string"),
									},
								},
								BootDiskKmsKey: pulumi.String("string"),
								LocalSsdCount:  pulumi.Int(0),
								MachineType:    pulumi.String("string"),
								MinCpuPlatform: pulumi.String("string"),
								Preemptible:    pulumi.Bool(false),
								Spot:           pulumi.Bool(false),
							},
							Locations: pulumi.StringArray{
								pulumi.String("string"),
							},
						},
					},
				},
			},
			KubernetesNamespace: pulumi.String("string"),
			KubernetesSoftwareConfig: &dataproc.KubernetesSoftwareConfigArgs{
				ComponentVersion: pulumi.StringMap{
					"string": pulumi.String("string"),
				},
				Properties: pulumi.StringMap{
					"string": pulumi.String("string"),
				},
			},
		},
		AuxiliaryServicesConfig: &dataproc.AuxiliaryServicesConfigArgs{
			MetastoreConfig: &dataproc.MetastoreConfigArgs{
				DataprocMetastoreService: pulumi.String("string"),
			},
			SparkHistoryServerConfig: &dataproc.SparkHistoryServerConfigArgs{
				DataprocCluster: pulumi.String("string"),
			},
		},
		StagingBucket: pulumi.String("string"),
	},
})
Java

var exampleclusterResourceResourceFromDataprocv1 = new Cluster("exampleclusterResourceResourceFromDataprocv1", ClusterArgs.builder()
    .clusterName("string")
    .region("string")
    .actionOnFailedPrimaryWorkers("string")
    .config(ClusterConfigArgs.builder()
        .autoscalingConfig(AutoscalingConfigArgs.builder()
            .policyUri("string")
            .build())
        .auxiliaryNodeGroups(AuxiliaryNodeGroupArgs.builder()
            .nodeGroup(NodeGroupArgs.builder()
                .roles("ROLE_UNSPECIFIED")
                .labels(Map.of("string", "string"))
                .name("string")
                .nodeGroupConfig(InstanceGroupConfigArgs.builder()
                    .accelerators(AcceleratorConfigArgs.builder()
                        .acceleratorCount(0)
                        .acceleratorTypeUri("string")
                        .build())
                    .diskConfig(DiskConfigArgs.builder()
                        .bootDiskSizeGb(0)
                        .bootDiskType("string")
                        .localSsdInterface("string")
                        .numLocalSsds(0)
                        .build())
                    .imageUri("string")
                    .instanceFlexibilityPolicy(InstanceFlexibilityPolicyArgs.builder()
                        .instanceSelectionList(InstanceSelectionArgs.builder()
                            .machineTypes("string")
                            .rank(0)
                            .build())
                        .build())
                    .machineTypeUri("string")
                    .minCpuPlatform("string")
                    .minNumInstances(0)
                    .numInstances(0)
                    .preemptibility("PREEMPTIBILITY_UNSPECIFIED")
                    .startupConfig(StartupConfigArgs.builder()
                        .requiredRegistrationFraction(0)
                        .build())
                    .build())
                .build())
            .nodeGroupId("string")
            .build())
        .configBucket("string")
        .dataprocMetricConfig(DataprocMetricConfigArgs.builder()
            .metrics(MetricArgs.builder()
                .metricSource("METRIC_SOURCE_UNSPECIFIED")
                .metricOverrides("string")
                .build())
            .build())
        .encryptionConfig(EncryptionConfigArgs.builder()
            .gcePdKmsKeyName("string")
            .kmsKey("string")
            .build())
        .endpointConfig(EndpointConfigArgs.builder()
            .enableHttpPortAccess(false)
            .build())
        .gceClusterConfig(GceClusterConfigArgs.builder()
            .confidentialInstanceConfig(ConfidentialInstanceConfigArgs.builder()
                .enableConfidentialCompute(false)
                .build())
            .internalIpOnly(false)
            .metadata(Map.of("string", "string"))
            .networkUri("string")
            .nodeGroupAffinity(NodeGroupAffinityArgs.builder()
                .nodeGroupUri("string")
                .build())
            .privateIpv6GoogleAccess("PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED")
            .reservationAffinity(ReservationAffinityArgs.builder()
                .consumeReservationType("TYPE_UNSPECIFIED")
                .key("string")
                .values("string")
                .build())
            .serviceAccount("string")
            .serviceAccountScopes("string")
            .shieldedInstanceConfig(ShieldedInstanceConfigArgs.builder()
                .enableIntegrityMonitoring(false)
                .enableSecureBoot(false)
                .enableVtpm(false)
                .build())
            .subnetworkUri("string")
            .tags("string")
            .zoneUri("string")
            .build())
        .gkeClusterConfig(GkeClusterConfigArgs.builder()
            .gkeClusterTarget("string")
            .nodePoolTarget(GkeNodePoolTargetArgs.builder()
                .nodePool("string")
                .roles("ROLE_UNSPECIFIED")
                .nodePoolConfig(GkeNodePoolConfigArgs.builder()
                    .autoscaling(GkeNodePoolAutoscalingConfigArgs.builder()
                        .maxNodeCount(0)
                        .minNodeCount(0)
                        .build())
                    .config(GkeNodeConfigArgs.builder()
                        .accelerators(GkeNodePoolAcceleratorConfigArgs.builder()
                            .acceleratorCount("string")
                            .acceleratorType("string")
                            .gpuPartitionSize("string")
                            .build())
                        .bootDiskKmsKey("string")
                        .localSsdCount(0)
                        .machineType("string")
                        .minCpuPlatform("string")
                        .preemptible(false)
                        .spot(false)
                        .build())
                    .locations("string")
                    .build())
                .build())
            .build())
        .initializationActions(NodeInitializationActionArgs.builder()
            .executableFile("string")
            .executionTimeout("string")
            .build())
        .lifecycleConfig(LifecycleConfigArgs.builder()
            .autoDeleteTime("string")
            .autoDeleteTtl("string")
            .idleDeleteTtl("string")
            .build())
        .masterConfig(InstanceGroupConfigArgs.builder()
            .accelerators(AcceleratorConfigArgs.builder()
                .acceleratorCount(0)
                .acceleratorTypeUri("string")
                .build())
            .diskConfig(DiskConfigArgs.builder()
                .bootDiskSizeGb(0)
                .bootDiskType("string")
                .localSsdInterface("string")
                .numLocalSsds(0)
                .build())
            .imageUri("string")
            .instanceFlexibilityPolicy(InstanceFlexibilityPolicyArgs.builder()
                .instanceSelectionList(InstanceSelectionArgs.builder()
                    .machineTypes("string")
                    .rank(0)
                    .build())
                .build())
            .machineTypeUri("string")
            .minCpuPlatform("string")
            .minNumInstances(0)
            .numInstances(0)
            .preemptibility("PREEMPTIBILITY_UNSPECIFIED")
            .startupConfig(StartupConfigArgs.builder()
                .requiredRegistrationFraction(0)
                .build())
            .build())
        .metastoreConfig(MetastoreConfigArgs.builder()
            .dataprocMetastoreService("string")
            .build())
        .secondaryWorkerConfig(InstanceGroupConfigArgs.builder()
            .accelerators(AcceleratorConfigArgs.builder()
                .acceleratorCount(0)
                .acceleratorTypeUri("string")
                .build())
            .diskConfig(DiskConfigArgs.builder()
                .bootDiskSizeGb(0)
                .bootDiskType("string")
                .localSsdInterface("string")
                .numLocalSsds(0)
                .build())
            .imageUri("string")
            .instanceFlexibilityPolicy(InstanceFlexibilityPolicyArgs.builder()
                .instanceSelectionList(InstanceSelectionArgs.builder()
                    .machineTypes("string")
                    .rank(0)
                    .build())
                .build())
            .machineTypeUri("string")
            .minCpuPlatform("string")
            .minNumInstances(0)
            .numInstances(0)
            .preemptibility("PREEMPTIBILITY_UNSPECIFIED")
            .startupConfig(StartupConfigArgs.builder()
                .requiredRegistrationFraction(0)
                .build())
            .build())
        .securityConfig(SecurityConfigArgs.builder()
            .identityConfig(IdentityConfigArgs.builder()
                .userServiceAccountMapping(Map.of("string", "string"))
                .build())
            .kerberosConfig(KerberosConfigArgs.builder()
                .crossRealmTrustAdminServer("string")
                .crossRealmTrustKdc("string")
                .crossRealmTrustRealm("string")
                .crossRealmTrustSharedPasswordUri("string")
                .enableKerberos(false)
                .kdcDbKeyUri("string")
                .keyPasswordUri("string")
                .keystorePasswordUri("string")
                .keystoreUri("string")
                .kmsKeyUri("string")
                .realm("string")
                .rootPrincipalPasswordUri("string")
                .tgtLifetimeHours(0)
                .truststorePasswordUri("string")
                .truststoreUri("string")
                .build())
            .build())
        .softwareConfig(SoftwareConfigArgs.builder()
            .imageVersion("string")
            .optionalComponents("COMPONENT_UNSPECIFIED")
            .properties(Map.of("string", "string"))
            .build())
        .tempBucket("string")
        .workerConfig(InstanceGroupConfigArgs.builder()
            .accelerators(AcceleratorConfigArgs.builder()
                .acceleratorCount(0)
                .acceleratorTypeUri("string")
                .build())
            .diskConfig(DiskConfigArgs.builder()
                .bootDiskSizeGb(0)
                .bootDiskType("string")
                .localSsdInterface("string")
                .numLocalSsds(0)
                .build())
            .imageUri("string")
            .instanceFlexibilityPolicy(InstanceFlexibilityPolicyArgs.builder()
                .instanceSelectionList(InstanceSelectionArgs.builder()
                    .machineTypes("string")
                    .rank(0)
                    .build())
                .build())
            .machineTypeUri("string")
            .minCpuPlatform("string")
            .minNumInstances(0)
            .numInstances(0)
            .preemptibility("PREEMPTIBILITY_UNSPECIFIED")
            .startupConfig(StartupConfigArgs.builder()
                .requiredRegistrationFraction(0)
                .build())
            .build())
        .build())
    .labels(Map.of("string", "string"))
    .project("string")
    .requestId("string")
    .virtualClusterConfig(VirtualClusterConfigArgs.builder()
        .kubernetesClusterConfig(KubernetesClusterConfigArgs.builder()
            .gkeClusterConfig(GkeClusterConfigArgs.builder()
                .gkeClusterTarget("string")
                .nodePoolTarget(GkeNodePoolTargetArgs.builder()
                    .nodePool("string")
                    .roles("ROLE_UNSPECIFIED")
                    .nodePoolConfig(GkeNodePoolConfigArgs.builder()
                        .autoscaling(GkeNodePoolAutoscalingConfigArgs.builder()
                            .maxNodeCount(0)
                            .minNodeCount(0)
                            .build())
                        .config(GkeNodeConfigArgs.builder()
                            .accelerators(GkeNodePoolAcceleratorConfigArgs.builder()
                                .acceleratorCount("string")
                                .acceleratorType("string")
                                .gpuPartitionSize("string")
                                .build())
                            .bootDiskKmsKey("string")
                            .localSsdCount(0)
                            .machineType("string")
                            .minCpuPlatform("string")
                            .preemptible(false)
                            .spot(false)
                            .build())
                        .locations("string")
                        .build())
                    .build())
                .build())
            .kubernetesNamespace("string")
            .kubernetesSoftwareConfig(KubernetesSoftwareConfigArgs.builder()
                .componentVersion(Map.of("string", "string"))
                .properties(Map.of("string", "string"))
                .build())
            .build())
        .auxiliaryServicesConfig(AuxiliaryServicesConfigArgs.builder()
            .metastoreConfig(MetastoreConfigArgs.builder()
                .dataprocMetastoreService("string")
                .build())
            .sparkHistoryServerConfig(SparkHistoryServerConfigArgs.builder()
                .dataprocCluster("string")
                .build())
            .build())
        .stagingBucket("string")
        .build())
    .build());
Python

examplecluster_resource_resource_from_dataprocv1 = google_native.dataproc.v1.Cluster("exampleclusterResourceResourceFromDataprocv1",
    cluster_name="string",
    region="string",
    action_on_failed_primary_workers="string",
    config={
        "autoscaling_config": {
            "policy_uri": "string",
        },
        "auxiliary_node_groups": [{
            "node_group": {
                "roles": [google_native.dataproc.v1.NodeGroupRolesItem.ROLE_UNSPECIFIED],
                "labels": {
                    "string": "string",
                },
                "name": "string",
                "node_group_config": {
                    "accelerators": [{
                        "accelerator_count": 0,
                        "accelerator_type_uri": "string",
                    }],
                    "disk_config": {
                        "boot_disk_size_gb": 0,
                        "boot_disk_type": "string",
                        "local_ssd_interface": "string",
                        "num_local_ssds": 0,
                    },
                    "image_uri": "string",
                    "instance_flexibility_policy": {
                        "instance_selection_list": [{
                            "machine_types": ["string"],
                            "rank": 0,
                        }],
                    },
                    "machine_type_uri": "string",
                    "min_cpu_platform": "string",
                    "min_num_instances": 0,
                    "num_instances": 0,
                    "preemptibility": google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PREEMPTIBILITY_UNSPECIFIED,
                    "startup_config": {
                        "required_registration_fraction": 0,
                    },
                },
            },
            "node_group_id": "string",
        }],
        "config_bucket": "string",
        "dataproc_metric_config": {
            "metrics": [{
                "metric_source": google_native.dataproc.v1.MetricMetricSource.METRIC_SOURCE_UNSPECIFIED,
                "metric_overrides": ["string"],
            }],
        },
        "encryption_config": {
            "gce_pd_kms_key_name": "string",
            "kms_key": "string",
        },
        "endpoint_config": {
            "enable_http_port_access": False,
        },
        "gce_cluster_config": {
            "confidential_instance_config": {
                "enable_confidential_compute": False,
            },
            "internal_ip_only": False,
            "metadata": {
                "string": "string",
            },
            "network_uri": "string",
            "node_group_affinity": {
                "node_group_uri": "string",
            },
            "private_ipv6_google_access": google_native.dataproc.v1.GceClusterConfigPrivateIpv6GoogleAccess.PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED,
            "reservation_affinity": {
                "consume_reservation_type": google_native.dataproc.v1.ReservationAffinityConsumeReservationType.TYPE_UNSPECIFIED,
                "key": "string",
                "values": ["string"],
            },
            "service_account": "string",
            "service_account_scopes": ["string"],
            "shielded_instance_config": {
                "enable_integrity_monitoring": False,
                "enable_secure_boot": False,
                "enable_vtpm": False,
            },
            "subnetwork_uri": "string",
            "tags": ["string"],
            "zone_uri": "string",
        },
        "gke_cluster_config": {
            "gke_cluster_target": "string",
            "node_pool_target": [{
                "node_pool": "string",
                "roles": [google_native.dataproc.v1.GkeNodePoolTargetRolesItem.ROLE_UNSPECIFIED],
                "node_pool_config": {
                    "autoscaling": {
                        "max_node_count": 0,
                        "min_node_count": 0,
                    },
                    "config": {
                        "accelerators": [{
                            "accelerator_count": "string",
                            "accelerator_type": "string",
                            "gpu_partition_size": "string",
                        }],
                        "boot_disk_kms_key": "string",
                        "local_ssd_count": 0,
                        "machine_type": "string",
                        "min_cpu_platform": "string",
                        "preemptible": False,
                        "spot": False,
                    },
                    "locations": ["string"],
                },
            }],
        },
        "initialization_actions": [{
            "executable_file": "string",
            "execution_timeout": "string",
        }],
        "lifecycle_config": {
            "auto_delete_time": "string",
            "auto_delete_ttl": "string",
            "idle_delete_ttl": "string",
        },
        "master_config": {
            "accelerators": [{
                "accelerator_count": 0,
                "accelerator_type_uri": "string",
            }],
            "disk_config": {
                "boot_disk_size_gb": 0,
                "boot_disk_type": "string",
                "local_ssd_interface": "string",
                "num_local_ssds": 0,
            },
            "image_uri": "string",
            "instance_flexibility_policy": {
                "instance_selection_list": [{
                    "machine_types": ["string"],
                    "rank": 0,
                }],
            },
            "machine_type_uri": "string",
            "min_cpu_platform": "string",
            "min_num_instances": 0,
            "num_instances": 0,
            "preemptibility": google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PREEMPTIBILITY_UNSPECIFIED,
            "startup_config": {
                "required_registration_fraction": 0,
            },
        },
        "metastore_config": {
            "dataproc_metastore_service": "string",
        },
        "secondary_worker_config": {
            "accelerators": [{
                "accelerator_count": 0,
                "accelerator_type_uri": "string",
            }],
            "disk_config": {
                "boot_disk_size_gb": 0,
                "boot_disk_type": "string",
                "local_ssd_interface": "string",
                "num_local_ssds": 0,
            },
            "image_uri": "string",
            "instance_flexibility_policy": {
                "instance_selection_list": [{
                    "machine_types": ["string"],
                    "rank": 0,
                }],
            },
            "machine_type_uri": "string",
            "min_cpu_platform": "string",
            "min_num_instances": 0,
            "num_instances": 0,
            "preemptibility": google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PREEMPTIBILITY_UNSPECIFIED,
            "startup_config": {
                "required_registration_fraction": 0,
            },
        },
        "security_config": {
            "identity_config": {
                "user_service_account_mapping": {
                    "string": "string",
                },
            },
            "kerberos_config": {
                "cross_realm_trust_admin_server": "string",
                "cross_realm_trust_kdc": "string",
                "cross_realm_trust_realm": "string",
                "cross_realm_trust_shared_password_uri": "string",
                "enable_kerberos": False,
                "kdc_db_key_uri": "string",
                "key_password_uri": "string",
                "keystore_password_uri": "string",
                "keystore_uri": "string",
                "kms_key_uri": "string",
                "realm": "string",
                "root_principal_password_uri": "string",
                "tgt_lifetime_hours": 0,
                "truststore_password_uri": "string",
                "truststore_uri": "string",
            },
        },
        "software_config": {
            "image_version": "string",
            "optional_components": [google_native.dataproc.v1.SoftwareConfigOptionalComponentsItem.COMPONENT_UNSPECIFIED],
            "properties": {
                "string": "string",
            },
        },
        "temp_bucket": "string",
        "worker_config": {
            "accelerators": [{
                "accelerator_count": 0,
                "accelerator_type_uri": "string",
            }],
            "disk_config": {
                "boot_disk_size_gb": 0,
                "boot_disk_type": "string",
                "local_ssd_interface": "string",
                "num_local_ssds": 0,
            },
            "image_uri": "string",
            "instance_flexibility_policy": {
                "instance_selection_list": [{
                    "machine_types": ["string"],
                    "rank": 0,
                }],
            },
            "machine_type_uri": "string",
            "min_cpu_platform": "string",
            "min_num_instances": 0,
            "num_instances": 0,
            "preemptibility": google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PREEMPTIBILITY_UNSPECIFIED,
            "startup_config": {
                "required_registration_fraction": 0,
            },
        },
    },
    labels={
        "string": "string",
    },
    project="string",
    request_id="string",
    virtual_cluster_config={
        "kubernetes_cluster_config": {
            "gke_cluster_config": {
                "gke_cluster_target": "string",
                "node_pool_target": [{
                    "node_pool": "string",
                    "roles": [google_native.dataproc.v1.GkeNodePoolTargetRolesItem.ROLE_UNSPECIFIED],
                    "node_pool_config": {
                        "autoscaling": {
                            "max_node_count": 0,
                            "min_node_count": 0,
                        },
                        "config": {
                            "accelerators": [{
                                "accelerator_count": "string",
                                "accelerator_type": "string",
                                "gpu_partition_size": "string",
                            }],
                            "boot_disk_kms_key": "string",
                            "local_ssd_count": 0,
                            "machine_type": "string",
                            "min_cpu_platform": "string",
                            "preemptible": False,
                            "spot": False,
                        },
                        "locations": ["string"],
                    },
                }],
            },
            "kubernetes_namespace": "string",
            "kubernetes_software_config": {
                "component_version": {
                    "string": "string",
                },
                "properties": {
                    "string": "string",
                },
            },
        },
        "auxiliary_services_config": {
            "metastore_config": {
                "dataproc_metastore_service": "string",
            },
            "spark_history_server_config": {
                "dataproc_cluster": "string",
            },
        },
        "staging_bucket": "string",
    })
const exampleclusterResourceResourceFromDataprocv1 = new google_native.dataproc.v1.Cluster("exampleclusterResourceResourceFromDataprocv1", {
    clusterName: "string",
    region: "string",
    actionOnFailedPrimaryWorkers: "string",
    config: {
        autoscalingConfig: {
            policyUri: "string",
        },
        auxiliaryNodeGroups: [{
            nodeGroup: {
                roles: [google_native.dataproc.v1.NodeGroupRolesItem.RoleUnspecified],
                labels: {
                    string: "string",
                },
                name: "string",
                nodeGroupConfig: {
                    accelerators: [{
                        acceleratorCount: 0,
                        acceleratorTypeUri: "string",
                    }],
                    diskConfig: {
                        bootDiskSizeGb: 0,
                        bootDiskType: "string",
                        localSsdInterface: "string",
                        numLocalSsds: 0,
                    },
                    imageUri: "string",
                    instanceFlexibilityPolicy: {
                        instanceSelectionList: [{
                            machineTypes: ["string"],
                            rank: 0,
                        }],
                    },
                    machineTypeUri: "string",
                    minCpuPlatform: "string",
                    minNumInstances: 0,
                    numInstances: 0,
                    preemptibility: google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
                    startupConfig: {
                        requiredRegistrationFraction: 0,
                    },
                },
            },
            nodeGroupId: "string",
        }],
        configBucket: "string",
        dataprocMetricConfig: {
            metrics: [{
                metricSource: google_native.dataproc.v1.MetricMetricSource.MetricSourceUnspecified,
                metricOverrides: ["string"],
            }],
        },
        encryptionConfig: {
            gcePdKmsKeyName: "string",
            kmsKey: "string",
        },
        endpointConfig: {
            enableHttpPortAccess: false,
        },
        gceClusterConfig: {
            confidentialInstanceConfig: {
                enableConfidentialCompute: false,
            },
            internalIpOnly: false,
            metadata: {
                string: "string",
            },
            networkUri: "string",
            nodeGroupAffinity: {
                nodeGroupUri: "string",
            },
            privateIpv6GoogleAccess: google_native.dataproc.v1.GceClusterConfigPrivateIpv6GoogleAccess.PrivateIpv6GoogleAccessUnspecified,
            reservationAffinity: {
                consumeReservationType: google_native.dataproc.v1.ReservationAffinityConsumeReservationType.TypeUnspecified,
                key: "string",
                values: ["string"],
            },
            serviceAccount: "string",
            serviceAccountScopes: ["string"],
            shieldedInstanceConfig: {
                enableIntegrityMonitoring: false,
                enableSecureBoot: false,
                enableVtpm: false,
            },
            subnetworkUri: "string",
            tags: ["string"],
            zoneUri: "string",
        },
        gkeClusterConfig: {
            gkeClusterTarget: "string",
            nodePoolTarget: [{
                nodePool: "string",
                roles: [google_native.dataproc.v1.GkeNodePoolTargetRolesItem.RoleUnspecified],
                nodePoolConfig: {
                    autoscaling: {
                        maxNodeCount: 0,
                        minNodeCount: 0,
                    },
                    config: {
                        accelerators: [{
                            acceleratorCount: "string",
                            acceleratorType: "string",
                            gpuPartitionSize: "string",
                        }],
                        bootDiskKmsKey: "string",
                        localSsdCount: 0,
                        machineType: "string",
                        minCpuPlatform: "string",
                        preemptible: false,
                        spot: false,
                    },
                    locations: ["string"],
                },
            }],
        },
        initializationActions: [{
            executableFile: "string",
            executionTimeout: "string",
        }],
        lifecycleConfig: {
            autoDeleteTime: "string",
            autoDeleteTtl: "string",
            idleDeleteTtl: "string",
        },
        masterConfig: {
            accelerators: [{
                acceleratorCount: 0,
                acceleratorTypeUri: "string",
            }],
            diskConfig: {
                bootDiskSizeGb: 0,
                bootDiskType: "string",
                localSsdInterface: "string",
                numLocalSsds: 0,
            },
            imageUri: "string",
            instanceFlexibilityPolicy: {
                instanceSelectionList: [{
                    machineTypes: ["string"],
                    rank: 0,
                }],
            },
            machineTypeUri: "string",
            minCpuPlatform: "string",
            minNumInstances: 0,
            numInstances: 0,
            preemptibility: google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
            startupConfig: {
                requiredRegistrationFraction: 0,
            },
        },
        metastoreConfig: {
            dataprocMetastoreService: "string",
        },
        secondaryWorkerConfig: {
            accelerators: [{
                acceleratorCount: 0,
                acceleratorTypeUri: "string",
            }],
            diskConfig: {
                bootDiskSizeGb: 0,
                bootDiskType: "string",
                localSsdInterface: "string",
                numLocalSsds: 0,
            },
            imageUri: "string",
            instanceFlexibilityPolicy: {
                instanceSelectionList: [{
                    machineTypes: ["string"],
                    rank: 0,
                }],
            },
            machineTypeUri: "string",
            minCpuPlatform: "string",
            minNumInstances: 0,
            numInstances: 0,
            preemptibility: google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
            startupConfig: {
                requiredRegistrationFraction: 0,
            },
        },
        securityConfig: {
            identityConfig: {
                userServiceAccountMapping: {
                    string: "string",
                },
            },
            kerberosConfig: {
                crossRealmTrustAdminServer: "string",
                crossRealmTrustKdc: "string",
                crossRealmTrustRealm: "string",
                crossRealmTrustSharedPasswordUri: "string",
                enableKerberos: false,
                kdcDbKeyUri: "string",
                keyPasswordUri: "string",
                keystorePasswordUri: "string",
                keystoreUri: "string",
                kmsKeyUri: "string",
                realm: "string",
                rootPrincipalPasswordUri: "string",
                tgtLifetimeHours: 0,
                truststorePasswordUri: "string",
                truststoreUri: "string",
            },
        },
        softwareConfig: {
            imageVersion: "string",
            optionalComponents: [google_native.dataproc.v1.SoftwareConfigOptionalComponentsItem.ComponentUnspecified],
            properties: {
                string: "string",
            },
        },
        tempBucket: "string",
        workerConfig: {
            accelerators: [{
                acceleratorCount: 0,
                acceleratorTypeUri: "string",
            }],
            diskConfig: {
                bootDiskSizeGb: 0,
                bootDiskType: "string",
                localSsdInterface: "string",
                numLocalSsds: 0,
            },
            imageUri: "string",
            instanceFlexibilityPolicy: {
                instanceSelectionList: [{
                    machineTypes: ["string"],
                    rank: 0,
                }],
            },
            machineTypeUri: "string",
            minCpuPlatform: "string",
            minNumInstances: 0,
            numInstances: 0,
            preemptibility: google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
            startupConfig: {
                requiredRegistrationFraction: 0,
            },
        },
    },
    labels: {
        string: "string",
    },
    project: "string",
    requestId: "string",
    virtualClusterConfig: {
        kubernetesClusterConfig: {
            gkeClusterConfig: {
                gkeClusterTarget: "string",
                nodePoolTarget: [{
                    nodePool: "string",
                    roles: [google_native.dataproc.v1.GkeNodePoolTargetRolesItem.RoleUnspecified],
                    nodePoolConfig: {
                        autoscaling: {
                            maxNodeCount: 0,
                            minNodeCount: 0,
                        },
                        config: {
                            accelerators: [{
                                acceleratorCount: "string",
                                acceleratorType: "string",
                                gpuPartitionSize: "string",
                            }],
                            bootDiskKmsKey: "string",
                            localSsdCount: 0,
                            machineType: "string",
                            minCpuPlatform: "string",
                            preemptible: false,
                            spot: false,
                        },
                        locations: ["string"],
                    },
                }],
            },
            kubernetesNamespace: "string",
            kubernetesSoftwareConfig: {
                componentVersion: {
                    string: "string",
                },
                properties: {
                    string: "string",
                },
            },
        },
        auxiliaryServicesConfig: {
            metastoreConfig: {
                dataprocMetastoreService: "string",
            },
            sparkHistoryServerConfig: {
                dataprocCluster: "string",
            },
        },
        stagingBucket: "string",
    },
});
type: google-native:dataproc/v1:Cluster
properties:
    actionOnFailedPrimaryWorkers: string
    clusterName: string
    config:
        autoscalingConfig:
            policyUri: string
        auxiliaryNodeGroups:
            - nodeGroup:
                labels:
                    string: string
                name: string
                nodeGroupConfig:
                    accelerators:
                        - acceleratorCount: 0
                          acceleratorTypeUri: string
                    diskConfig:
                        bootDiskSizeGb: 0
                        bootDiskType: string
                        localSsdInterface: string
                        numLocalSsds: 0
                    imageUri: string
                    instanceFlexibilityPolicy:
                        instanceSelectionList:
                            - machineTypes:
                                - string
                              rank: 0
                    machineTypeUri: string
                    minCpuPlatform: string
                    minNumInstances: 0
                    numInstances: 0
                    preemptibility: PREEMPTIBILITY_UNSPECIFIED
                    startupConfig:
                        requiredRegistrationFraction: 0
                roles:
                    - ROLE_UNSPECIFIED
              nodeGroupId: string
        configBucket: string
        dataprocMetricConfig:
            metrics:
                - metricOverrides:
                    - string
                  metricSource: METRIC_SOURCE_UNSPECIFIED
        encryptionConfig:
            gcePdKmsKeyName: string
            kmsKey: string
        endpointConfig:
            enableHttpPortAccess: false
        gceClusterConfig:
            confidentialInstanceConfig:
                enableConfidentialCompute: false
            internalIpOnly: false
            metadata:
                string: string
            networkUri: string
            nodeGroupAffinity:
                nodeGroupUri: string
            privateIpv6GoogleAccess: PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED
            reservationAffinity:
                consumeReservationType: TYPE_UNSPECIFIED
                key: string
                values:
                    - string
            serviceAccount: string
            serviceAccountScopes:
                - string
            shieldedInstanceConfig:
                enableIntegrityMonitoring: false
                enableSecureBoot: false
                enableVtpm: false
            subnetworkUri: string
            tags:
                - string
            zoneUri: string
        gkeClusterConfig:
            gkeClusterTarget: string
            nodePoolTarget:
                - nodePool: string
                  nodePoolConfig:
                    autoscaling:
                        maxNodeCount: 0
                        minNodeCount: 0
                    config:
                        accelerators:
                            - acceleratorCount: string
                              acceleratorType: string
                              gpuPartitionSize: string
                        bootDiskKmsKey: string
                        localSsdCount: 0
                        machineType: string
                        minCpuPlatform: string
                        preemptible: false
                        spot: false
                    locations:
                        - string
                  roles:
                    - ROLE_UNSPECIFIED
        initializationActions:
            - executableFile: string
              executionTimeout: string
        lifecycleConfig:
            autoDeleteTime: string
            autoDeleteTtl: string
            idleDeleteTtl: string
        masterConfig:
            accelerators:
                - acceleratorCount: 0
                  acceleratorTypeUri: string
            diskConfig:
                bootDiskSizeGb: 0
                bootDiskType: string
                localSsdInterface: string
                numLocalSsds: 0
            imageUri: string
            instanceFlexibilityPolicy:
                instanceSelectionList:
                    - machineTypes:
                        - string
                      rank: 0
            machineTypeUri: string
            minCpuPlatform: string
            minNumInstances: 0
            numInstances: 0
            preemptibility: PREEMPTIBILITY_UNSPECIFIED
            startupConfig:
                requiredRegistrationFraction: 0
        metastoreConfig:
            dataprocMetastoreService: string
        secondaryWorkerConfig:
            accelerators:
                - acceleratorCount: 0
                  acceleratorTypeUri: string
            diskConfig:
                bootDiskSizeGb: 0
                bootDiskType: string
                localSsdInterface: string
                numLocalSsds: 0
            imageUri: string
            instanceFlexibilityPolicy:
                instanceSelectionList:
                    - machineTypes:
                        - string
                      rank: 0
            machineTypeUri: string
            minCpuPlatform: string
            minNumInstances: 0
            numInstances: 0
            preemptibility: PREEMPTIBILITY_UNSPECIFIED
            startupConfig:
                requiredRegistrationFraction: 0
        securityConfig:
            identityConfig:
                userServiceAccountMapping:
                    string: string
            kerberosConfig:
                crossRealmTrustAdminServer: string
                crossRealmTrustKdc: string
                crossRealmTrustRealm: string
                crossRealmTrustSharedPasswordUri: string
                enableKerberos: false
                kdcDbKeyUri: string
                keyPasswordUri: string
                keystorePasswordUri: string
                keystoreUri: string
                kmsKeyUri: string
                realm: string
                rootPrincipalPasswordUri: string
                tgtLifetimeHours: 0
                truststorePasswordUri: string
                truststoreUri: string
        softwareConfig:
            imageVersion: string
            optionalComponents:
                - COMPONENT_UNSPECIFIED
            properties:
                string: string
        tempBucket: string
        workerConfig:
            accelerators:
                - acceleratorCount: 0
                  acceleratorTypeUri: string
            diskConfig:
                bootDiskSizeGb: 0
                bootDiskType: string
                localSsdInterface: string
                numLocalSsds: 0
            imageUri: string
            instanceFlexibilityPolicy:
                instanceSelectionList:
                    - machineTypes:
                        - string
                      rank: 0
            machineTypeUri: string
            minCpuPlatform: string
            minNumInstances: 0
            numInstances: 0
            preemptibility: PREEMPTIBILITY_UNSPECIFIED
            startupConfig:
                requiredRegistrationFraction: 0
    labels:
        string: string
    project: string
    region: string
    requestId: string
    virtualClusterConfig:
        auxiliaryServicesConfig:
            metastoreConfig:
                dataprocMetastoreService: string
            sparkHistoryServerConfig:
                dataprocCluster: string
        kubernetesClusterConfig:
            gkeClusterConfig:
                gkeClusterTarget: string
                nodePoolTarget:
                    - nodePool: string
                      nodePoolConfig:
                        autoscaling:
                            maxNodeCount: 0
                            minNodeCount: 0
                        config:
                            accelerators:
                                - acceleratorCount: string
                                  acceleratorType: string
                                  gpuPartitionSize: string
                            bootDiskKmsKey: string
                            localSsdCount: 0
                            machineType: string
                            minCpuPlatform: string
                            preemptible: false
                            spot: false
                        locations:
                            - string
                      roles:
                        - ROLE_UNSPECIFIED
            kubernetesNamespace: string
            kubernetesSoftwareConfig:
                componentVersion:
                    string: string
                properties:
                    string: string
        stagingBucket: string
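The reference above uses placeholder values throughout. For orientation, the following is a minimal, hedged TypeScript sketch of a concrete cluster; the project, region, zone, and machine types are illustrative assumptions, not values prescribed by this reference.
import * as google_native from "@pulumi/google-native";

// A minimal sketch, assuming project "my-project" and region us-central1;
// all values are placeholders.
const minimalCluster = new google_native.dataproc.v1.Cluster("minimal-cluster", {
    clusterName: "minimal-cluster",
    region: "us-central1",
    project: "my-project",
    config: {
        gceClusterConfig: {
            zoneUri: "us-central1-a",
        },
        masterConfig: {
            numInstances: 1,
            machineTypeUri: "n1-standard-4",
        },
        workerConfig: {
            numInstances: 2,
            machineTypeUri: "n1-standard-4",
        },
    },
});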
Cluster Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
The Cluster resource accepts the following input properties:
- ClusterName string
- The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.
- Region string
- ActionOnFailedPrimaryWorkers string
- Optional. Failure action when primary worker creation fails.
- Config Pulumi.GoogleNative.Dataproc.V1.Inputs.ClusterConfig
- Optional. The cluster config for a cluster of Compute Engine Instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.
- Labels Dictionary<string, string>
- Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
- Project string
- The Google Cloud Platform project ID that the cluster belongs to.
- RequestId string
- Optional. A unique ID used to identify the request. If the server receives two CreateClusterRequests (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateClusterRequest) with the same ID, the second request is ignored, and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- VirtualClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.VirtualClusterConfig
- Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.
- ClusterName string
- The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.
- Region string
- ActionOnFailedPrimaryWorkers string
- Optional. Failure action when primary worker creation fails.
- Config ClusterConfigArgs
- Optional. The cluster config for a cluster of Compute Engine Instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.
- Labels map[string]string
- Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
- Project string
- The Google Cloud Platform project ID that the cluster belongs to.
- RequestId string
- Optional. A unique ID used to identify the request. If the server receives two CreateClusterRequests (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateClusterRequest) with the same ID, the second request is ignored, and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- VirtualClusterConfig VirtualClusterConfigArgs
- Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.
- clusterName String
- The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.
- region String
- actionOnFailedPrimaryWorkers String
- Optional. Failure action when primary worker creation fails.
- config ClusterConfig
- Optional. The cluster config for a cluster of Compute Engine Instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.
- labels Map<String,String>
- Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
- project String
- The Google Cloud Platform project ID that the cluster belongs to.
- requestId String
- Optional. A unique ID used to identify the request. If the server receives two CreateClusterRequests (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateClusterRequest) with the same ID, the second request is ignored, and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- virtualClusterConfig VirtualClusterConfig
- Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.
- clusterName string
- The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.
- region string
- actionOnFailedPrimaryWorkers string
- Optional. Failure action when primary worker creation fails.
- config ClusterConfig
- Optional. The cluster config for a cluster of Compute Engine Instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.
- labels {[key: string]: string}
- Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
- project string
- The Google Cloud Platform project ID that the cluster belongs to.
- requestId string
- Optional. A unique ID used to identify the request. If the server receives two CreateClusterRequests (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateClusterRequest) with the same ID, the second request is ignored, and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- virtualClusterConfig VirtualClusterConfig
- Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.
- cluster_name str
- The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.
- region str
- action_on_failed_primary_workers str
- Optional. Failure action when primary worker creation fails.
- config ClusterConfigArgs
- Optional. The cluster config for a cluster of Compute Engine Instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.
- labels Mapping[str, str]
- Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
- project str
- The Google Cloud Platform project ID that the cluster belongs to.
- request_id str
- Optional. A unique ID used to identify the request. If the server receives two CreateClusterRequests (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateClusterRequest) with the same ID, the second request is ignored, and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- virtual_cluster_config VirtualClusterConfigArgs
- Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.
- clusterName String
- The cluster name, which must be unique within a project. The name must start with a lowercase letter, and can contain up to 51 lowercase letters, numbers, and hyphens. It cannot end with a hyphen. The name of a deleted cluster can be reused.
- region String
- actionOnFailedPrimaryWorkers String
- Optional. Failure action when primary worker creation fails.
- config Property Map
- Optional. The cluster config for a cluster of Compute Engine Instances. Note that Dataproc may set default values, and values may change when clusters are updated. Exactly one of ClusterConfig or VirtualClusterConfig must be specified.
- labels Map<String>
- Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
- project String
- The Google Cloud Platform project ID that the cluster belongs to.
- requestId String
- Optional. A unique ID used to identify the request. If the server receives two CreateClusterRequests (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateClusterRequest) with the same ID, the second request is ignored, and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- virtualClusterConfig Property Map
- Optional. The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (https://cloud.google.com/dataproc/docs/guides/dpgke/dataproc-gke-overview). Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtual_cluster_config must be specified.
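Because requestId makes cluster creation idempotent, a common pattern is to supply a UUID. A hedged TypeScript sketch follows; the resource names are hypothetical, and note that generating a fresh UUID on every program run defeats idempotency across runs, so a stable ID is preferable in practice.
import { randomUUID } from "crypto";
import * as google_native from "@pulumi/google-native";

// The UUID satisfies the documented requestId constraints:
// letters, numbers, underscores, and hyphens, at most 40 characters.
const idempotentCluster = new google_native.dataproc.v1.Cluster("idempotent-cluster", {
    clusterName: "idempotent-cluster",
    region: "us-central1",
    requestId: randomUUID(),
    config: {},
});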
Outputs
All input properties are implicitly available as output properties. Additionally, the Cluster resource produces the following output properties:
- ClusterUuid string
- A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.
- Id string
- The provider-assigned unique ID for this managed resource.
- Metrics Pulumi.GoogleNative.Dataproc.V1.Outputs.ClusterMetricsResponse
- Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- Status Pulumi.GoogleNative.Dataproc.V1.Outputs.ClusterStatusResponse
- Cluster status.
- StatusHistory List<Pulumi.GoogleNative.Dataproc.V1.Outputs.ClusterStatusResponse>
- The previous cluster status.
- ClusterUuid string
- A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.
- Id string
- The provider-assigned unique ID for this managed resource.
- Metrics ClusterMetricsResponse
- Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- Status ClusterStatusResponse
- Cluster status.
- StatusHistory []ClusterStatusResponse
- The previous cluster status.
- clusterUuid String
- A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.
- id String
- The provider-assigned unique ID for this managed resource.
- metrics ClusterMetricsResponse
- Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- status ClusterStatusResponse
- Cluster status.
- statusHistory List<ClusterStatusResponse>
- The previous cluster status.
- clusterUuid string
- A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.
- id string
- The provider-assigned unique ID for this managed resource.
- metrics ClusterMetricsResponse
- Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- status ClusterStatusResponse
- Cluster status.
- statusHistory ClusterStatusResponse[]
- The previous cluster status.
- cluster_uuid str
- A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.
- id str
- The provider-assigned unique ID for this managed resource.
- metrics ClusterMetricsResponse
- Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- status ClusterStatusResponse
- Cluster status.
- status_history Sequence[ClusterStatusResponse]
- The previous cluster status.
- clusterUuid String
- A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.
- id String
- The provider-assigned unique ID for this managed resource.
- metrics Property Map
- Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- status Property Map
- Cluster status.
- statusHistory List<Property Map>
- The previous cluster status.
Supporting Types
AcceleratorConfig, AcceleratorConfigArgs
- AcceleratorCount int
- The number of the accelerator cards of this type exposed to this instance.
- AcceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- AcceleratorCount int
- The number of the accelerator cards of this type exposed to this instance.
- AcceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount Integer
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri String
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount number
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- accelerator_count int
- The number of the accelerator cards of this type exposed to this instance.
- accelerator_type_uri str
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount Number
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri String
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
AcceleratorConfigResponse, AcceleratorConfigResponseArgs
- AcceleratorCount int
- The number of the accelerator cards of this type exposed to this instance.
- AcceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- AcceleratorCount int
- The number of the accelerator cards of this type exposed to this instance.
- AcceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount Integer
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri String
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount number
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- accelerator_count int
- The number of the accelerator cards of this type exposed to this instance.
- accelerator_type_uri str
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount Number
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri String
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
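A hedged TypeScript sketch of attaching accelerators to worker instances; the GPU type and counts are placeholder assumptions, and the short-name form is used as the Auto Zone Exception above requires.
import * as google_native from "@pulumi/google-native";

const gpuCluster = new google_native.dataproc.v1.Cluster("gpu-cluster", {
    clusterName: "gpu-cluster",
    region: "us-central1",
    config: {
        workerConfig: {
            numInstances: 2,
            accelerators: [{
                acceleratorCount: 1,
                // Short-name form, e.g. when relying on Auto Zone Placement.
                acceleratorTypeUri: "nvidia-tesla-k80",
            }],
        },
    },
});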
AutoscalingConfig, AutoscalingConfigArgs
- PolicyUri string
- Optional. The autoscaling policy used by the cluster. Only resource names including project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
- PolicyUri string
- Optional. The autoscaling policy used by the cluster. Only resource names including project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
- policyUri String
- Optional. The autoscaling policy used by the cluster. Only resource names including project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
- policyUri string
- Optional. The autoscaling policy used by the cluster. Only resource names including project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
- policy_uri str
- Optional. The autoscaling policy used by the cluster. Only resource names including project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
- policyUri String
- Optional. The autoscaling policy used by the cluster. Only resource names including project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
AutoscalingConfigResponse, AutoscalingConfigResponseArgs
- PolicyUri string
- Optional. The autoscaling policy used by the cluster. Only resource names including project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
- PolicyUri string
- Optional. The autoscaling policy used by the cluster. Only resource names including project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
- policyUri String
- Optional. The autoscaling policy used by the cluster. Only resource names including project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
- policyUri string
- Optional. The autoscaling policy used by the cluster. Only resource names including project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
- policy_uri str
- Optional. The autoscaling policy used by the cluster. Only resource names including project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
- policyUri String
- Optional. The autoscaling policy used by the cluster. Only resource names including project ID and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
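A hedged TypeScript sketch of referencing an autoscaling policy; it assumes an existing policy named "my-policy" in the same project and region as the cluster, as the description above requires.
import * as google_native from "@pulumi/google-native";

const autoscalingCluster = new google_native.dataproc.v1.Cluster("autoscaling-cluster", {
    clusterName: "autoscaling-cluster",
    region: "us-central1",
    config: {
        autoscalingConfig: {
            // Partial-URI form of the policy resource name (placeholder values).
            policyUri: "projects/my-project/locations/us-central1/autoscalingPolicies/my-policy",
        },
    },
});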
AuxiliaryNodeGroup, AuxiliaryNodeGroupArgs
- NodeGroup Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeGroup
- Node group configuration.
- NodeGroupId string
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with an underscore or hyphen. Must consist of 3 to 33 characters.
- NodeGroup NodeGroupType
- Node group configuration.
- NodeGroupId string
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with an underscore or hyphen. Must consist of 3 to 33 characters.
- nodeGroup NodeGroup
- Node group configuration.
- nodeGroupId String
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with an underscore or hyphen. Must consist of 3 to 33 characters.
- nodeGroup NodeGroup
- Node group configuration.
- nodeGroupId string
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with an underscore or hyphen. Must consist of 3 to 33 characters.
- node_group NodeGroup
- Node group configuration.
- node_group_id str
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with an underscore or hyphen. Must consist of 3 to 33 characters.
- nodeGroup Property Map
- Node group configuration.
- nodeGroupId String
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with an underscore or hyphen. Must consist of 3 to 33 characters.
AuxiliaryNodeGroupResponse, AuxiliaryNodeGroupResponseArgs
- NodeGroup Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeGroupResponse
- Node group configuration.
- NodeGroupId string
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with an underscore or hyphen. Must consist of 3 to 33 characters.
- NodeGroup NodeGroupResponse
- Node group configuration.
- NodeGroupId string
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with an underscore or hyphen. Must consist of 3 to 33 characters.
- nodeGroup NodeGroupResponse
- Node group configuration.
- nodeGroupId String
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with an underscore or hyphen. Must consist of 3 to 33 characters.
- nodeGroup NodeGroupResponse
- Node group configuration.
- nodeGroupId string
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with an underscore or hyphen. Must consist of 3 to 33 characters.
- node_group NodeGroupResponse
- Node group configuration.
- node_group_id str
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with an underscore or hyphen. Must consist of 3 to 33 characters.
- nodeGroup Property Map
- Node group configuration.
- nodeGroupId String
- Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with an underscore or hyphen. Must consist of 3 to 33 characters.
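A hedged TypeScript sketch of declaring an auxiliary node group; the enum member Driver and the sizing values are assumptions (DRIVER is the role this resource's NodeGroupRolesItem enum models alongside ROLE_UNSPECIFIED), and the nodeGroupId follows the 3-to-33-character constraint above.
import * as google_native from "@pulumi/google-native";

const driverPoolCluster = new google_native.dataproc.v1.Cluster("driver-pool-cluster", {
    clusterName: "driver-pool-cluster",
    region: "us-central1",
    config: {
        auxiliaryNodeGroups: [{
            nodeGroupId: "driver-pool",
            nodeGroup: {
                // Assumed enum member for the DRIVER role.
                roles: [google_native.dataproc.v1.NodeGroupRolesItem.Driver],
                nodeGroupConfig: {
                    numInstances: 2,
                    machineTypeUri: "n1-standard-4",
                },
            },
        }],
    },
});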
AuxiliaryServicesConfig, AuxiliaryServicesConfigArgs      
- MetastoreConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.MetastoreConfig
- Optional. The Hive Metastore configuration for this workload.
- SparkHistoryServerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfig
- Optional. The Spark History Server configuration for the workload.
- MetastoreConfig MetastoreConfig
- Optional. The Hive Metastore configuration for this workload.
- SparkHistoryServerConfig SparkHistoryServerConfig
- Optional. The Spark History Server configuration for the workload.
- metastoreConfig MetastoreConfig
- Optional. The Hive Metastore configuration for this workload.
- sparkHistoryServerConfig SparkHistoryServerConfig
- Optional. The Spark History Server configuration for the workload.
- metastoreConfig MetastoreConfig
- Optional. The Hive Metastore configuration for this workload.
- sparkHistoryServerConfig SparkHistoryServerConfig
- Optional. The Spark History Server configuration for the workload.
- metastore_config MetastoreConfig
- Optional. The Hive Metastore configuration for this workload.
- spark_history_server_config SparkHistoryServerConfig
- Optional. The Spark History Server configuration for the workload.
- metastoreConfig Property Map
- Optional. The Hive Metastore configuration for this workload.
- sparkHistoryServerConfig Property Map
- Optional. The Spark History Server configuration for the workload.
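As a usage sketch, the fragment below wires both services into an AuxiliaryServicesConfigArgs value in C#. The field names follow the listing above; the Dataproc Metastore service and history-server cluster resource names are placeholders.
var auxServices = new GoogleNative.Dataproc.V1.Inputs.AuxiliaryServicesConfigArgs
{
    MetastoreConfig = new GoogleNative.Dataproc.V1.Inputs.MetastoreConfigArgs
    {
        // Relative resource name of an existing Dataproc Metastore service (placeholder).
        DataprocMetastoreService = "projects/my-project/locations/us-central1/services/my-metastore",
    },
    SparkHistoryServerConfig = new GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfigArgs
    {
        // Resource name of an existing Dataproc cluster hosting the Spark History Server (placeholder).
        DataprocCluster = "projects/my-project/regions/us-central1/clusters/history-server",
    },
};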
AuxiliaryServicesConfigResponse, AuxiliaryServicesConfigResponseArgs        
- MetastoreConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.MetastoreConfigResponse
- Optional. The Hive Metastore configuration for this workload.
- SparkHistoryServerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfigResponse
- Optional. The Spark History Server configuration for the workload.
- MetastoreConfig MetastoreConfigResponse
- Optional. The Hive Metastore configuration for this workload.
- SparkHistoryServerConfig SparkHistoryServerConfigResponse
- Optional. The Spark History Server configuration for the workload.
- metastoreConfig MetastoreConfigResponse
- Optional. The Hive Metastore configuration for this workload.
- sparkHistoryServerConfig SparkHistoryServerConfigResponse
- Optional. The Spark History Server configuration for the workload.
- metastoreConfig MetastoreConfigResponse
- Optional. The Hive Metastore configuration for this workload.
- sparkHistoryServerConfig SparkHistoryServerConfigResponse
- Optional. The Spark History Server configuration for the workload.
- metastore_config MetastoreConfigResponse
- Optional. The Hive Metastore configuration for this workload.
- spark_history_server_config SparkHistoryServerConfigResponse
- Optional. The Spark History Server configuration for the workload.
- metastoreConfig Property Map
- Optional. The Hive Metastore configuration for this workload.
- sparkHistoryServerConfig Property Map
- Optional. The Spark History Server configuration for the workload.
ClusterConfig, ClusterConfigArgs    
- AutoscalingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.AutoscalingConfig
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- AuxiliaryNodeGroups List<Pulumi.GoogleNative.Dataproc.V1.Inputs.AuxiliaryNodeGroup>
- Optional. The node group settings.
- ConfigBucket string
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- DataprocMetricConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.DataprocMetricConfig
- Optional. The config for Dataproc metrics.
- EncryptionConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EncryptionConfig
- Optional. Encryption settings for the cluster.
- EndpointConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EndpointConfig
- Optional. Port/endpoint configuration for this cluster.
- GceClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GceClusterConfig
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- GkeClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeClusterConfig
- Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- InitializationActions List<Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeInitializationAction>
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- LifecycleConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LifecycleConfig
- Optional. Lifecycle setting for the cluster.
- MasterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfig
- Optional. The Compute Engine config settings for the cluster's master instance.
- MetastoreConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.MetastoreConfig
- Optional. Metastore configuration.
- SecondaryWorkerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfig
- Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- SecurityConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SecurityConfig
- Optional. Security settings for the cluster.
- SoftwareConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SoftwareConfig
- Optional. The config settings for cluster software.
- TempBucket string
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- WorkerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfig
- Optional. The Compute Engine config settings for the cluster's worker instances.
- AutoscalingConfig AutoscalingConfig
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- AuxiliaryNodeGroups []AuxiliaryNodeGroup
- Optional. The node group settings.
- ConfigBucket string
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- DataprocMetricConfig DataprocMetricConfig
- Optional. The config for Dataproc metrics.
- EncryptionConfig EncryptionConfig
- Optional. Encryption settings for the cluster.
- EndpointConfig EndpointConfig
- Optional. Port/endpoint configuration for this cluster.
- GceClusterConfig GceClusterConfig
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- GkeClusterConfig GkeClusterConfig
- Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- InitializationActions []NodeInitializationAction
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- LifecycleConfig LifecycleConfig
- Optional. Lifecycle setting for the cluster.
- MasterConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for the cluster's master instance.
- MetastoreConfig MetastoreConfig
- Optional. Metastore configuration.
- SecondaryWorkerConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- SecurityConfig SecurityConfig
- Optional. Security settings for the cluster.
- SoftwareConfig SoftwareConfig
- Optional. The config settings for cluster software.
- TempBucket string
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- WorkerConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscalingConfig AutoscalingConfig
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliaryNodeGroups List<AuxiliaryNodeGroup>
- Optional. The node group settings.
- configBucket String
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataprocMetricConfig DataprocMetricConfig
- Optional. The config for Dataproc metrics.
- encryptionConfig EncryptionConfig
- Optional. Encryption settings for the cluster.
- endpointConfig EndpointConfig
- Optional. Port/endpoint configuration for this cluster.
- gceClusterConfig GceClusterConfig
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig GkeClusterConfig
- Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions List<NodeInitializationAction>
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig LifecycleConfig
- Optional. Lifecycle setting for the cluster.
- masterConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for the cluster's master instance.
- metastoreConfig MetastoreConfig
- Optional. Metastore configuration.
- secondaryWorkerConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- securityConfig SecurityConfig
- Optional. Security settings for the cluster.
- softwareConfig SoftwareConfig
- Optional. The config settings for cluster software.
- tempBucket String
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- workerConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscalingConfig AutoscalingConfig
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliaryNodeGroups AuxiliaryNodeGroup[]
- Optional. The node group settings.
- configBucket string
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataprocMetricConfig DataprocMetricConfig
- Optional. The config for Dataproc metrics.
- encryptionConfig EncryptionConfig
- Optional. Encryption settings for the cluster.
- endpointConfig EndpointConfig
- Optional. Port/endpoint configuration for this cluster.
- gceClusterConfig GceClusterConfig
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig GkeClusterConfig
- Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions NodeInitializationAction[]
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig LifecycleConfig
- Optional. Lifecycle setting for the cluster.
- masterConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for the cluster's master instance.
- metastoreConfig MetastoreConfig
- Optional. Metastore configuration.
- secondaryWorkerConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- securityConfig SecurityConfig
- Optional. Security settings for the cluster.
- softwareConfig SoftwareConfig
- Optional. The config settings for cluster software.
- tempBucket string
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- workerConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscaling_config AutoscalingConfig
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliary_node_groups Sequence[AuxiliaryNodeGroup]
- Optional. The node group settings.
- config_bucket str
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataproc_metric_config DataprocMetricConfig
- Optional. The config for Dataproc metrics.
- encryption_config EncryptionConfig
- Optional. Encryption settings for the cluster.
- endpoint_config EndpointConfig
- Optional. Port/endpoint configuration for this cluster.
- gce_cluster_config GceClusterConfig
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- gke_cluster_config GkeClusterConfig
- Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initialization_actions Sequence[NodeInitializationAction]
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycle_config LifecycleConfig
- Optional. Lifecycle setting for the cluster.
- master_config InstanceGroupConfig
- Optional. The Compute Engine config settings for the cluster's master instance.
- metastore_config MetastoreConfig
- Optional. Metastore configuration.
- secondary_worker_config InstanceGroupConfig
- Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- security_config SecurityConfig
- Optional. Security settings for the cluster.
- software_config SoftwareConfig
- Optional. The config settings for cluster software.
- temp_bucket str
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- worker_config InstanceGroupConfig
- Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscalingConfig Property Map
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliaryNodeGroups List<Property Map>
- Optional. The node group settings.
- configBucket String
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataprocMetricConfig Property Map
- Optional. The config for Dataproc metrics.
- encryptionConfig Property Map
- Optional. Encryption settings for the cluster.
- endpointConfig Property Map
- Optional. Port/endpoint configuration for this cluster.
- gceClusterConfig Property Map
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig Property Map
- Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions List<Property Map>
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig Property Map
- Optional. Lifecycle setting for the cluster.
- masterConfig Property Map
- Optional. The Compute Engine config settings for the cluster's master instance.
- metastoreConfig Property Map
- Optional. Metastore configuration.
- secondaryWorkerConfig Property Map
- Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- securityConfig Property Map
- Optional. Security settings for the cluster.
- softwareConfig Property Map
- Optional. The config settings for cluster software.
- tempBucket String
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- workerConfig Property Map
- Optional. The Compute Engine config settings for the cluster's worker instances.
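Pulling several of these fields together, the following C# sketch creates a small cluster with explicit staging and temp buckets, one master, two workers, and an initialization action. Bucket names, machine types, and the script path are placeholders.
var cluster = new GoogleNative.Dataproc.V1.Cluster("example-cluster", new()
{
    ClusterName = "example-cluster",
    Region = "us-central1",
    Config = new GoogleNative.Dataproc.V1.Inputs.ClusterConfigArgs
    {
        // Bucket names only -- not gs:// URIs.
        ConfigBucket = "my-staging-bucket",
        TempBucket = "my-temp-bucket",
        MasterConfig = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
        {
            NumInstances = 1,
            MachineTypeUri = "n2-standard-4",
        },
        WorkerConfig = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
        {
            NumInstances = 2,
            MachineTypeUri = "n2-standard-4",
        },
        InitializationActions = new[]
        {
            new GoogleNative.Dataproc.V1.Inputs.NodeInitializationActionArgs
            {
                // Runs on every node after its config completes; the script can test
                // the dataproc-role metadata as described above.
                ExecutableFile = "gs://my-bucket/scripts/setup.sh",
                ExecutionTimeout = "600s",
            },
        },
    },
});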
ClusterConfigResponse, ClusterConfigResponseArgs      
- AutoscalingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.AutoscalingConfigResponse
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- AuxiliaryNodeGroups List<Pulumi.GoogleNative.Dataproc.V1.Inputs.AuxiliaryNodeGroupResponse>
- Optional. The node group settings.
- ConfigBucket string
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- DataprocMetricConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.DataprocMetricConfigResponse
- Optional. The config for Dataproc metrics.
- EncryptionConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EncryptionConfigResponse
- Optional. Encryption settings for the cluster.
- EndpointConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EndpointConfigResponse
- Optional. Port/endpoint configuration for this cluster.
- GceClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GceClusterConfigResponse
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- GkeClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeClusterConfigResponse
- Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- InitializationActions List<Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeInitializationActionResponse>
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- LifecycleConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LifecycleConfigResponse
- Optional. Lifecycle setting for the cluster.
- MasterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for the cluster's master instance.
- MetastoreConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.MetastoreConfigResponse
- Optional. Metastore configuration.
- SecondaryWorkerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- SecurityConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SecurityConfigResponse
- Optional. Security settings for the cluster.
- SoftwareConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SoftwareConfigResponse
- Optional. The config settings for cluster software.
- TempBucket string
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- WorkerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for the cluster's worker instances.
- AutoscalingConfig AutoscalingConfigResponse
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- AuxiliaryNodeGroups []AuxiliaryNodeGroupResponse
- Optional. The node group settings.
- ConfigBucket string
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- DataprocMetricConfig DataprocMetricConfigResponse
- Optional. The config for Dataproc metrics.
- EncryptionConfig EncryptionConfigResponse
- Optional. Encryption settings for the cluster.
- EndpointConfig EndpointConfigResponse
- Optional. Port/endpoint configuration for this cluster.
- GceClusterConfig GceClusterConfigResponse
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- GkeClusterConfig GkeClusterConfigResponse
- Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- InitializationActions []NodeInitializationActionResponse
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- LifecycleConfig LifecycleConfigResponse
- Optional. Lifecycle setting for the cluster.
- MasterConfig InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for the cluster's master instance.
- MetastoreConfig MetastoreConfigResponse
- Optional. Metastore configuration.
- SecondaryWorkerConfig InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- SecurityConfig SecurityConfigResponse
- Optional. Security settings for the cluster.
- SoftwareConfig SoftwareConfigResponse
- Optional. The config settings for cluster software.
- TempBucket string
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- WorkerConfig InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscalingConfig AutoscalingConfigResponse
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliaryNodeGroups List<AuxiliaryNodeGroupResponse>
- Optional. The node group settings.
- configBucket String
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataprocMetricConfig DataprocMetricConfigResponse
- Optional. The config for Dataproc metrics.
- encryptionConfig EncryptionConfigResponse
- Optional. Encryption settings for the cluster.
- endpointConfig EndpointConfigResponse
- Optional. Port/endpoint configuration for this cluster.
- gceClusterConfig GceClusterConfigResponse
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig GkeClusterConfigResponse
- Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions List<NodeInitializationActionResponse>
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig LifecycleConfigResponse
- Optional. Lifecycle setting for the cluster.
- masterConfig InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for the cluster's master instance.
- metastoreConfig MetastoreConfigResponse
- Optional. Metastore configuration.
- secondaryWorkerConfig InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- securityConfig SecurityConfigResponse
- Optional. Security settings for the cluster.
- softwareConfig SoftwareConfigResponse
- Optional. The config settings for cluster software.
- tempBucket String
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- workerConfig InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscalingConfig AutoscalingConfigResponse
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliaryNodeGroups AuxiliaryNodeGroupResponse[]
- Optional. The node group settings.
- configBucket string
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataprocMetricConfig DataprocMetricConfigResponse
- Optional. The config for Dataproc metrics.
- encryptionConfig EncryptionConfigResponse
- Optional. Encryption settings for the cluster.
- endpointConfig EndpointConfigResponse
- Optional. Port/endpoint configuration for this cluster.
- gceClusterConfig GceClusterConfigResponse
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig GkeClusterConfigResponse
- Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions NodeInitializationActionResponse[]
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig LifecycleConfigResponse
- Optional. Lifecycle setting for the cluster.
- masterConfig InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for the cluster's master instance.
- metastoreConfig MetastoreConfigResponse
- Optional. Metastore configuration.
- secondaryWorkerConfig InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- securityConfig SecurityConfigResponse
- Optional. Security settings for the cluster.
- softwareConfig SoftwareConfigResponse
- Optional. The config settings for cluster software.
- tempBucket string
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- workerConfig InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscaling_config AutoscalingConfigResponse
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliary_node_groups Sequence[AuxiliaryNodeGroupResponse]
- Optional. The node group settings.
- config_bucket str
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataproc_metric_config DataprocMetricConfigResponse
- Optional. The config for Dataproc metrics.
- encryption_config EncryptionConfigResponse
- Optional. Encryption settings for the cluster.
- endpoint_config EndpointConfigResponse
- Optional. Port/endpoint configuration for this cluster.
- gce_cluster_config GceClusterConfigResponse
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- gke_cluster_config GkeClusterConfigResponse
- Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initialization_actions Sequence[NodeInitializationActionResponse]
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycle_config LifecycleConfigResponse
- Optional. Lifecycle setting for the cluster.
- master_config InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for the cluster's master instance.
- metastore_config MetastoreConfigResponse
- Optional. Metastore configuration.
- secondary_worker_config InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- security_config SecurityConfigResponse
- Optional. Security settings for the cluster.
- software_config SoftwareConfigResponse
- Optional. The config settings for cluster software.
- temp_bucket str
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- worker_config InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for the cluster's worker instances.
- autoscalingConfig Property Map
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- auxiliaryNodeGroups List<Property Map>
- Optional. The node group settings.
- configBucket String
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- dataprocMetricConfig Property Map
- Optional. The config for Dataproc metrics.
- encryptionConfig Property Map
- Optional. Encryption settings for the cluster.
- endpointConfig Property Map
- Optional. Port/endpoint configuration for this cluster.
- gceClusterConfig Property Map
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig Property Map
- Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions List<Property Map>
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig Property Map
- Optional. Lifecycle setting for the cluster.
- masterConfig Property Map
- Optional. The Compute Engine config settings for the cluster's master instance.
- metastoreConfig Property Map
- Optional. Metastore configuration.
- secondaryWorkerConfig Property Map
- Optional. The Compute Engine config settings for a cluster's secondary worker instances.
- securityConfig Property Map
- Optional. Security settings for the cluster.
- softwareConfig Property Map
- Optional. The config settings for cluster software.
- tempBucket String
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- workerConfig Property Map
- Optional. The Compute Engine config settings for the cluster's worker instances.
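The response variant is output-only: the service populates it, and you read it from the resource's outputs. For example, assuming the cluster resource from the sketch above, the resolved staging bucket can be read in C# like so:
// Config is an Output<ClusterConfigResponse>; Apply unwraps the resolved value.
var stagingBucket = cluster.Config.Apply(c => c.ConfigBucket);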
ClusterMetricsResponse, ClusterMetricsResponseArgs      
- HdfsMetrics Dictionary<string, string>
- The HDFS metrics.
- YarnMetrics Dictionary<string, string>
- YARN metrics.
- HdfsMetrics map[string]string
- The HDFS metrics.
- YarnMetrics map[string]string
- YARN metrics.
- hdfsMetrics Map<String,String>
- The HDFS metrics.
- yarnMetrics Map<String,String>
- YARN metrics.
- hdfsMetrics {[key: string]: string}
- The HDFS metrics.
- yarnMetrics {[key: string]: string}
- YARN metrics.
- hdfs_metrics Mapping[str, str]
- The HDFS metrics.
- yarn_metrics Mapping[str, str]
- YARN metrics.
- hdfsMetrics Map<String>
- The HDFS metrics.
- yarnMetrics Map<String>
- YARN metrics.
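Both fields are output-only maps keyed by metric name. Assuming the cluster resource from the earlier sketch, they can be read in C# as:
// Metrics is output-only; HdfsMetrics and YarnMetrics are string-to-string maps.
var hdfsMetrics = cluster.Metrics.Apply(m => m.HdfsMetrics);
var yarnMetrics = cluster.Metrics.Apply(m => m.YarnMetrics);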
ClusterStatusResponse, ClusterStatusResponseArgs      
- Detail string
- Optional. Output only. Details of cluster's state.
- State string
- The cluster's state.
- StateStartTime string
- Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- Substate string
- Additional state information that includes status reported by the agent.
- Detail string
- Optional. Output only. Details of cluster's state.
- State string
- The cluster's state.
- StateStartTime string
- Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- Substate string
- Additional state information that includes status reported by the agent.
- detail String
- Optional. Output only. Details of cluster's state.
- state String
- The cluster's state.
- stateStartTime String
- Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- substate String
- Additional state information that includes status reported by the agent.
- detail string
- Optional. Output only. Details of cluster's state.
- state string
- The cluster's state.
- stateStartTime string
- Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- substate string
- Additional state information that includes status reported by the agent.
- detail str
- Optional. Output only. Details of cluster's state.
- state str
- The cluster's state.
- state_start_time str
- Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- substate str
- Additional state information that includes status reported by the agent.
- detail String
- Optional. Output only. Details of cluster's state.
- state String
- The cluster's state.
- stateStartTime String
- Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- substate String
- Additional state information that includes status reported by the agent.
ConfidentialInstanceConfig, ConfidentialInstanceConfigArgs      
- EnableConfidentialCompute bool
- Optional. Defines whether the instance should have confidential compute enabled.
- EnableConfidentialCompute bool
- Optional. Defines whether the instance should have confidential compute enabled.
- enableConfidentialCompute Boolean
- Optional. Defines whether the instance should have confidential compute enabled.
- enableConfidentialCompute boolean
- Optional. Defines whether the instance should have confidential compute enabled.
- enable_confidential_compute bool
- Optional. Defines whether the instance should have confidential compute enabled.
- enableConfidentialCompute Boolean
- Optional. Defines whether the instance should have confidential compute enabled.
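Confidential compute is enabled inside the cluster's shared Compute Engine config. A minimal C# sketch follows; note that Confidential VMs have their own machine-type and image requirements, which this fragment does not encode.
var gceConfig = new GoogleNative.Dataproc.V1.Inputs.GceClusterConfigArgs
{
    ConfidentialInstanceConfig = new GoogleNative.Dataproc.V1.Inputs.ConfidentialInstanceConfigArgs
    {
        EnableConfidentialCompute = true,
    },
};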
ConfidentialInstanceConfigResponse, ConfidentialInstanceConfigResponseArgs        
- EnableConfidentialCompute bool
- Optional. Defines whether the instance should have confidential compute enabled.
- EnableConfidentialCompute bool
- Optional. Defines whether the instance should have confidential compute enabled.
- enableConfidentialCompute Boolean
- Optional. Defines whether the instance should have confidential compute enabled.
- enableConfidentialCompute boolean
- Optional. Defines whether the instance should have confidential compute enabled.
- enable_confidential_compute bool
- Optional. Defines whether the instance should have confidential compute enabled.
- enableConfidentialCompute Boolean
- Optional. Defines whether the instance should have confidential compute enabled.
DataprocMetricConfig, DataprocMetricConfigArgs      
- Metrics List<Pulumi.GoogleNative.Dataproc.V1.Inputs.Metric>
- Metrics sources to enable.
- Metrics []Metric
- Metrics sources to enable.
- metrics List<Metric>
- Metrics sources to enable.
- metrics Metric[]
- Metrics sources to enable.
- metrics Sequence[Metric]
- Metrics sources to enable.
- metrics List<Property Map>
- Metrics sources to enable.
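A C# sketch that enables one metric source is shown below. The MetricMetricSource enum name follows the SDK's usual generated-enum pattern but is an assumption here; additional sources can be listed alongside.
var metricConfig = new GoogleNative.Dataproc.V1.Inputs.DataprocMetricConfigArgs
{
    Metrics = new[]
    {
        new GoogleNative.Dataproc.V1.Inputs.MetricArgs
        {
            // Collect Spark metrics from the cluster.
            MetricSource = GoogleNative.Dataproc.V1.MetricMetricSource.Spark,
        },
    },
};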
DataprocMetricConfigResponse, DataprocMetricConfigResponseArgs        
- Metrics List<Pulumi.GoogleNative.Dataproc.V1.Inputs.MetricResponse>
- Metrics sources to enable.
- Metrics []MetricResponse
- Metrics sources to enable.
- metrics List<MetricResponse>
- Metrics sources to enable.
- metrics MetricResponse[]
- Metrics sources to enable.
- metrics Sequence[MetricResponse]
- Metrics sources to enable.
- metrics List<Property Map>
- Metrics sources to enable.
DiskConfig, DiskConfigArgs
- BootDiskSizeGb int
- Optional. Size in GB of the boot disk (default is 500GB).
- BootDiskType string
- Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- LocalSsdInterface string
- Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- NumLocalSsds int
- Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- BootDiskSizeGb int
- Optional. Size in GB of the boot disk (default is 500GB).
- BootDiskType string
- Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- LocalSsdInterface string
- Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- NumLocalSsds int
- Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- bootDiskSizeGb Integer
- Optional. Size in GB of the boot disk (default is 500GB).
- bootDiskType String
- Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- localSsdInterface String
- Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- numLocalSsds Integer
- Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- bootDiskSizeGb number
- Optional. Size in GB of the boot disk (default is 500GB).
- bootDiskType string
- Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- localSsdInterface string
- Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- numLocalSsds number
- Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- boot_disk_size_gb int
- Optional. Size in GB of the boot disk (default is 500GB).
- boot_disk_type str
- Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- local_ssd_interface str
- Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- num_local_ssds int
- Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- bootDiskSizeGb Number
- Optional. Size in GB of the boot disk (default is 500GB).
- bootDiskType String
- Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- localSsdInterface String
- Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- numLocalSsds Number
- Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
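To ground these fields, here is a minimal C# sketch in the same placeholder style as the constructor example above. It assumes DiskConfig is attached through the master group's InstanceGroupConfigArgs (that type and its NumInstances field are not documented in this section, so treat them as assumptions); the project, region, and disk values are illustrative, not defaults:

var clusterWithDisks = new GoogleNative.Dataproc.V1.Cluster("clusterWithDisks", new()
{
    ClusterName = "cluster-with-disks",
    Region = "us-central1",
    Config = new GoogleNative.Dataproc.V1.Inputs.ClusterConfigArgs
    {
        MasterConfig = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
        {
            NumInstances = 1,
            DiskConfig = new GoogleNative.Dataproc.V1.Inputs.DiskConfigArgs
            {
                BootDiskType = "pd-ssd",     // "pd-standard", "pd-balanced", or "pd-ssd"
                BootDiskSizeGb = 100,        // boot disk size in GB; the service defaults to 500
                NumLocalSsds = 1,            // 0-8; local SSDs hold HDFS and runtime bulk data
                LocalSsdInterface = "nvme",  // "scsi" (default) or "nvme"
            },
        },
    },
});

The same DiskConfigArgs shape applies to worker and secondary worker groups.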
DiskConfigResponse, DiskConfigResponseArgs
- BootDiskSizeGb int
- Optional. Size in GB of the boot disk (default is 500GB).
- BootDiskType string
- Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- LocalSsdInterface string
- Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- NumLocalSsds int
- Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- BootDiskSizeGb int
- Optional. Size in GB of the boot disk (default is 500GB).
- BootDiskType string
- Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- LocalSsdInterface string
- Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- NumLocalSsds int
- Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- bootDiskSizeGb Integer
- Optional. Size in GB of the boot disk (default is 500GB).
- bootDiskType String
- Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- localSsdInterface String
- Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- numLocalSsds Integer
- Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- bootDiskSizeGb number
- Optional. Size in GB of the boot disk (default is 500GB).
- bootDiskType string
- Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- localSsdInterface string
- Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- numLocalSsds number
- Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- boot_disk_size_gb int
- Optional. Size in GB of the boot disk (default is 500GB).
- boot_disk_type str
- Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- local_ssd_interface str
- Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- num_local_ssds int
- Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
- bootDiskSizeGb Number
- Optional. Size in GB of the boot disk (default is 500GB).
- bootDiskType String
- Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- localSsdInterface String
- Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
- numLocalSsds Number
- Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
EncryptionConfig, EncryptionConfigArgs
- GcePdKmsKeyName string
- Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- KmsKey string
- Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- GcePdKmsKeyName string
- Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- KmsKey string
- Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- gcePdKmsKeyName String
- Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kmsKey String
- Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- gcePdKmsKeyName string
- Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kmsKey string
- Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- gce_pd_kms_key_name str
- Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kms_key str
- Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- gcePdKmsKeyName String
- Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kmsKey String
- Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
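As a sketch, a customer-managed key is wired in through the ClusterConfigArgs shown in the constructor example; the key path below is a placeholder and assumes the Dataproc service agent already has Encrypter/Decrypter rights on the key:

EncryptionConfig = new GoogleNative.Dataproc.V1.Inputs.EncryptionConfigArgs
{
    // Placeholder: use the full resource name of an existing Cloud KMS key.
    GcePdKmsKeyName = "projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key",
},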
EncryptionConfigResponse, EncryptionConfigResponseArgs
- GcePdKmsKeyName string
- Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- KmsKey string
- Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- GcePdKmsKeyName string
- Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- KmsKey string
- Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- gcePdKmsKeyName String
- Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kmsKey String
- Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- gcePdKmsKeyName string
- Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kmsKey string
- Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- gce_pd_kms_key_name str
- Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kms_key str
- Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
- gcePdKmsKeyName String
- Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- kmsKey String
- Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
EndpointConfig, EndpointConfigArgs
- EnableHttpPortAccess bool
- Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- EnableHttpPortAccess bool
- Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- enableHttpPortAccess Boolean
- Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- enableHttpPortAccess boolean
- Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- enable_http_port_access bool
- Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- enableHttpPortAccess Boolean
- Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
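Enabling this access is a single flag inside ClusterConfigArgs; a minimal sketch in the same style as the examples above:

EndpointConfig = new GoogleNative.Dataproc.V1.Inputs.EndpointConfigArgs
{
    // Exposes cluster web UIs (YARN, Spark history server, etc.) over HTTPS.
    EnableHttpPortAccess = true,
},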
EndpointConfigResponse, EndpointConfigResponseArgs
- EnableHttpPortAccess bool
- Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- HttpPorts Dictionary<string, string>
- The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- EnableHttpPortAccess bool
- Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- HttpPorts map[string]string
- The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enableHttpPortAccess Boolean
- Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- httpPorts Map<String,String>
- The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enableHttpPortAccess boolean
- Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- httpPorts {[key: string]: string}
- The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enable_http_port_access bool
- Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- http_ports Mapping[str, str]
- The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enableHttpPortAccess Boolean
- Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- httpPorts Map<String>
- The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
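Since HttpPorts is output-only, a common pattern is to read it off the cluster after creation. A hedged sketch, assuming the clusterWithDisks variable from the earlier example and the standard Output.Apply accessor:

// Resolves to a map of UI name -> gateway URL once the cluster is up;
// it stays empty unless EnableHttpPortAccess was set to true on the input config.
var webUiUrls = clusterWithDisks.Config.Apply(config => config.EndpointConfig.HttpPorts);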
GceClusterConfig, GceClusterConfigArgs
- ConfidentialInstanceConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ConfidentialInstanceConfig
- Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- InternalIpOnly bool
- Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- Metadata Dictionary<string, string>
- Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- NetworkUri string
- Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- NodeGroupAffinity Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeGroupAffinity
- Optional. Node Group Affinity for sole-tenant clusters.
- PrivateIpv6GoogleAccess Pulumi.GoogleNative.Dataproc.V1.GceClusterConfigPrivateIpv6GoogleAccess
- Optional. The type of IPv6 access for a cluster.
- ReservationAffinity Pulumi.GoogleNative.Dataproc.V1.Inputs.ReservationAffinity
- Optional. Reservation Affinity for consuming Zonal reservation.
- ServiceAccount string
- Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- ServiceAccountScopes List<string>
- Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- ShieldedInstanceConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ShieldedInstanceConfig
- Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- SubnetworkUri string
- Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- Tags List<string>
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- ZoneUri string
- Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- ConfidentialInstanceConfig ConfidentialInstanceConfig
- Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- InternalIpOnly bool
- Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- Metadata map[string]string
- Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- NetworkUri string
- Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- NodeGroupAffinity NodeGroupAffinity
- Optional. Node Group Affinity for sole-tenant clusters.
- PrivateIpv6GoogleAccess GceClusterConfigPrivateIpv6GoogleAccess
- Optional. The type of IPv6 access for a cluster.
- ReservationAffinity ReservationAffinity
- Optional. Reservation Affinity for consuming Zonal reservation.
- ServiceAccount string
- Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- ServiceAccountScopes []string
- Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- ShieldedInstanceConfig ShieldedInstanceConfig
- Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- SubnetworkUri string
- Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- Tags []string
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- ZoneUri string
- Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- confidentialInstanceConfig ConfidentialInstanceConfig
- Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internalIpOnly Boolean
- Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Map<String,String>
- Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- networkUri String
- Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- nodeGroupAffinity NodeGroupAffinity
- Optional. Node Group Affinity for sole-tenant clusters.
- privateIpv6GoogleAccess GceClusterConfigPrivateIpv6GoogleAccess
- Optional. The type of IPv6 access for a cluster.
- reservationAffinity ReservationAffinity
- Optional. Reservation Affinity for consuming Zonal reservation.
- serviceAccount String
- Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- serviceAccountScopes List<String>
- Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shieldedInstanceConfig ShieldedInstanceConfig
- Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetworkUri String
- Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- tags List<String>
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zoneUri String
- Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- confidentialInstanceConfig ConfidentialInstanceConfig
- Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internalIpOnly boolean
- Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata {[key: string]: string}
- Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- networkUri string
- Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- nodeGroupAffinity NodeGroupAffinity
- Optional. Node Group Affinity for sole-tenant clusters.
- privateIpv6GoogleAccess GceClusterConfigPrivateIpv6GoogleAccess
- Optional. The type of IPv6 access for a cluster.
- reservationAffinity ReservationAffinity
- Optional. Reservation Affinity for consuming Zonal reservation.
- serviceAccount string
- Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- serviceAccountScopes string[]
- Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shieldedInstanceConfig ShieldedInstanceConfig
- Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetworkUri string
- Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- tags string[]
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zoneUri string
- Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- confidential_instance_config ConfidentialInstanceConfig
- Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internal_ip_only bool
- Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Mapping[str, str]
- Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- network_uri str
- Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- node_group_affinity NodeGroupAffinity
- Optional. Node Group Affinity for sole-tenant clusters.
- private_ipv6_google_access GceClusterConfigPrivateIpv6GoogleAccess
- Optional. The type of IPv6 access for a cluster.
- reservation_affinity ReservationAffinity
- Optional. Reservation Affinity for consuming Zonal reservation.
- service_account str
- Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- service_account_scopes Sequence[str]
- Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shielded_instance_config ShieldedInstanceConfig
- Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetwork_uri str
- Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- tags Sequence[str]
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zone_uri str
- Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- confidentialInstanceConfig Property Map
- Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internalIpOnly Boolean
- Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Map<String>
- Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- networkUri String
- Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- nodeGroupAffinity Property Map
- Optional. Node Group Affinity for sole-tenant clusters.
- privateIpv6GoogleAccess "PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED" | "INHERIT_FROM_SUBNETWORK" | "OUTBOUND" | "BIDIRECTIONAL"
- Optional. The type of IPv6 access for a cluster.
- reservationAffinity Property Map
- Optional. Reservation Affinity for consuming Zonal reservation.
- serviceAccount String
- Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- serviceAccountScopes List<String>
- Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shieldedInstanceConfig Property Map
- Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetworkUri String
- Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- tags List<String>
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zoneUri String
- Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
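Pulling a few of these fields together, here is a hedged sketch of a private-IP network setup inside ClusterConfigArgs; the subnetwork, service account, tag, and zone values are placeholders, and remember that NetworkUri and SubnetworkUri are mutually exclusive:

GceClusterConfig = new GoogleNative.Dataproc.V1.Inputs.GceClusterConfigArgs
{
    InternalIpOnly = true,  // instances get internal IPs only; dependencies must be privately reachable
    SubnetworkUri = "projects/my-project/regions/us-central1/subnetworks/sub0",
    ServiceAccount = "dataproc-vm@my-project.iam.gserviceaccount.com",
    Tags = new[] { "dataproc", "allow-internal" },
    ZoneUri = "us-central1-b",  // short name form; full URLs and partial URIs also work
},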
GceClusterConfigPrivateIpv6GoogleAccess, GceClusterConfigPrivateIpv6GoogleAccessArgs
- PrivateIpv6GoogleAccessUnspecified
- PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED. If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
- InheritFromSubnetwork
- INHERIT_FROM_SUBNETWORK. Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
- Outbound
- OUTBOUND. Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
- Bidirectional
- BIDIRECTIONAL. Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
- GceClusterConfigPrivateIpv6GoogleAccessPrivateIpv6GoogleAccessUnspecified
- PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED. If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
- GceClusterConfigPrivateIpv6GoogleAccessInheritFromSubnetwork
- INHERIT_FROM_SUBNETWORK. Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
- GceClusterConfigPrivateIpv6GoogleAccessOutbound
- OUTBOUND. Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
- GceClusterConfigPrivateIpv6GoogleAccessBidirectional
- BIDIRECTIONAL. Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
- PrivateIpv6GoogleAccessUnspecified
- PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED. If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
- InheritFromSubnetwork
- INHERIT_FROM_SUBNETWORK. Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
- Outbound
- OUTBOUND. Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
- Bidirectional
- BIDIRECTIONAL. Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
- PrivateIpv6GoogleAccessUnspecified
- PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED. If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
- InheritFromSubnetwork
- INHERIT_FROM_SUBNETWORK. Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
- Outbound
- OUTBOUND. Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
- Bidirectional
- BIDIRECTIONAL. Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
- PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED
- PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED. If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
- INHERIT_FROM_SUBNETWORK
- INHERIT_FROM_SUBNETWORK. Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
- OUTBOUND
- OUTBOUND. Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
- BIDIRECTIONAL
- BIDIRECTIONAL. Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
- "PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED"
- PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED. If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
- "INHERIT_FROM_SUBNETWORK"
- INHERIT_FROM_SUBNETWORK. Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
- "OUTBOUND"
- OUTBOUND. Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
- "BIDIRECTIONAL"
- BIDIRECTIONAL. Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
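In the strongly typed SDKs this value is set through the generated enum rather than a raw string. A minimal C# sketch inside GceClusterConfigArgs, using the enum member names listed above:

GceClusterConfig = new GoogleNative.Dataproc.V1.Inputs.GceClusterConfigArgs
{
    // Outbound-only private IPv6 access to Google Services from cluster VMs.
    PrivateIpv6GoogleAccess = GoogleNative.Dataproc.V1.GceClusterConfigPrivateIpv6GoogleAccess.Outbound,
},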
GceClusterConfigResponse, GceClusterConfigResponseArgs        
- ConfidentialInstance Pulumi.Config Google Native. Dataproc. V1. Inputs. Confidential Instance Config Response 
- Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- InternalIp boolOnly 
- Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- Metadata Dictionary<string, string>
- Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- NetworkUri string
- Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- NodeGroup Pulumi.Affinity Google Native. Dataproc. V1. Inputs. Node Group Affinity Response 
- Optional. Node Group Affinity for sole-tenant clusters.
- PrivateIpv6Google stringAccess 
- Optional. The type of IPv6 access for a cluster.
- ReservationAffinity Pulumi.Google Native. Dataproc. V1. Inputs. Reservation Affinity Response 
- Optional. Reservation Affinity for consuming Zonal reservation.
- ServiceAccount string
- Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- ServiceAccount List<string>Scopes 
- Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- ShieldedInstance Pulumi.Config Google Native. Dataproc. V1. Inputs. Shielded Instance Config Response 
- Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- SubnetworkUri string
- Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- List<string>
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- ZoneUri string
- Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- ConfidentialInstance ConfidentialConfig Instance Config Response 
- Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- InternalIp boolOnly 
- Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- Metadata map[string]string
- Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- NetworkUri string
- Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- NodeGroup NodeAffinity Group Affinity Response 
- Optional. Node Group Affinity for sole-tenant clusters.
- PrivateIpv6Google stringAccess 
- Optional. The type of IPv6 access for a cluster.
- ReservationAffinity ReservationAffinity Response 
- Optional. Reservation Affinity for consuming Zonal reservation.
- ServiceAccount string
- Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- ServiceAccount []stringScopes 
- Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- ShieldedInstance ShieldedConfig Instance Config Response 
- Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- SubnetworkUri string
- Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- []string
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- ZoneUri string
- Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- confidentialInstanceConfig ConfidentialInstanceConfigResponse
- Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internalIpOnly Boolean
- Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork-enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Map<String,String>
- Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- networkUri String
- Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- nodeGroupAffinity NodeGroupAffinityResponse
- Optional. Node Group Affinity for sole-tenant clusters.
- privateIpv6GoogleAccess String
- Optional. The type of IPv6 access for a cluster.
- reservationAffinity ReservationAffinityResponse
- Optional. Reservation Affinity for consuming zonal reservations.
- serviceAccount String
- Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- serviceAccountScopes List<String>
- Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shieldedInstanceConfig ShieldedInstanceConfigResponse
- Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetworkUri String
- Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- tags List<String>
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zoneUri String
- Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- confidentialInstanceConfig ConfidentialInstanceConfigResponse
- Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internalIpOnly boolean
- Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork-enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata {[key: string]: string}
- Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- networkUri string
- Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- nodeGroupAffinity NodeGroupAffinityResponse
- Optional. Node Group Affinity for sole-tenant clusters.
- privateIpv6GoogleAccess string
- Optional. The type of IPv6 access for a cluster.
- reservationAffinity ReservationAffinityResponse
- Optional. Reservation Affinity for consuming zonal reservations.
- serviceAccount string
- Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- serviceAccountScopes string[]
- Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shieldedInstanceConfig ShieldedInstanceConfigResponse
- Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetworkUri string
- Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- tags string[]
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zoneUri string
- Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- confidential_instance_config ConfidentialInstanceConfigResponse
- Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internal_ip_only bool
- Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork-enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Mapping[str, str]
- Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- network_uri str
- Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- node_group_affinity NodeGroupAffinityResponse
- Optional. Node Group Affinity for sole-tenant clusters.
- private_ipv6_google_access str
- Optional. The type of IPv6 access for a cluster.
- reservation_affinity ReservationAffinityResponse
- Optional. Reservation Affinity for consuming zonal reservations.
- service_account str
- Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- service_account_scopes Sequence[str]
- Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shielded_instance_config ShieldedInstanceConfigResponse
- Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetwork_uri str
- Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- tags Sequence[str]
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zone_uri str
- Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
- confidentialInstanceConfig Property Map
- Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
- internalIpOnly Boolean
- Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork-enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Map<String>
- Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- networkUri String
- Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
- nodeGroupAffinity Property Map
- Optional. Node Group Affinity for sole-tenant clusters.
- privateIpv6GoogleAccess String
- Optional. The type of IPv6 access for a cluster.
- reservationAffinity Property Map
- Optional. Reservation Affinity for consuming zonal reservations.
- serviceAccount String
- Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- serviceAccountScopes List<String>
- Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shieldedInstanceConfig Property Map
- Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetworkUri String
- Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
- tags List<String>
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zoneUri String
- Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
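For orientation, here is a minimal TypeScript sketch of how these GceClusterConfig fields fit together on a cluster; the project, region, subnetwork, and service account values are illustrative placeholders, not defaults or recommendations.
import * as google_native from "@pulumi/google-native";

const cluster = new google_native.dataproc.v1.Cluster("example-cluster", {
    region: "us-central1",
    clusterName: "example-cluster",
    config: {
        gceClusterConfig: {
            // subnetworkUri and networkUri are mutually exclusive; a short
            // name, partial URI, or full URL are all accepted.
            subnetworkUri: "projects/my-project/regions/us-central1/subnetworks/sub0",
            // internalIpOnly requires a subnetwork, and all off-cluster
            // dependencies must be reachable without external IPs.
            internalIpOnly: true,
            serviceAccount: "dataproc-vm@my-project.iam.gserviceaccount.com",
            serviceAccountScopes: ["https://www.googleapis.com/auth/cloud-platform"],
            tags: ["dataproc"],
        },
    },
});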
GkeClusterConfig, GkeClusterConfigArgs      
- GkeClusterTarget string
- Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- NamespacedGkeDeploymentTarget Pulumi.GoogleNative.Dataproc.V1.Inputs.NamespacedGkeDeploymentTarget
- Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- NodePoolTarget List<Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolTarget>
- Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- GkeClusterTarget string
- Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- NamespacedGkeDeploymentTarget NamespacedGkeDeploymentTarget
- Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- NodePoolTarget []GkeNodePoolTarget
- Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gkeClusterTarget String
- Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespacedGkeDeploymentTarget NamespacedGkeDeploymentTarget
- Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- nodePoolTarget List<GkeNodePoolTarget>
- Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gkeClusterTarget string
- Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespacedGkeDeploymentTarget NamespacedGkeDeploymentTarget
- Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- nodePoolTarget GkeNodePoolTarget[]
- Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gke_cluster_target str
- Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespaced_gke_deployment_target NamespacedGkeDeploymentTarget
- Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- node_pool_target Sequence[GkeNodePoolTarget]
- Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gkeClusterTarget String
- Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespacedGkeDeploymentTarget Property Map
- Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- nodePoolTarget List<Property Map>
- Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
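As a rough illustration, the following TypeScript fragment builds a GkeClusterConfig value with one DEFAULT node pool; the project, location, cluster, and node pool names are placeholders, and the typed-args import path reflects the usual @pulumi/google-native layout.
import * as google_native from "@pulumi/google-native";

const gkeClusterConfig: google_native.types.input.dataproc.v1.GkeClusterConfigArgs = {
    gkeClusterTarget: "projects/my-project/locations/us-central1/clusters/my-gke-cluster",
    nodePoolTarget: [{
        // At least one node pool must carry the DEFAULT role.
        nodePool: "projects/my-project/locations/us-central1/clusters/my-gke-cluster/nodePools/dataproc-pool",
        roles: ["DEFAULT"],
    }],
};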
GkeClusterConfigResponse, GkeClusterConfigResponseArgs        
- GkeClusterTarget string
- Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- NamespacedGkeDeploymentTarget Pulumi.GoogleNative.Dataproc.V1.Inputs.NamespacedGkeDeploymentTargetResponse
- Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- NodePoolTarget List<Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolTargetResponse>
- Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- GkeClusterTarget string
- Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- NamespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse
- Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- NodePoolTarget []GkeNodePoolTargetResponse
- Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gkeClusterTarget String
- Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse
- Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- nodePoolTarget List<GkeNodePoolTargetResponse>
- Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gkeClusterTarget string
- Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse
- Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- nodePoolTarget GkeNodePoolTargetResponse[]
- Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gke_cluster_target str
- Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespaced_gke_deployment_target NamespacedGkeDeploymentTargetResponse
- Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- node_pool_target Sequence[GkeNodePoolTargetResponse]
- Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
- gkeClusterTarget String
- Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- namespacedGkeDeploymentTarget Property Map
- Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.
- nodePoolTarget List<Property Map>
- Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
GkeNodeConfig, GkeNodeConfigArgs      
- Accelerators List<Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAcceleratorConfig>
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- BootDiskKmsKey string
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- LocalSsdCount int
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- MachineType string
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- MinCpuPlatform string
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- Preemptible bool
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- Spot bool
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- Accelerators []GkeNodePoolAcceleratorConfig
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- BootDiskKmsKey string
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- LocalSsdCount int
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- MachineType string
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- MinCpuPlatform string
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- Preemptible bool
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- Spot bool
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- accelerators List<GkeNodePoolAcceleratorConfig>
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- bootDiskKmsKey String
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- localSsdCount Integer
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- machineType String
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- minCpuPlatform String
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- preemptible Boolean
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- spot Boolean
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- accelerators GkeNodePoolAcceleratorConfig[]
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- bootDiskKmsKey string
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- localSsdCount number
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- machineType string
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- minCpuPlatform string
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- preemptible boolean
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- spot boolean
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- accelerators Sequence[GkeNodePoolAcceleratorConfig]
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- boot_disk_kms_key str
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- local_ssd_count int
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- machine_type str
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- min_cpu_platform str
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- preemptible bool
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- spot bool
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- accelerators List<Property Map>
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- bootDiskKmsKey String
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- localSsdCount Number
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- machineType String
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- minCpuPlatform String
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- preemptible Boolean
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- spot Boolean
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
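A short TypeScript sketch of a GkeNodeConfig value follows; the machine type, CPU platform, and KMS key path are placeholders chosen purely for illustration.
import * as google_native from "@pulumi/google-native";

const nodeConfig: google_native.types.input.dataproc.v1.GkeNodeConfigArgs = {
    machineType: "n1-standard-4",
    minCpuPlatform: "Intel Haswell",
    localSsdCount: 1,
    // Spot (and legacy preemptible) nodes cannot serve the CONTROLLER role.
    spot: true,
    bootDiskKmsKey: "projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key",
};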
GkeNodeConfigResponse, GkeNodeConfigResponseArgs        
- Accelerators List<Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAcceleratorConfigResponse>
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- BootDiskKmsKey string
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- LocalSsdCount int
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- MachineType string
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- MinCpuPlatform string
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- Preemptible bool
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- Spot bool
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- Accelerators []GkeNodePoolAcceleratorConfigResponse
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- BootDiskKmsKey string
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- LocalSsdCount int
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- MachineType string
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- MinCpuPlatform string
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- Preemptible bool
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- Spot bool
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- accelerators List<GkeNodePoolAcceleratorConfigResponse>
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- bootDiskKmsKey String
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- localSsdCount Integer
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- machineType String
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- minCpuPlatform String
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- preemptible Boolean
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- spot Boolean
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- accelerators GkeNodePoolAcceleratorConfigResponse[]
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- bootDiskKmsKey string
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- localSsdCount number
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- machineType string
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- minCpuPlatform string
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- preemptible boolean
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- spot boolean
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- accelerators Sequence[GkeNodePoolAcceleratorConfigResponse]
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- boot_disk_kms_key str
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- local_ssd_count int
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- machine_type str
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- min_cpu_platform str
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- preemptible bool
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- spot bool
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- accelerators List<Property Map>
- Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
- bootDiskKmsKey String
- Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
- localSsdCount Number
- Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
- machineType String
- Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
- minCpuPlatform String
- Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
- preemptible Boolean
- Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
- spot Boolean
- Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
GkeNodePoolAcceleratorConfig, GkeNodePoolAcceleratorConfigArgs          
- AcceleratorCount string
- The number of accelerator cards exposed to an instance.
- AcceleratorType string
- The accelerator type resource name (see GPUs on Compute Engine).
- GpuPartitionSize string
- Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
- AcceleratorCount string
- The number of accelerator cards exposed to an instance.
- AcceleratorType string
- The accelerator type resource name (see GPUs on Compute Engine).
- GpuPartitionSize string
- Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
- acceleratorCount String
- The number of accelerator cards exposed to an instance.
- acceleratorType String
- The accelerator type resource name (see GPUs on Compute Engine).
- gpuPartitionSize String
- Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
- acceleratorCount string
- The number of accelerator cards exposed to an instance.
- acceleratorType string
- The accelerator type resource name (see GPUs on Compute Engine).
- gpuPartitionSize string
- Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
- accelerator_count str
- The number of accelerator cards exposed to an instance.
- accelerator_type str
- The accelerator type resource name (see GPUs on Compute Engine).
- gpu_partition_size str
- Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
- acceleratorCount String
- The number of accelerator cards exposed to an instance.
- acceleratorType String
- The accelerator type resource name (see GPUs on Compute Engine).
- gpuPartitionSize String
- Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
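Note that acceleratorCount is typed as a string (an int64 carried as text). A hedged TypeScript sketch with placeholder accelerator and partition values:
import * as google_native from "@pulumi/google-native";

const accelerator: google_native.types.input.dataproc.v1.GkeNodePoolAcceleratorConfigArgs = {
    acceleratorCount: "1", // int64 fields are passed as strings in this SDK
    acceleratorType: "nvidia-tesla-a100",
    gpuPartitionSize: "1g.5gb", // a MIG partition size from the NVIDIA guide
};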
GkeNodePoolAcceleratorConfigResponse, GkeNodePoolAcceleratorConfigResponseArgs            
- AcceleratorCount string
- The number of accelerator cards exposed to an instance.
- AcceleratorType string
- The accelerator type resource name (see GPUs on Compute Engine).
- GpuPartitionSize string
- Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
- AcceleratorCount string
- The number of accelerator cards exposed to an instance.
- AcceleratorType string
- The accelerator type resource name (see GPUs on Compute Engine).
- GpuPartitionSize string
- Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
- acceleratorCount String
- The number of accelerator cards exposed to an instance.
- acceleratorType String
- The accelerator type resource name (see GPUs on Compute Engine).
- gpuPartitionSize String
- Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
- acceleratorCount string
- The number of accelerator cards exposed to an instance.
- acceleratorType string
- The accelerator type resource name (see GPUs on Compute Engine).
- gpuPartitionSize string
- Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
- accelerator_count str
- The number of accelerator cards exposed to an instance.
- accelerator_type str
- The accelerator type resource name (see GPUs on Compute Engine).
- gpu_partition_size str
- Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
- acceleratorCount String
- The number of accelerator cards exposed to an instance.
- acceleratorType String
- The accelerator type resource name (see GPUs on Compute Engine).
- gpuPartitionSize String
- Size of partitions to create on the GPU. Valid values are described in the NVIDIA MIG user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
GkeNodePoolAutoscalingConfig, GkeNodePoolAutoscalingConfigArgs          
- MaxNodeCount int
- The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- MinNodeCount int
- The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
- MaxNodeCount int
- The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- MinNodeCount int
- The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
- maxNodeCount Integer
- The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- minNodeCount Integer
- The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
- maxNodeCount number
- The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- minNodeCount number
- The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
- max_node_count int
- The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- min_node_count int
- The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
- maxNodeCount Number
- The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- minNodeCount Number
- The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
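The two bounds must be mutually consistent, as in this small TypeScript sketch (the numbers are arbitrary examples):
import * as google_native from "@pulumi/google-native";

const autoscaling: google_native.types.input.dataproc.v1.GkeNodePoolAutoscalingConfigArgs = {
    minNodeCount: 0,  // must be >= 0 and <= maxNodeCount
    maxNodeCount: 10, // must be > 0 and >= minNodeCount
};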
GkeNodePoolAutoscalingConfigResponse, GkeNodePoolAutoscalingConfigResponseArgs            
- MaxNodeCount int
- The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- MinNodeCount int
- The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
- MaxNodeCount int
- The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- MinNodeCount int
- The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
- maxNodeCount Integer
- The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- minNodeCount Integer
- The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
- maxNodeCount number
- The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- minNodeCount number
- The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
- max_node_count int
- The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- min_node_count int
- The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
- maxNodeCount Number
- The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
- minNodeCount Number
- The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
GkeNodePoolConfig, GkeNodePoolConfigArgs        
- Autoscaling
Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAutoscalingConfig
- Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- Config
Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodeConfig
- Optional. The node pool configuration.
- Locations List<string>
- Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
- Autoscaling
GkeNodePoolAutoscalingConfig
- Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- Config
GkeNodeConfig
- Optional. The node pool configuration.
- Locations []string
- Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
- autoscaling
GkeNodePoolAutoscalingConfig
- Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- config
GkeNodeConfig
- Optional. The node pool configuration.
- locations List<String>
- Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
- autoscaling
GkeNodePoolAutoscalingConfig
- Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- config
GkeNodeConfig
- Optional. The node pool configuration.
- locations string[]
- Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
- autoscaling
GkeNodePoolAutoscalingConfig
- Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- config
GkeNodeConfig
- Optional. The node pool configuration.
- locations Sequence[str]
- Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
- autoscaling Property Map
- Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- config Property Map
- Optional. The node pool configuration.
- locations List<String>
- Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
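A node pool pinned to a single zone, reusing an autoscaling configuration, might look like the following (a minimal C# sketch; the zone is a placeholder, and autoscaling is assumed to be a GkeNodePoolAutoscalingConfigArgs value such as the one sketched above):

```csharp
// Minimal sketch: a GKE node pool definition for a Dataproc virtual cluster.
// All node pools of one virtual cluster must live in the same zone.
var nodePool = new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolConfigArgs
{
    Autoscaling = autoscaling,             // optional; sketched above
    Locations = new[] { "us-central1-a" }, // placeholder zone
};
```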
GkeNodePoolConfigResponse, GkeNodePoolConfigResponseArgs          
- Autoscaling
Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAutoscalingConfigResponse
- Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- Config
Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodeConfigResponse
- Optional. The node pool configuration.
- Locations List<string>
- Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
- Autoscaling
GkeNodePoolAutoscalingConfigResponse
- Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- Config
GkeNodeConfigResponse
- Optional. The node pool configuration.
- Locations []string
- Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
- autoscaling
GkeNodePoolAutoscalingConfigResponse
- Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- config
GkeNodeConfigResponse
- Optional. The node pool configuration.
- locations List<String>
- Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
- autoscaling
GkeNodePoolAutoscalingConfigResponse
- Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- config
GkeNodeConfigResponse
- Optional. The node pool configuration.
- locations string[]
- Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
- autoscaling
GkeNodePoolAutoscalingConfigResponse
- Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- config
GkeNodeConfigResponse
- Optional. The node pool configuration.
- locations Sequence[str]
- Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
- autoscaling Property Map
- Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
- config Property Map
- Optional. The node pool configuration.
- locations List<String>
- Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
GkeNodePoolTarget, GkeNodePoolTargetArgs        
- NodePool string
- The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- Roles
List<Pulumi.GoogleNative.Dataproc.V1.GkeNodePoolTargetRolesItem>
- The roles associated with the GKE node pool.
- NodePoolConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolConfig
- Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field. It will not be returned by the API.
- NodePool string
- The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- Roles
[]GkeNodePoolTargetRolesItem
- The roles associated with the GKE node pool.
- NodePoolConfig GkeNodePoolConfig
- Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field. It will not be returned by the API.
- nodePool String
- The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- roles
List<GkeNodePoolTargetRolesItem>
- The roles associated with the GKE node pool.
- nodePoolConfig GkeNodePoolConfig
- Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field. It will not be returned by the API.
- nodePool string
- The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- roles
GkeNodePoolTargetRolesItem[]
- The roles associated with the GKE node pool.
- nodePoolConfig GkeNodePoolConfig
- Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field. It will not be returned by the API.
- node_pool str
- The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- roles
Sequence[GkeNodePoolTargetRolesItem]
- The roles associated with the GKE node pool.
- node_pool_config GkeNodePoolConfig
- Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field. It will not be returned by the API.
- nodePool String
- The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- roles List<"ROLE_UNSPECIFIED" | "DEFAULT" | "CONTROLLER" | "SPARK_DRIVER" | "SPARK_EXECUTOR">
- The roles associated with the GKE node pool.
- nodePoolConfig Property Map
- Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field. It will not be returned by the API.
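Putting the pieces together, a target that assigns the DEFAULT role to a pool might be declared as follows (a C# sketch; the project, location, and pool names are placeholders, and nodePool is assumed to be the GkeNodePoolConfigArgs value sketched above):

```csharp
// Minimal sketch: bind a GKE node pool to the virtual cluster with the
// DEFAULT role (at least one target must carry DEFAULT).
var target = new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolTargetArgs
{
    NodePool = "projects/my-project/locations/us-central1/clusters/my-gke-cluster/nodePools/dataproc-pool",
    Roles = new[]
    {
        GoogleNative.Dataproc.V1.GkeNodePoolTargetRolesItem.Default,
    },
    NodePoolConfig = nodePool, // input only; not returned by the API
};
```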
GkeNodePoolTargetResponse, GkeNodePoolTargetResponseArgs          
- NodePool string
- The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- NodePoolConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolConfigResponse
- Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field. It will not be returned by the API.
- Roles List<string>
- The roles associated with the GKE node pool.
- NodePool string
- The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- NodePoolConfig GkeNodePoolConfigResponse
- Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field. It will not be returned by the API.
- Roles []string
- The roles associated with the GKE node pool.
- nodePool String
- The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- nodePoolConfig GkeNodePoolConfigResponse
- Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field. It will not be returned by the API.
- roles List<String>
- The roles associated with the GKE node pool.
- nodePool string
- The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- nodePoolConfig GkeNodePoolConfigResponse
- Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field. It will not be returned by the API.
- roles string[]
- The roles associated with the GKE node pool.
- node_pool str
- The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- node_pool_config GkeNodePoolConfigResponse
- Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field. It will not be returned by the API.
- roles Sequence[str]
- The roles associated with the GKE node pool.
- nodePool String
- The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
- nodePoolConfig Property Map
- Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input-only field. It will not be returned by the API.
- roles List<String>
- The roles associated with the GKE node pool.
GkeNodePoolTargetRolesItem, GkeNodePoolTargetRolesItemArgs            
- RoleUnspecified
- ROLE_UNSPECIFIED: Role is unspecified.
- Default
- DEFAULT: At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.
- Controller
- CONTROLLER: Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.
- SparkDriver
- SPARK_DRIVER: Run work associated with a Spark driver of a job.
- SparkExecutor
- SPARK_EXECUTOR: Run work associated with a Spark executor of a job.
- GkeNodePoolTargetRolesItemRoleUnspecified
- ROLE_UNSPECIFIED: Role is unspecified.
- GkeNodePoolTargetRolesItemDefault
- DEFAULT: At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.
- GkeNodePoolTargetRolesItemController
- CONTROLLER: Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.
- GkeNodePoolTargetRolesItemSparkDriver
- SPARK_DRIVER: Run work associated with a Spark driver of a job.
- GkeNodePoolTargetRolesItemSparkExecutor
- SPARK_EXECUTOR: Run work associated with a Spark executor of a job.
- RoleUnspecified
- ROLE_UNSPECIFIED: Role is unspecified.
- Default
- DEFAULT: At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.
- Controller
- CONTROLLER: Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.
- SparkDriver
- SPARK_DRIVER: Run work associated with a Spark driver of a job.
- SparkExecutor
- SPARK_EXECUTOR: Run work associated with a Spark executor of a job.
- RoleUnspecified
- ROLE_UNSPECIFIED: Role is unspecified.
- Default
- DEFAULT: At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.
- Controller
- CONTROLLER: Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.
- SparkDriver
- SPARK_DRIVER: Run work associated with a Spark driver of a job.
- SparkExecutor
- SPARK_EXECUTOR: Run work associated with a Spark executor of a job.
- ROLE_UNSPECIFIED
- ROLE_UNSPECIFIED: Role is unspecified.
- DEFAULT
- DEFAULT: At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.
- CONTROLLER
- CONTROLLER: Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.
- SPARK_DRIVER
- SPARK_DRIVER: Run work associated with a Spark driver of a job.
- SPARK_EXECUTOR
- SPARK_EXECUTOR: Run work associated with a Spark executor of a job.
- "ROLE_UNSPECIFIED"
- ROLE_UNSPECIFIED: Role is unspecified.
- "DEFAULT"
- DEFAULT: At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.
- "CONTROLLER"
- CONTROLLER: Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.
- "SPARK_DRIVER"
- SPARK_DRIVER: Run work associated with a Spark driver of a job.
- "SPARK_EXECUTOR"
- SPARK_EXECUTOR: Run work associated with a Spark executor of a job.
IdentityConfig, IdentityConfigArgs    
- UserServiceAccountMapping Dictionary<string, string>
- Map of user to service account.
- UserServiceAccountMapping map[string]string
- Map of user to service account.
- userServiceAccountMapping Map<String,String>
- Map of user to service account.
- userServiceAccountMapping {[key: string]: string}
- Map of user to service account.
- user_service_account_mapping Mapping[str, str]
- Map of user to service account.
- userServiceAccountMapping Map<String>
- Map of user to service account.
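As an illustration, mapping two hypothetical users to dedicated service accounts looks like this in C# (all principals below are placeholders):

```csharp
// Minimal sketch: per-user service account mapping for a cluster.
var identity = new GoogleNative.Dataproc.V1.Inputs.IdentityConfigArgs
{
    UserServiceAccountMapping =
    {
        { "alice@example.com", "alice-sa@my-project.iam.gserviceaccount.com" },
        { "bob@example.com", "bob-sa@my-project.iam.gserviceaccount.com" },
    },
};
```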
IdentityConfigResponse, IdentityConfigResponseArgs      
- UserServiceAccountMapping Dictionary<string, string>
- Map of user to service account.
- UserServiceAccountMapping map[string]string
- Map of user to service account.
- userServiceAccountMapping Map<String,String>
- Map of user to service account.
- userServiceAccountMapping {[key: string]: string}
- Map of user to service account.
- user_service_account_mapping Mapping[str, str]
- Map of user to service account.
- userServiceAccountMapping Map<String>
- Map of user to service account.
InstanceFlexibilityPolicy, InstanceFlexibilityPolicyArgs      
- InstanceSelectionList List<Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceSelection>
- Optional. List of instance selection options that the group will use when creating new VMs.
- InstanceSelectionList []InstanceSelection
- Optional. List of instance selection options that the group will use when creating new VMs.
- instanceSelectionList List<InstanceSelection>
- Optional. List of instance selection options that the group will use when creating new VMs.
- instanceSelectionList InstanceSelection[]
- Optional. List of instance selection options that the group will use when creating new VMs.
- instance_selection_list Sequence[InstanceSelection]
- Optional. List of instance selection options that the group will use when creating new VMs.
- instanceSelectionList List<Property Map>
- Optional. List of instance selection options that the group will use when creating new VMs.
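A sketch of a flexibility policy with ranked fallbacks follows; it assumes the InstanceSelection input type (documented elsewhere on this page) exposes MachineTypes and Rank fields, and the machine types are placeholders:

```csharp
// Minimal sketch: prefer n2 shapes, fall back to e2 when unavailable.
var flexibility = new GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicyArgs
{
    InstanceSelectionList = new[]
    {
        new GoogleNative.Dataproc.V1.Inputs.InstanceSelectionArgs
        {
            MachineTypes = new[] { "n2-standard-8", "n2d-standard-8" }, // assumed field
            Rank = 0, // lower rank = higher preference (assumed semantics)
        },
        new GoogleNative.Dataproc.V1.Inputs.InstanceSelectionArgs
        {
            MachineTypes = new[] { "e2-standard-8" },
            Rank = 1,
        },
    },
};
```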
InstanceFlexibilityPolicyResponse, InstanceFlexibilityPolicyResponseArgs        
- InstanceSelectionList List<Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceSelectionResponse>
- Optional. List of instance selection options that the group will use when creating new VMs.
- InstanceSelectionResults List<Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceSelectionResultResponse>
- A list of instance selection results in the group.
- InstanceSelectionList []InstanceSelectionResponse
- Optional. List of instance selection options that the group will use when creating new VMs.
- InstanceSelectionResults []InstanceSelectionResultResponse
- A list of instance selection results in the group.
- instanceSelectionList List<InstanceSelectionResponse>
- Optional. List of instance selection options that the group will use when creating new VMs.
- instanceSelectionResults List<InstanceSelectionResultResponse>
- A list of instance selection results in the group.
- instanceSelectionList InstanceSelectionResponse[]
- Optional. List of instance selection options that the group will use when creating new VMs.
- instanceSelectionResults InstanceSelectionResultResponse[]
- A list of instance selection results in the group.
- instance_selection_list Sequence[InstanceSelectionResponse]
- Optional. List of instance selection options that the group will use when creating new VMs.
- instance_selection_results Sequence[InstanceSelectionResultResponse]
- A list of instance selection results in the group.
- instanceSelectionList List<Property Map>
- Optional. List of instance selection options that the group will use when creating new VMs.
- instanceSelectionResults List<Property Map>
- A list of instance selection results in the group.
InstanceGroupConfig, InstanceGroupConfigArgs      
- Accelerators
List<Pulumi.GoogleNative.Dataproc.V1.Inputs.AcceleratorConfig>
- Optional. The Compute Engine accelerator configuration for these instances.
- DiskConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.DiskConfig
- Optional. Disk option config settings.
- ImageUri string
- Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- InstanceFlexibilityPolicy Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicy
- Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
- MachineTypeUri string
- Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- MinCpuPlatform string
- Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- MinNumInstances int
- Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
- NumInstances int
- Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- Preemptibility
Pulumi.GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility
- Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- StartupConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.StartupConfig
- Optional. Configuration to handle the startup of instances during cluster create and update process.
- Accelerators
[]AcceleratorConfig
- Optional. The Compute Engine accelerator configuration for these instances.
- DiskConfig DiskConfig
- Optional. Disk option config settings.
- ImageUri string
- Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- InstanceFlexibilityPolicy InstanceFlexibilityPolicy
- Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
- MachineTypeUri string
- Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- MinCpuPlatform string
- Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- MinNumInstances int
- Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
- NumInstances int
- Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- Preemptibility
InstanceGroupConfigPreemptibility
- Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- StartupConfig StartupConfig
- Optional. Configuration to handle the startup of instances during cluster create and update process.
- accelerators
List<AcceleratorConfig>
- Optional. The Compute Engine accelerator configuration for these instances.
- diskConfig DiskConfig
- Optional. Disk option config settings.
- imageUri String
- Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instanceFlexibilityPolicy InstanceFlexibilityPolicy
- Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
- machineTypeUri String
- Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- minCpuPlatform String
- Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- minNumInstances Integer
- Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
- numInstances Integer
- Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility
InstanceGroupConfigPreemptibility
- Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- startupConfig StartupConfig
- Optional. Configuration to handle the startup of instances during cluster create and update process.
- accelerators
AcceleratorConfig[]
- Optional. The Compute Engine accelerator configuration for these instances.
- diskConfig DiskConfig
- Optional. Disk option config settings.
- imageUri string
- Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instanceFlexibilityPolicy InstanceFlexibilityPolicy
- Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
- machineTypeUri string
- Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- minCpuPlatform string
- Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- minNumInstances number
- Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
- numInstances number
- Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility
InstanceGroupConfigPreemptibility
- Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- startupConfig StartupConfig
- Optional. Configuration to handle the startup of instances during cluster create and update process.
- accelerators
Sequence[AcceleratorConfig]
- Optional. The Compute Engine accelerator configuration for these instances.
- disk_config DiskConfig
- Optional. Disk option config settings.
- image_uri str
- Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instance_flexibility_policy InstanceFlexibilityPolicy
- Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
- machine_type_uri str
- Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- min_cpu_platform str
- Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- min_num_instances int
- Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
- num_instances int
- Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility
InstanceGroupConfigPreemptibility
- Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- startup_config StartupConfig
- Optional. Configuration to handle the startup of instances during cluster create and update process.
- accelerators List<Property Map>
- Optional. The Compute Engine accelerator configuration for these instances.
- diskConfig Property Map
- Optional. Disk option config settings.
- imageUri String
- Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instanceFlexibilityPolicy Property Map
- Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
- machineTypeUri String
- Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- minCpuPlatform String
- Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- minNumInstances Number
- Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
- numInstances Number
- Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility "PREEMPTIBILITY_UNSPECIFIED" | "NON_PREEMPTIBLE" | "PREEMPTIBLE" | "SPOT"
- Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- startupConfig Property Map
- Optional. Configuration to handle the startup of instances during cluster create and update process.
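For instance, a two-node primary worker group could be declared as below (a C# sketch; the DiskConfigArgs fields follow the DiskConfig input type documented elsewhere on this page, and all sizes and type names are placeholders):

```csharp
// Minimal sketch: two n1-standard-4 workers with 100 GB standard boot disks.
var workerConfig = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
{
    NumInstances = 2,
    MachineTypeUri = "n1-standard-4", // short name; required with Auto Zone Placement
    DiskConfig = new GoogleNative.Dataproc.V1.Inputs.DiskConfigArgs
    {
        BootDiskSizeGb = 100,         // assumed field name
        BootDiskType = "pd-standard", // assumed field name
    },
};
```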
InstanceGroupConfigPreemptibility, InstanceGroupConfigPreemptibilityArgs        
- PreemptibilityUnspecified
- PREEMPTIBILITY_UNSPECIFIED: Preemptibility is unspecified, the system will choose the appropriate setting for each instance group.
- NonPreemptible
- NON_PREEMPTIBLE: Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
- Preemptible
- PREEMPTIBLE: Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.
- Spot
- SPOT: Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible), and provide additional features.
- InstanceGroupConfigPreemptibilityPreemptibilityUnspecified
- PREEMPTIBILITY_UNSPECIFIED: Preemptibility is unspecified, the system will choose the appropriate setting for each instance group.
- InstanceGroupConfigPreemptibilityNonPreemptible
- NON_PREEMPTIBLE: Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
- InstanceGroupConfigPreemptibilityPreemptible
- PREEMPTIBLE: Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.
- InstanceGroupConfigPreemptibilitySpot
- SPOT: Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible), and provide additional features.
- PreemptibilityUnspecified
- PREEMPTIBILITY_UNSPECIFIED: Preemptibility is unspecified, the system will choose the appropriate setting for each instance group.
- NonPreemptible
- NON_PREEMPTIBLE: Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
- Preemptible
- PREEMPTIBLE: Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.
- Spot
- SPOT: Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible), and provide additional features.
- PreemptibilityUnspecified
- PREEMPTIBILITY_UNSPECIFIED: Preemptibility is unspecified, the system will choose the appropriate setting for each instance group.
- NonPreemptible
- NON_PREEMPTIBLE: Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
- Preemptible
- PREEMPTIBLE: Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.
- Spot
- SPOT: Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible), and provide additional features.
- PREEMPTIBILITY_UNSPECIFIED
- PREEMPTIBILITY_UNSPECIFIED: Preemptibility is unspecified, the system will choose the appropriate setting for each instance group.
- NON_PREEMPTIBLE
- NON_PREEMPTIBLE: Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
- PREEMPTIBLE
- PREEMPTIBLE: Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.
- SPOT
- SPOT: Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible), and provide additional features.
- "PREEMPTIBILITY_UNSPECIFIED"
- PREEMPTIBILITY_UNSPECIFIED: Preemptibility is unspecified, the system will choose the appropriate setting for each instance group.
- "NON_PREEMPTIBLE"
- NON_PREEMPTIBLE: Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
- "PREEMPTIBLE"
- PREEMPTIBLE: Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.
- "SPOT"
- SPOT: Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible), and provide additional features.
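Because SPOT and PREEMPTIBLE are valid only for secondary workers, a typical use is on the secondary worker group (a C# sketch; the instance count is a placeholder):

```csharp
// Minimal sketch: run four secondary workers on Spot VMs.
var secondaryWorkers = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
{
    NumInstances = 4,
    Preemptibility = GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility.Spot,
};
```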
InstanceGroupConfigResponse, InstanceGroupConfigResponseArgs        
- Accelerators
List<Pulumi.GoogleNative.Dataproc.V1.Inputs.AcceleratorConfigResponse>
- Optional. The Compute Engine accelerator configuration for these instances.
- DiskConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.DiskConfigResponse
- Optional. Disk option config settings.
- ImageUri string
- Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- InstanceFlexibilityPolicy Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicyResponse
- Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
- InstanceNames List<string>
- The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- InstanceReferences List<Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceReferenceResponse>
- List of references to Compute Engine instances.
- IsPreemptible bool
- Specifies that this instance group contains preemptible instances.
- MachineTypeUri string
- Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- ManagedGroupConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ManagedGroupConfigResponse
- The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- MinCpuPlatform string
- Optional. Specifies the minimum CPU platform for the instance group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- MinNumInstances int
- Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
- NumInstances int
- Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- Preemptibility string
- Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- StartupConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.StartupConfigResponse
- Optional. Configuration to handle the startup of instances during cluster create and update process.
- Accelerators
[]AcceleratorConfig Response 
- Optional. The Compute Engine accelerator configuration for these instances.
- DiskConfig DiskConfig Response 
- Optional. Disk option config settings.
- ImageUri string
- Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-idImage family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- InstanceFlexibility InstancePolicy Flexibility Policy Response 
- Optional. Instance flexibility Policy allowing a mixture of VM shapes and provisioning models.
- InstanceNames []string
- The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- InstanceReferences []InstanceReference Response 
- List of references to Compute Engine instances.
- IsPreemptible bool
- Specifies that this instance group contains preemptible instances.
- MachineType stringUri 
- Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2 projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2 n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- ManagedGroup ManagedConfig Group Config Response 
- The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- MinCpuPlatform string
- Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- MinNumInstances int
- Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
- NumInstances int
- Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- Preemptibility string
- Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- StartupConfig StartupConfigResponse
- Optional. Configuration to handle the startup of instances during cluster create and update process.
- accelerators List<AcceleratorConfigResponse>
- Optional. The Compute Engine accelerator configuration for these instances.
- diskConfig DiskConfigResponse
- Optional. Disk option config settings.
- imageUri String
- Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instanceFlexibilityPolicy InstanceFlexibilityPolicyResponse
- Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
- instanceNames List<String>
- The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- instanceReferences List<InstanceReferenceResponse>
- List of references to Compute Engine instances.
- isPreemptible Boolean
- Specifies that this instance group contains preemptible instances.
- machineTypeUri String
- Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2 projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- managedGroupConfig ManagedGroupConfigResponse
- The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- minCpuPlatform String
- Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- minNumInstances Integer
- Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
- numInstances Integer
- Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility String
- Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- startupConfig StartupConfigResponse
- Optional. Configuration to handle the startup of instances during cluster create and update process.
- accelerators AcceleratorConfigResponse[]
- Optional. The Compute Engine accelerator configuration for these instances.
- diskConfig DiskConfigResponse
- Optional. Disk option config settings.
- imageUri string
- Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instanceFlexibilityPolicy InstanceFlexibilityPolicyResponse
- Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
- instanceNames string[]
- The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- instanceReferences InstanceReferenceResponse[]
- List of references to Compute Engine instances.
- isPreemptible boolean
- Specifies that this instance group contains preemptible instances.
- machineTypeUri string
- Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2 projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- managedGroupConfig ManagedGroupConfigResponse
- The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- minCpuPlatform string
- Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- minNumInstances number
- Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
- numInstances number
- Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility string
- Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- startupConfig StartupConfigResponse
- Optional. Configuration to handle the startup of instances during cluster create and update process.
- accelerators Sequence[AcceleratorConfigResponse]
- Optional. The Compute Engine accelerator configuration for these instances.
- disk_config DiskConfigResponse
- Optional. Disk option config settings.
- image_uri str
- Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instance_flexibility_policy InstanceFlexibilityPolicyResponse
- Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
- instance_names Sequence[str]
- The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- instance_references Sequence[InstanceReferenceResponse]
- List of references to Compute Engine instances.
- is_preemptible bool
- Specifies that this instance group contains preemptible instances.
- machine_type_uri str
- Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2 projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- managed_group_config ManagedGroupConfigResponse
- The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- min_cpu_platform str
- Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- min_num_instances int
- Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
- num_instances int
- Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility str
- Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- startup_config StartupConfigResponse
- Optional. Configuration to handle the startup of instances during cluster create and update process.
- accelerators List<Property Map>
- Optional. The Compute Engine accelerator configuration for these instances.
- diskConfig Property Map
- Optional. Disk option config settings.
- imageUri String
- Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instanceFlexibilityPolicy Property Map
- Optional. Instance flexibility policy allowing a mixture of VM shapes and provisioning models.
- instanceNames List<String>
- The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- instanceReferences List<Property Map>
- List of references to Compute Engine instances.
- isPreemptible Boolean
- Specifies that this instance group contains preemptible instances.
- machineTypeUri String
- Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2 projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- managedGroupConfig Property Map
- The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- minCpuPlatform String
- Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- minNumInstances Number
- Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
- numInstances Number
- Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility String
- Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- startupConfig Property Map
- Optional. Configuration to handle the startup of instances during cluster create and update process.
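The properties above are the read-only Response view of an instance group; the input InstanceGroupConfig accepts the same settable fields. As a minimal sketch (TypeScript, with placeholder names, region, and machine type), a primary worker group that tolerates partial creation together with a preemptible secondary group might look like this:

import * as google_native from "@pulumi/google-native";

// Sketch only: cluster name, region, and sizes are placeholder assumptions.
const cluster = new google_native.dataproc.v1.Cluster("example-cluster", {
    clusterName: "example-cluster",
    region: "us-central1",
    config: {
        workerConfig: {
            numInstances: 5,
            // Creation succeeds if at least 3 of the 5 primary workers come up.
            minNumInstances: 3,
            machineTypeUri: "n1-standard-2",
        },
        secondaryWorkerConfig: {
            numInstances: 2,
            // Secondary instances default to PREEMPTIBLE; stated explicitly here.
            preemptibility: "PREEMPTIBLE",
        },
    },
});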
InstanceReferenceResponse, InstanceReferenceResponseArgs      
- InstanceId string
- The unique identifier of the Compute Engine instance.
- InstanceName string
- The user-friendly name of the Compute Engine instance.
- PublicEciesKey string
- The public ECIES key used for sharing data with this instance.
- PublicKey string
- The public RSA key used for sharing data with this instance.
- InstanceId string
- The unique identifier of the Compute Engine instance.
- InstanceName string
- The user-friendly name of the Compute Engine instance.
- PublicEciesKey string
- The public ECIES key used for sharing data with this instance.
- PublicKey string
- The public RSA key used for sharing data with this instance.
- instanceId String
- The unique identifier of the Compute Engine instance.
- instanceName String
- The user-friendly name of the Compute Engine instance.
- publicEciesKey String
- The public ECIES key used for sharing data with this instance.
- publicKey String
- The public RSA key used for sharing data with this instance.
- instanceId string
- The unique identifier of the Compute Engine instance.
- instanceName string
- The user-friendly name of the Compute Engine instance.
- publicEciesKey string
- The public ECIES key used for sharing data with this instance.
- publicKey string
- The public RSA key used for sharing data with this instance.
- instance_id str
- The unique identifier of the Compute Engine instance.
- instance_name str
- The user-friendly name of the Compute Engine instance.
- public_ecies_key str
- The public ECIES key used for sharing data with this instance.
- public_key str
- The public RSA key used for sharing data with this instance.
- instanceId String
- The unique identifier of the Compute Engine instance.
- instanceName String
- The user-friendly name of the Compute Engine instance.
- publicEciesKey String
- The public ECIES key used for sharing data with this instance.
- publicKey String
- The public RSA key used for sharing data with this instance.
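Because InstanceReferenceResponse fields are output-only, they are read back from the provisioned resource rather than set. A small sketch, assuming the cluster variable from the example above:

// Read derived instance names and per-instance public keys from the outputs.
export const masterInstanceNames = cluster.config.apply(c => c.masterConfig.instanceNames);
export const workerPublicKeys = cluster.config.apply(
    c => c.workerConfig.instanceReferences.map(r => r.publicKey));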
InstanceSelection, InstanceSelectionArgs    
- MachineTypes List<string>
- Optional. Full machine-type names, e.g. "n1-standard-16".
- Rank int
- Optional. Preference of this instance selection. Lower number means higher preference. Dataproc will first try to create a VM based on the machine-type with priority rank and fall back to the next rank based on availability. Machine types and instance selections with the same priority have the same preference.
- MachineTypes []string
- Optional. Full machine-type names, e.g. "n1-standard-16".
- Rank int
- Optional. Preference of this instance selection. Lower number means higher preference. Dataproc will first try to create a VM based on the machine-type with priority rank and fall back to the next rank based on availability. Machine types and instance selections with the same priority have the same preference.
- machineTypes List<String>
- Optional. Full machine-type names, e.g. "n1-standard-16".
- rank Integer
- Optional. Preference of this instance selection. Lower number means higher preference. Dataproc will first try to create a VM based on the machine-type with priority rank and fall back to the next rank based on availability. Machine types and instance selections with the same priority have the same preference.
- machineTypes string[]
- Optional. Full machine-type names, e.g. "n1-standard-16".
- rank number
- Optional. Preference of this instance selection. Lower number means higher preference. Dataproc will first try to create a VM based on the machine-type with priority rank and fall back to the next rank based on availability. Machine types and instance selections with the same priority have the same preference.
- machine_types Sequence[str]
- Optional. Full machine-type names, e.g. "n1-standard-16".
- rank int
- Optional. Preference of this instance selection. Lower number means higher preference. Dataproc will first try to create a VM based on the machine-type with priority rank and fall back to the next rank based on availability. Machine types and instance selections with the same priority have the same preference.
- machineTypes List<String>
- Optional. Full machine-type names, e.g. "n1-standard-16".
- rank Number
- Optional. Preference of this instance selection. Lower number means higher preference. Dataproc will first try to create a VM based on the machine-type with priority rank and fall back to the next rank based on availability. Machine types and instance selections with the same priority have the same preference.
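To illustrate how ranked selections compose, here is a hedged sketch of an instanceFlexibilityPolicy on a secondary worker group; the instanceSelectionList field name follows the v1 API, and the machine types are placeholders:

const flexCluster = new google_native.dataproc.v1.Cluster("flex-cluster", {
    clusterName: "flex-cluster",
    region: "us-central1",
    config: {
        secondaryWorkerConfig: {
            numInstances: 10,
            instanceFlexibilityPolicy: {
                instanceSelectionList: [
                    // Rank 0 is tried first; Dataproc falls back to rank 1
                    // when these shapes are unavailable.
                    { machineTypes: ["n2-standard-16", "n1-standard-16"], rank: 0 },
                    { machineTypes: ["e2-standard-16"], rank: 1 },
                ],
            },
        },
    },
});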
InstanceSelectionResponse, InstanceSelectionResponseArgs      
- MachineTypes List<string>
- Optional. Full machine-type names, e.g. "n1-standard-16".
- Rank int
- Optional. Preference of this instance selection. Lower number means higher preference. Dataproc will first try to create a VM based on the machine-type with priority rank and fall back to the next rank based on availability. Machine types and instance selections with the same priority have the same preference.
- MachineTypes []string
- Optional. Full machine-type names, e.g. "n1-standard-16".
- Rank int
- Optional. Preference of this instance selection. Lower number means higher preference. Dataproc will first try to create a VM based on the machine-type with priority rank and fall back to the next rank based on availability. Machine types and instance selections with the same priority have the same preference.
- machineTypes List<String>
- Optional. Full machine-type names, e.g. "n1-standard-16".
- rank Integer
- Optional. Preference of this instance selection. Lower number means higher preference. Dataproc will first try to create a VM based on the machine-type with priority rank and fall back to the next rank based on availability. Machine types and instance selections with the same priority have the same preference.
- machineTypes string[]
- Optional. Full machine-type names, e.g. "n1-standard-16".
- rank number
- Optional. Preference of this instance selection. Lower number means higher preference. Dataproc will first try to create a VM based on the machine-type with priority rank and fall back to the next rank based on availability. Machine types and instance selections with the same priority have the same preference.
- machine_types Sequence[str]
- Optional. Full machine-type names, e.g. "n1-standard-16".
- rank int
- Optional. Preference of this instance selection. Lower number means higher preference. Dataproc will first try to create a VM based on the machine-type with priority rank and fall back to the next rank based on availability. Machine types and instance selections with the same priority have the same preference.
- machineTypes List<String>
- Optional. Full machine-type names, e.g. "n1-standard-16".
- rank Number
- Optional. Preference of this instance selection. Lower number means higher preference. Dataproc will first try to create a VM based on the machine-type with priority rank and fall back to the next rank based on availability. Machine types and instance selections with the same priority have the same preference.
InstanceSelectionResultResponse, InstanceSelectionResultResponseArgs        
- MachineType string
- Full machine-type names, e.g. "n1-standard-16".
- VmCount int
- Number of VMs provisioned with the machine_type.
- MachineType string
- Full machine-type names, e.g. "n1-standard-16".
- VmCount int
- Number of VMs provisioned with the machine_type.
- machineType String
- Full machine-type names, e.g. "n1-standard-16".
- vmCount Integer
- Number of VMs provisioned with the machine_type.
- machineType string
- Full machine-type names, e.g. "n1-standard-16".
- vmCount number
- Number of VMs provisioned with the machine_type.
- machine_type str
- Full machine-type names, e.g. "n1-standard-16".
- vm_count int
- Number of VMs provisioned with the machine_type.
- machineType String
- Full machine-type names, e.g. "n1-standard-16".
- vmCount Number
- Number of VMs provisioned with the machine_type.
KerberosConfig, KerberosConfigArgs    
- CrossRealmTrustAdminServer string
- Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- CrossRealmTrustKdc string
- Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- CrossRealmTrustRealm string
- Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- CrossRealmTrustSharedPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- EnableKerberos bool
- Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- KdcDbKeyUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- KeyPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- KeystorePasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- KeystoreUri string
- Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- KmsKeyUri string
- Optional. The uri of the KMS key used to encrypt various sensitive files.
- Realm string
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- RootPrincipalPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- TgtLifetimeHours int
- Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- TruststorePasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- TruststoreUri string
- Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- CrossRealmTrustAdminServer string
- Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- CrossRealmTrustKdc string
- Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- CrossRealmTrustRealm string
- Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- CrossRealmTrustSharedPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- EnableKerberos bool
- Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- KdcDbKeyUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- KeyPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- KeystorePasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- KeystoreUri string
- Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- KmsKeyUri string
- Optional. The uri of the KMS key used to encrypt various sensitive files.
- Realm string
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- RootPrincipalPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- TgtLifetimeHours int
- Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- TruststorePasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- TruststoreUri string
- Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- crossRealmTrustAdminServer String
- Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- crossRealmTrustKdc String
- Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- crossRealmTrustRealm String
- Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- crossRealmTrustSharedPasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enableKerberos Boolean
- Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdcDbKeyUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- keyPasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystorePasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystoreUri String
- Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kmsKeyUri String
- Optional. The uri of the KMS key used to encrypt various sensitive files.
- realm String
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- rootPrincipalPasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgtLifetimeHours Integer
- Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- truststorePasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststoreUri String
- Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- crossRealmTrustAdminServer string
- Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- crossRealmTrustKdc string
- Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- crossRealmTrustRealm string
- Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- crossRealmTrustSharedPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enableKerberos boolean
- Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdcDbKeyUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- keyPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystorePasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystoreUri string
- Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kmsKeyUri string
- Optional. The uri of the KMS key used to encrypt various sensitive files.
- realm string
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- rootPrincipalPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgtLifetimeHours number
- Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- truststorePasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststoreUri string
- Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- cross_realm_trust_admin_server str
- Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross_realm_trust_kdc str
- Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross_realm_trust_realm str
- Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- cross_realm_trust_shared_password_uri str
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enable_kerberos bool
- Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdc_db_key_uri str
- Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- key_password_uri str
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystore_password_uri str
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystore_uri str
- Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kms_key_uri str
- Optional. The uri of the KMS key used to encrypt various sensitive files.
- realm str
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- root_principal_password_uri str
- Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgt_lifetime_hours int
- Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- truststore_password_uri str
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststore_uri str
- Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- crossRealmTrustAdminServer String
- Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- crossRealmTrustKdc String
- Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- crossRealmTrustRealm String
- Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- crossRealmTrustSharedPasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enableKerberos Boolean
- Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdcDbKeyUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- keyPasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystorePasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystoreUri String
- Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kmsKeyUri String
- Optional. The uri of the KMS key used to encrypt various sensitive files.
- realm String
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- rootPrincipalPasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgtLifetimeHours Number
- Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- truststorePasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststoreUri String
- Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
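As a minimal sketch of wiring KerberosConfig into a cluster, the config sits under securityConfig; the KMS key and Cloud Storage URIs below are placeholders you would replace:

const secureCluster = new google_native.dataproc.v1.Cluster("secure-cluster", {
    clusterName: "secure-cluster",
    region: "us-central1",
    config: {
        securityConfig: {
            kerberosConfig: {
                enableKerberos: true,
                // Placeholder KMS key and KMS-encrypted password object.
                kmsKeyUri: "projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key",
                rootPrincipalPasswordUri: "gs://my-bucket/root-principal-password.encrypted",
                tgtLifetimeHours: 10,
            },
        },
    },
});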
KerberosConfigResponse, KerberosConfigResponseArgs      
- CrossRealmTrustAdminServer string
- Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- CrossRealmTrustKdc string
- Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- CrossRealmTrustRealm string
- Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- CrossRealmTrustSharedPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- EnableKerberos bool
- Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- KdcDbKeyUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- KeyPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- KeystorePasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- KeystoreUri string
- Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- KmsKeyUri string
- Optional. The uri of the KMS key used to encrypt various sensitive files.
- Realm string
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- RootPrincipalPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- TgtLifetimeHours int
- Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- TruststorePasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- TruststoreUri string
- Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- CrossRealmTrustAdminServer string
- Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- CrossRealmTrustKdc string
- Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- CrossRealmTrustRealm string
- Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- CrossRealmTrustSharedPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- EnableKerberos bool
- Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- KdcDbKeyUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- KeyPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- KeystorePasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- KeystoreUri string
- Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- KmsKeyUri string
- Optional. The uri of the KMS key used to encrypt various sensitive files.
- Realm string
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- RootPrincipalPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- TgtLifetimeHours int
- Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- TruststorePasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- TruststoreUri string
- Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- crossRealmTrustAdminServer String
- Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- crossRealmTrustKdc String
- Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- crossRealmTrustRealm String
- Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- crossRealmTrustSharedPasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enableKerberos Boolean
- Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdcDbKeyUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- keyPasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystorePasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystoreUri String
- Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kmsKeyUri String
- Optional. The uri of the KMS key used to encrypt various sensitive files.
- realm String
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- rootPrincipalPasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgtLifetimeHours Integer
- Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- truststorePasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststoreUri String
- Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- crossRealmTrustAdminServer string
- Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- crossRealmTrustKdc string
- Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- crossRealmTrustRealm string
- Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- crossRealmTrustSharedPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enableKerberos boolean
- Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdcDbKeyUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- keyPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystorePasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystoreUri string
- Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kmsKeyUri string
- Optional. The uri of the KMS key used to encrypt various sensitive files.
- realm string
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- rootPrincipalPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgtLifetimeHours number
- Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- truststorePasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststoreUri string
- Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- cross_realm_trust_admin_server str
- Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross_realm_trust_kdc str
- Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross_realm_trust_realm str
- Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- cross_realm_trust_shared_password_uri str
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enable_kerberos bool
- Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdc_db_key_uri str
- Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- key_password_uri str
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystore_password_uri str
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystore_uri str
- Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kms_key_uri str
- Optional. The uri of the KMS key used to encrypt various sensitive files.
- realm str
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- root_principal_password_uri str
- Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgt_lifetime_hours int
- Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- truststore_password_uri str
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststore_uri str
- Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- crossRealmTrustAdminServer String
- Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- crossRealmTrustKdc String
- Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- crossRealmTrustRealm String
- Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- crossRealmTrustSharedPasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enableKerberos Boolean
- Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdcDbKeyUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- keyPasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystorePasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystoreUri String
- Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kmsKeyUri String
- Optional. The uri of the KMS key used to encrypt various sensitive files.
- realm String
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- rootPrincipalPasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgtLifetimeHours Number
- Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- truststorePasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststoreUri String
- Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
KubernetesClusterConfig, KubernetesClusterConfigArgs      
- GkeClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeClusterConfig
- The configuration for running the Dataproc cluster on GKE.
- KubernetesNamespace string
- Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
- KubernetesSoftwareConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.KubernetesSoftwareConfig
- Optional. The software configuration for this Dataproc cluster running on Kubernetes.
- GkeClusterConfig GkeClusterConfig
- The configuration for running the Dataproc cluster on GKE.
- KubernetesNamespace string
- Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
- KubernetesSoftwareConfig KubernetesSoftwareConfig
- Optional. The software configuration for this Dataproc cluster running on Kubernetes.
- gkeClusterConfig GkeClusterConfig
- The configuration for running the Dataproc cluster on GKE.
- kubernetesNamespace String
- Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
- kubernetesSoftwareConfig KubernetesSoftwareConfig
- Optional. The software configuration for this Dataproc cluster running on Kubernetes.
- gkeClusterConfig GkeClusterConfig
- The configuration for running the Dataproc cluster on GKE.
- kubernetesNamespace string
- Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
- kubernetesSoftwareConfig KubernetesSoftwareConfig
- Optional. The software configuration for this Dataproc cluster running on Kubernetes.
- gke_cluster_config GkeClusterConfig
- The configuration for running the Dataproc cluster on GKE.
- kubernetes_namespace str
- Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
- kubernetes_software_config KubernetesSoftwareConfig
- Optional. The software configuration for this Dataproc cluster running on Kubernetes.
- gkeClusterConfig Property Map
- The configuration for running the Dataproc cluster on GKE.
- kubernetesNamespace String
- Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
- kubernetesSoftwareConfig Property Map
- Optional. The software configuration for this Dataproc cluster running on Kubernetes.
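A sketch of KubernetesClusterConfig in context: it is supplied through the cluster's virtualClusterConfig rather than config. The GKE cluster path, node pool, staging bucket, and component version below are placeholder assumptions:

const gkeBackedCluster = new google_native.dataproc.v1.Cluster("gke-cluster", {
    clusterName: "gke-cluster",
    region: "us-central1",
    virtualClusterConfig: {
        stagingBucket: "my-staging-bucket",
        kubernetesClusterConfig: {
            kubernetesNamespace: "dataproc-ns",
            gkeClusterConfig: {
                gkeClusterTarget: "projects/my-project/locations/us-central1/clusters/my-gke-cluster",
                nodePoolTarget: [{
                    nodePool: "projects/my-project/locations/us-central1/clusters/my-gke-cluster/nodePools/dataproc-pool",
                    roles: ["DEFAULT"],
                }],
            },
            kubernetesSoftwareConfig: {
                // Placeholder Spark component version.
                componentVersion: { SPARK: "3.1-dataproc-7" },
            },
        },
    },
});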
KubernetesClusterConfigResponse, KubernetesClusterConfigResponseArgs        
- GkeClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeClusterConfigResponse
- The configuration for running the Dataproc cluster on GKE.
- KubernetesNamespace string
- Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
- KubernetesSoftwareConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.KubernetesSoftwareConfigResponse
- Optional. The software configuration for this Dataproc cluster running on Kubernetes.
- GkeCluster GkeConfig Cluster Config Response 
- The configuration for running the Dataproc cluster on GKE.
- KubernetesNamespace string
- Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
- KubernetesSoftware KubernetesConfig Software Config Response 
- Optional. The software configuration for this Dataproc cluster running on Kubernetes.
- gkeCluster GkeConfig Cluster Config Response 
- The configuration for running the Dataproc cluster on GKE.
- kubernetesNamespace String
- Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
- kubernetesSoftware KubernetesConfig Software Config Response 
- Optional. The software configuration for this Dataproc cluster running on Kubernetes.
- gkeCluster GkeConfig Cluster Config Response 
- The configuration for running the Dataproc cluster on GKE.
- kubernetesNamespace string
- Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
- kubernetesSoftware KubernetesConfig Software Config Response 
- Optional. The software configuration for this Dataproc cluster running on Kubernetes.
- gke_cluster_ Gkeconfig Cluster Config Response 
- The configuration for running the Dataproc cluster on GKE.
- kubernetes_namespace str
- Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
- kubernetes_software_ Kubernetesconfig Software Config Response 
- Optional. The software configuration for this Dataproc cluster running on Kubernetes.
- gkeCluster Property MapConfig 
- The configuration for running the Dataproc cluster on GKE.
- kubernetesNamespace String
- Optional. A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.
- kubernetesSoftware Property MapConfig 
- Optional. The software configuration for this Dataproc cluster running on Kubernetes.
KubernetesSoftwareConfig, KubernetesSoftwareConfigArgs      
- ComponentVersion Dictionary<string, string>
- The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
- Properties Dictionary<string, string>
- The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- ComponentVersion map[string]string
- The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
- Properties map[string]string
- The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- componentVersion Map<String,String>
- The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
- properties Map<String,String>
- The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- componentVersion {[key: string]: string}
- The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
- properties {[key: string]: string}
- The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- component_version Mapping[str, str]
- The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
- properties Mapping[str, str]
- The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- componentVersion Map<String>
- The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
- properties Map<String>
- The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
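As a small illustration of the two maps, the following TypeScript fragment uses an assumed Spark version and container image; both values are placeholders.
// Keys in componentVersion come from the KubernetesComponent enumeration;
// keys in properties use the prefix:property form described above.
const kubernetesSoftwareConfig = {
    componentVersion: { SPARK: "3.1-dataproc-7" },
    properties: {
        // The spark: prefix maps to spark-defaults.conf.
        "spark:spark.kubernetes.container.image": "gcr.io/my-project/my-spark-image:latest",
    },
};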
KubernetesSoftwareConfigResponse, KubernetesSoftwareConfigResponseArgs        
- ComponentVersion Dictionary<string, string>
- The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
- Properties Dictionary<string, string>
- The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- ComponentVersion map[string]string
- The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
- Properties map[string]string
- The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- componentVersion Map<String,String>
- The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
- properties Map<String,String>
- The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- componentVersion {[key: string]: string}
- The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
- properties {[key: string]: string}
- The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- component_version Mapping[str, str]
- The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
- properties Mapping[str, str]
- The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- componentVersion Map<String>
- The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed. At least one entry must be specified.
- properties Map<String>
- The properties to set on daemon config files. Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image. The following are supported prefixes and their mappings: spark: spark-defaults.conf. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
LifecycleConfig, LifecycleConfigArgs    
- AutoDeleteTime string
- Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- AutoDeleteTtl string
- Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- IdleDeleteTtl string
- Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- AutoDeleteTime string
- Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- AutoDeleteTtl string
- Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- IdleDeleteTtl string
- Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTime String
- Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTtl String
- Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idleDeleteTtl String
- Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTime string
- Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTtl string
- Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idleDeleteTtl string
- Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- auto_delete_time str
- Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- auto_delete_ttl str
- Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idle_delete_ttl str
- Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTime String
- Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTtl String
- Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idleDeleteTtl String
- Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
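To make the Duration format concrete, this TypeScript sketch (cluster name and region are placeholders) deletes the cluster after 30 idle minutes or 24 hours after creation, whichever comes first; proto3 JSON Durations are second counts suffixed with "s".
import * as google_native from "@pulumi/google-native";

const ephemeral = new google_native.dataproc.v1.Cluster("ephemeral-cluster", {
    clusterName: "ephemeral",
    region: "us-central1",
    config: {
        lifecycleConfig: {
            // Deleted after 30 minutes with no running jobs (minimum 5 minutes, maximum 14 days).
            idleDeleteTtl: "1800s",
            // Deleted 24 hours after creation regardless of activity (minimum 10 minutes, maximum 14 days).
            autoDeleteTtl: "86400s",
        },
    },
});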
LifecycleConfigResponse, LifecycleConfigResponseArgs      
- AutoDeleteTime string
- Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- AutoDeleteTtl string
- Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- IdleDeleteTtl string
- Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- IdleStartTime string
- The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- AutoDeleteTime string
- Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- AutoDeleteTtl string
- Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- IdleDeleteTtl string
- Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- IdleStartTime string
- The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTime String
- Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTtl String
- Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idleDeleteTtl String
- Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idleStartTime String
- The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTime string
- Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTtl string
- Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idleDeleteTtl string
- Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idleStartTime string
- The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- auto_delete_time str
- Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- auto_delete_ttl str
- Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idle_delete_ttl str
- Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idle_start_time str
- The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTime String
- Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTtl String
- Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idleDeleteTtl String
- Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idleStartTime String
- The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
ManagedGroupConfigResponse, ManagedGroupConfigResponseArgs        
- InstanceGroupManagerName string
- The name of the Instance Group Manager for this group.
- InstanceGroupManagerUri string
- The partial URI to the instance group manager for this group. E.g. projects/my-project/regions/us-central1/instanceGroupManagers/my-igm.
- InstanceTemplateName string
- The name of the Instance Template used for the Managed Instance Group.
- InstanceGroupManagerName string
- The name of the Instance Group Manager for this group.
- InstanceGroupManagerUri string
- The partial URI to the instance group manager for this group. E.g. projects/my-project/regions/us-central1/instanceGroupManagers/my-igm.
- InstanceTemplateName string
- The name of the Instance Template used for the Managed Instance Group.
- instanceGroupManagerName String
- The name of the Instance Group Manager for this group.
- instanceGroupManagerUri String
- The partial URI to the instance group manager for this group. E.g. projects/my-project/regions/us-central1/instanceGroupManagers/my-igm.
- instanceTemplateName String
- The name of the Instance Template used for the Managed Instance Group.
- instanceGroupManagerName string
- The name of the Instance Group Manager for this group.
- instanceGroupManagerUri string
- The partial URI to the instance group manager for this group. E.g. projects/my-project/regions/us-central1/instanceGroupManagers/my-igm.
- instanceTemplateName string
- The name of the Instance Template used for the Managed Instance Group.
- instance_group_manager_name str
- The name of the Instance Group Manager for this group.
- instance_group_manager_uri str
- The partial URI to the instance group manager for this group. E.g. projects/my-project/regions/us-central1/instanceGroupManagers/my-igm.
- instance_template_name str
- The name of the Instance Template used for the Managed Instance Group.
- instanceGroupManagerName String
- The name of the Instance Group Manager for this group.
- instanceGroupManagerUri String
- The partial URI to the instance group manager for this group. E.g. projects/my-project/regions/us-central1/instanceGroupManagers/my-igm.
- instanceTemplateName String
- The name of the Instance Template used for the Managed Instance Group.
MetastoreConfig, MetastoreConfigArgs    
- DataprocMetastoreService string
- Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- DataprocMetastoreService string
- Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataprocMetastoreService String
- Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataprocMetastoreService string
- Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataproc_metastore_service str
- Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataprocMetastoreService String
- Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
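A minimal TypeScript sketch, assuming a pre-existing Dataproc Metastore service; all names are placeholders.
import * as google_native from "@pulumi/google-native";

const cluster = new google_native.dataproc.v1.Cluster("metastore-cluster", {
    clusterName: "with-metastore",
    region: "us-central1",
    config: {
        metastoreConfig: {
            // The service must already exist; this resource name is a placeholder.
            dataprocMetastoreService: "projects/my-project/locations/us-central1/services/my-metastore",
        },
    },
});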
MetastoreConfigResponse, MetastoreConfigResponseArgs      
- DataprocMetastoreService string
- Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- DataprocMetastoreService string
- Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataprocMetastoreService String
- Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataprocMetastoreService string
- Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataproc_metastore_service str
- Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataprocMetastoreService String
- Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
Metric, MetricArgs  
- MetricSource Pulumi.GoogleNative.Dataproc.V1.MetricMetricSource
- A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
- MetricOverrides List<string>
- Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted spark:driver:DAGScheduler:job.allJobs sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed hiveserver2:JVM:Memory:NonHeapMemoryUsage.used Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
- MetricSource MetricMetricSource
- A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
- MetricOverrides []string
- Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted spark:driver:DAGScheduler:job.allJobs sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed hiveserver2:JVM:Memory:NonHeapMemoryUsage.used Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
- metricSource MetricMetricSource
- A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
- metricOverrides List<String>
- Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted spark:driver:DAGScheduler:job.allJobs sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed hiveserver2:JVM:Memory:NonHeapMemoryUsage.used Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
- metricSource MetricMetricSource
- A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
- metricOverrides string[]
- Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted spark:driver:DAGScheduler:job.allJobs sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed hiveserver2:JVM:Memory:NonHeapMemoryUsage.used Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
- metric_source MetricMetricSource
- A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
- metric_overrides Sequence[str]
- Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted spark:driver:DAGScheduler:job.allJobs sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed hiveserver2:JVM:Memory:NonHeapMemoryUsage.used Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
- metricSource "METRIC_SOURCE_UNSPECIFIED" | "MONITORING_AGENT_DEFAULTS" | "HDFS" | "SPARK" | "YARN" | "SPARK_HISTORY_SERVER" | "HIVESERVER2" | "HIVEMETASTORE" | "FLINK"
- A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
- metricOverrides List<String>
- Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted spark:driver:DAGScheduler:job.allJobs sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed hiveserver2:JVM:Memory:NonHeapMemoryUsage.used Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
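Metric values are attached through the cluster config's dataprocMetricConfig.metrics list (see the DataprocMetricConfig section of this reference). A hedged TypeScript sketch with placeholder names:
import * as google_native from "@pulumi/google-native";

const monitored = new google_native.dataproc.v1.Cluster("monitored-cluster", {
    clusterName: "monitored",
    region: "us-central1",
    config: {
        dataprocMetricConfig: {
            metrics: [
                // Default Spark metrics, no overrides.
                { metricSource: "SPARK" },
                // Only the listed YARN metric is collected for this source.
                {
                    metricSource: "YARN",
                    metricOverrides: ["yarn:ResourceManager:QueueMetrics:AppsCompleted"],
                },
            ],
        },
    },
});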
MetricMetricSource, MetricMetricSourceArgs      
- MetricSourceUnspecified
- METRIC_SOURCE_UNSPECIFIED: Required unspecified metric source.
- MonitoringAgentDefaults
- MONITORING_AGENT_DEFAULTS: Monitoring agent metrics. If this source is enabled, Dataproc enables the monitoring agent in Compute Engine, and collects monitoring agent metrics, which are published with an agent.googleapis.com prefix.
- Hdfs
- HDFS: HDFS metric source.
- Spark
- SPARK: Spark metric source.
- Yarn
- YARN: YARN metric source.
- SparkHistoryServer
- SPARK_HISTORY_SERVER: Spark History Server metric source.
- Hiveserver2
- HIVESERVER2: Hiveserver2 metric source.
- Hivemetastore
- HIVEMETASTORE: Hivemetastore metric source.
- Flink
- FLINK: Flink metric source.
- MetricMetricSourceMetricSourceUnspecified
- METRIC_SOURCE_UNSPECIFIED: Required unspecified metric source.
- MetricMetricSourceMonitoringAgentDefaults
- MONITORING_AGENT_DEFAULTS: Monitoring agent metrics. If this source is enabled, Dataproc enables the monitoring agent in Compute Engine, and collects monitoring agent metrics, which are published with an agent.googleapis.com prefix.
- MetricMetricSourceHdfs
- HDFS: HDFS metric source.
- MetricMetricSourceSpark
- SPARK: Spark metric source.
- MetricMetricSourceYarn
- YARN: YARN metric source.
- MetricMetricSourceSparkHistoryServer
- SPARK_HISTORY_SERVER: Spark History Server metric source.
- MetricMetricSourceHiveserver2
- HIVESERVER2: Hiveserver2 metric source.
- MetricMetricSourceHivemetastore
- HIVEMETASTORE: Hivemetastore metric source.
- MetricMetricSourceFlink
- FLINK: Flink metric source.
- MetricSourceUnspecified
- METRIC_SOURCE_UNSPECIFIED: Required unspecified metric source.
- MonitoringAgentDefaults
- MONITORING_AGENT_DEFAULTS: Monitoring agent metrics. If this source is enabled, Dataproc enables the monitoring agent in Compute Engine, and collects monitoring agent metrics, which are published with an agent.googleapis.com prefix.
- Hdfs
- HDFS: HDFS metric source.
- Spark
- SPARK: Spark metric source.
- Yarn
- YARN: YARN metric source.
- SparkHistoryServer
- SPARK_HISTORY_SERVER: Spark History Server metric source.
- Hiveserver2
- HIVESERVER2: Hiveserver2 metric source.
- Hivemetastore
- HIVEMETASTORE: Hivemetastore metric source.
- Flink
- FLINK: Flink metric source.
- MetricSourceUnspecified
- METRIC_SOURCE_UNSPECIFIED: Required unspecified metric source.
- MonitoringAgentDefaults
- MONITORING_AGENT_DEFAULTS: Monitoring agent metrics. If this source is enabled, Dataproc enables the monitoring agent in Compute Engine, and collects monitoring agent metrics, which are published with an agent.googleapis.com prefix.
- Hdfs
- HDFS: HDFS metric source.
- Spark
- SPARK: Spark metric source.
- Yarn
- YARN: YARN metric source.
- SparkHistoryServer
- SPARK_HISTORY_SERVER: Spark History Server metric source.
- Hiveserver2
- HIVESERVER2: Hiveserver2 metric source.
- Hivemetastore
- HIVEMETASTORE: Hivemetastore metric source.
- Flink
- FLINK: Flink metric source.
- METRIC_SOURCE_UNSPECIFIED
- METRIC_SOURCE_UNSPECIFIED: Required unspecified metric source.
- MONITORING_AGENT_DEFAULTS
- MONITORING_AGENT_DEFAULTS: Monitoring agent metrics. If this source is enabled, Dataproc enables the monitoring agent in Compute Engine, and collects monitoring agent metrics, which are published with an agent.googleapis.com prefix.
- HDFS
- HDFS: HDFS metric source.
- SPARK
- SPARK: Spark metric source.
- YARN
- YARN: YARN metric source.
- SPARK_HISTORY_SERVER
- SPARK_HISTORY_SERVER: Spark History Server metric source.
- HIVESERVER2
- HIVESERVER2: Hiveserver2 metric source.
- HIVEMETASTORE
- HIVEMETASTORE: Hivemetastore metric source.
- FLINK
- FLINK: Flink metric source.
- "METRIC_SOURCE_UNSPECIFIED"
- METRIC_SOURCE_UNSPECIFIEDRequired unspecified metric source.
- "MONITORING_AGENT_DEFAULTS"
- MONITORING_AGENT_DEFAULTSMonitoring agent metrics. If this source is enabled, Dataproc enables the monitoring agent in Compute Engine, and collects monitoring agent metrics, which are published with an agent.googleapis.com prefix.
- "HDFS"
- HDFSHDFS metric source.
- "SPARK"
- SPARKSpark metric source.
- "YARN"
- YARNYARN metric source.
- "SPARK_HISTORY_SERVER"
- SPARK_HISTORY_SERVERSpark History Server metric source.
- "HIVESERVER2"
- HIVESERVER2Hiveserver2 metric source.
- "HIVEMETASTORE"
- HIVEMETASTOREhivemetastore metric source
- "FLINK"
- FLINKflink metric source
MetricResponse, MetricResponseArgs    
- MetricOverrides List<string>
- Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted spark:driver:DAGScheduler:job.allJobs sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed hiveserver2:JVM:Memory:NonHeapMemoryUsage.used Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
- MetricSource string
- A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
- MetricOverrides []string
- Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted spark:driver:DAGScheduler:job.allJobs sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed hiveserver2:JVM:Memory:NonHeapMemoryUsage.used Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
- MetricSource string
- A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
- metricOverrides List<String>
- Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted spark:driver:DAGScheduler:job.allJobs sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed hiveserver2:JVM:Memory:NonHeapMemoryUsage.used Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
- metricSource String
- A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
- metricOverrides string[]
- Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted spark:driver:DAGScheduler:job.allJobs sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed hiveserver2:JVM:Memory:NonHeapMemoryUsage.used Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
- metricSource string
- A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
- metric_overrides Sequence[str]
- Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted spark:driver:DAGScheduler:job.allJobs sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed hiveserver2:JVM:Memory:NonHeapMemoryUsage.used Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
- metric_source str
- A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
- metricOverrides List<String>
- Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted spark:driver:DAGScheduler:job.allJobs sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed hiveserver2:JVM:Memory:NonHeapMemoryUsage.used Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of the metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
- metricSource String
- A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
NamespacedGkeDeploymentTarget, NamespacedGkeDeploymentTargetArgs        
- ClusterNamespace string
- Optional. A namespace within the GKE cluster to deploy into.
- TargetGkeCluster string
- Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- ClusterNamespace string
- Optional. A namespace within the GKE cluster to deploy into.
- TargetGkeCluster string
- Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- clusterNamespace String
- Optional. A namespace within the GKE cluster to deploy into.
- targetGkeCluster String
- Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- clusterNamespace string
- Optional. A namespace within the GKE cluster to deploy into.
- targetGkeCluster string
- Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- cluster_namespace str
- Optional. A namespace within the GKE cluster to deploy into.
- target_gke_cluster str
- Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- clusterNamespace String
- Optional. A namespace within the GKE cluster to deploy into.
- targetGkeCluster String
- Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
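NamespacedGkeDeploymentTarget is set as a field of GkeClusterConfig (see that section of this reference). A minimal TypeScript fragment with placeholder names:
// Both values are placeholders; note the required cluster_id format.
const namespacedGkeDeploymentTarget = {
    targetGkeCluster: "projects/my-project/locations/us-central1/clusters/my-gke-cluster",
    clusterNamespace: "dataproc-namespace",
};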
NamespacedGkeDeploymentTargetResponse, NamespacedGkeDeploymentTargetResponseArgs          
- ClusterNamespace string
- Optional. A namespace within the GKE cluster to deploy into.
- TargetGkeCluster string
- Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- ClusterNamespace string
- Optional. A namespace within the GKE cluster to deploy into.
- TargetGkeCluster string
- Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- clusterNamespace String
- Optional. A namespace within the GKE cluster to deploy into.
- targetGkeCluster String
- Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- clusterNamespace string
- Optional. A namespace within the GKE cluster to deploy into.
- targetGkeCluster string
- Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- cluster_namespace str
- Optional. A namespace within the GKE cluster to deploy into.
- target_gke_cluster str
- Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- clusterNamespace String
- Optional. A namespace within the GKE cluster to deploy into.
- targetGkeCluster String
- Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
NodeGroup, NodeGroupArgs    
- Roles
List<Pulumi.GoogleNative.Dataproc.V1.NodeGroupRolesItem>
- Node group roles.
- Labels Dictionary<string, string>
- Optional. Node group labels. Label keys must consist of from 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of from 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
- Name string
- The Node group resource name (https://aip.dev/122).
- NodeGroupConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfig
- Optional. The node group instance group configuration.
- Roles
[]NodeGroupRolesItem
- Node group roles.
- Labels map[string]string
- Optional. Node group labels. Label keys must consist of from 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of from 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
- Name string
- The Node group resource name (https://aip.dev/122).
- NodeGroupConfig InstanceGroupConfig
- Optional. The node group instance group configuration.
- roles
List<NodeGroupRolesItem>
- Node group roles.
- labels Map<String,String>
- Optional. Node group labels. Label keys must consist of from 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of from 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
- name String
- The Node group resource name (https://aip.dev/122).
- nodeGroupConfig InstanceGroupConfig
- Optional. The node group instance group configuration.
- roles
NodeGroupRolesItem[]
- Node group roles.
- labels {[key: string]: string}
- Optional. Node group labels. Label keys must consist of from 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of from 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
- name string
- The Node group resource name (https://aip.dev/122).
- nodeGroupConfig InstanceGroupConfig
- Optional. The node group instance group configuration.
- roles
Sequence[NodeGroupRolesItem]
- Node group roles.
- labels Mapping[str, str]
- Optional. Node group labels. Label keys must consist of from 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of from 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
- name str
- The Node group resource name (https://aip.dev/122).
- node_group_config InstanceGroupConfig
- Optional. The node group instance group configuration.
- roles List<"ROLE_UNSPECIFIED" | "DRIVER">
- Node group roles.
- labels Map<String>
- Optional. Node group labels. Label keys must consist of from 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of from 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
- name String
- The Node group resource name (https://aip.dev/122).
- nodeGroupConfig Property Map
- Optional. The node group instance group configuration.
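NodeGroup values are supplied through the cluster config's auxiliaryNodeGroups list (see the AuxiliaryNodeGroup section of this reference). A sketch with placeholder sizing; DRIVER is the only non-default role in the enumeration above.
import * as google_native from "@pulumi/google-native";

const cluster = new google_native.dataproc.v1.Cluster("driver-pool-cluster", {
    clusterName: "driver-pool",
    region: "us-central1",
    config: {
        auxiliaryNodeGroups: [{
            // nodeGroupId is the AuxiliaryNodeGroup sibling field; the value is a placeholder.
            nodeGroupId: "driver-pool-1",
            nodeGroup: {
                roles: ["DRIVER"],
                labels: { workload: "spark-driver" },
                nodeGroupConfig: {
                    numInstances: 2,
                    machineTypeUri: "n1-standard-8",
                },
            },
        }],
    },
});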
NodeGroupAffinity, NodeGroupAffinityArgs      
- NodeGroupUri string
- The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
- NodeGroupUri string
- The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
- nodeGroupUri String
- The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
- nodeGroupUri string
- The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
- node_group_uri str
- The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
- nodeGroupUri String
- The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
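NodeGroupAffinity is nested under the cluster config's gceClusterConfig (see that section of this reference). A minimal TypeScript sketch using the short node group name form; all other names are placeholders.
import * as google_native from "@pulumi/google-native";

const soleTenant = new google_native.dataproc.v1.Cluster("sole-tenant-cluster", {
    clusterName: "sole-tenant",
    region: "us-central1",
    config: {
        gceClusterConfig: {
            nodeGroupAffinity: {
                // A full URL or partial URI is also accepted.
                nodeGroupUri: "node-group-1",
            },
        },
    },
});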
NodeGroupAffinityResponse, NodeGroupAffinityResponseArgs        
- NodeGroupUri string
- The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
- NodeGroupUri string
- The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
- nodeGroupUri String
- The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
- nodeGroupUri string
- The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
- node_group_uri str
- The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
- nodeGroupUri String
- The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
NodeGroupResponse, NodeGroupResponseArgs      
- Labels Dictionary<string, string>
- Optional. Node group labels. Label keys must consist of from 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of from 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
- Name string
- The Node group resource name (https://aip.dev/122).
- NodeGroupConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse
- Optional. The node group instance group configuration.
- Roles List<string>
- Node group roles.
- Labels map[string]string
- Optional. Node group labels. Label keys must consist of from 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of from 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
- Name string
- The Node group resource name (https://aip.dev/122).
- NodeGroupConfig InstanceGroupConfigResponse
- Optional. The node group instance group configuration.
- Roles []string
- Node group roles.
- labels Map<String,String>
- Optional. Node group labels. Label keys must consist of from 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of from 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
- name String
- The Node group resource name (https://aip.dev/122).
- nodeGroupConfig InstanceGroupConfigResponse
- Optional. The node group instance group configuration.
- roles List<String>
- Node group roles.
- labels {[key: string]: string}
- Optional. Node group labels. Label keys must consist of from 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of from 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
- name string
- The Node group resource name (https://aip.dev/122).
- nodeGroupConfig InstanceGroupConfigResponse
- Optional. The node group instance group configuration.
- roles string[]
- Node group roles.
- labels Mapping[str, str]
- Optional. Node group labels. Label keys must consist of from 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of from 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
- name str
- The Node group resource name (https://aip.dev/122).
- node_group_config InstanceGroupConfigResponse
- Optional. The node group instance group configuration.
- roles Sequence[str]
- Node group roles.
- labels Map<String>
- Optional. Node group labels. Label keys must consist of from 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of from 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
- name String
- The Node group resource name (https://aip.dev/122).
- nodeGroupConfig Property Map
- Optional. The node group instance group configuration.
- roles List<String>
- Node group roles.
NodeGroupRolesItem, NodeGroupRolesItemArgs        
- RoleUnspecified
- ROLE_UNSPECIFIED: Required unspecified role.
- Driver
- DRIVER: Job drivers run on the node pool.
- NodeGroupRolesItemRoleUnspecified
- ROLE_UNSPECIFIED: Required unspecified role.
- NodeGroupRolesItemDriver
- DRIVER: Job drivers run on the node pool.
- RoleUnspecified
- ROLE_UNSPECIFIED: Required unspecified role.
- Driver
- DRIVER: Job drivers run on the node pool.
- RoleUnspecified
- ROLE_UNSPECIFIED: Required unspecified role.
- Driver
- DRIVER: Job drivers run on the node pool.
- ROLE_UNSPECIFIED
- ROLE_UNSPECIFIED: Required unspecified role.
- DRIVER
- DRIVER: Job drivers run on the node pool.
- "ROLE_UNSPECIFIED"
- ROLE_UNSPECIFIED: Required unspecified role.
- "DRIVER"
- DRIVER: Job drivers run on the node pool.
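As a hedged illustration, the TypeScript sketch below adds an auxiliary node group that hosts job drivers (the DRIVER role above). The nodeGroupId and label values are placeholders, and the shape assumes the AuxiliaryNodeGroup and NodeGroup inputs documented on this page.
import * as google_native from "@pulumi/google-native";

// Sketch: a driver pool attached to the cluster as an auxiliary node group.
const cluster = new google_native.dataproc.v1.Cluster("driver-pool-cluster", {
    region: "us-central1",               // placeholder
    clusterName: "driver-pool-cluster",  // placeholder
    config: {
        auxiliaryNodeGroups: [{
            nodeGroupId: "driver-pool",  // placeholder id
            nodeGroup: {
                roles: ["DRIVER"],       // job drivers run on this node pool
                labels: { "team": "data-platform" },  // placeholder; max 32 labels
            },
        }],
    },
});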
NodeInitializationAction, NodeInitializationActionArgs      
- ExecutableFile string
- Cloud Storage URI of executable file.
- ExecutionTimeout string
- Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period.
- ExecutableFile string
- Cloud Storage URI of executable file.
- ExecutionTimeout string
- Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period.
- executableFile String
- Cloud Storage URI of executable file.
- executionTimeout String
- Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period.
- executableFile string
- Cloud Storage URI of executable file.
- executionTimeout string
- Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period.
- executable_file str
- Cloud Storage URI of executable file.
- execution_timeout str
- Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period.
- executableFile String
- Cloud Storage URI of executable file.
- executionTimeout String
- Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period.
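For example, a TypeScript sketch that runs a bootstrap script on every node and fails creation if it does not finish within five minutes; it assumes initialization actions are listed under config.initializationActions, and the gs:// path is a placeholder.
import * as google_native from "@pulumi/google-native";

// Sketch: one initialization action with a tightened timeout.
const cluster = new google_native.dataproc.v1.Cluster("init-action-cluster", {
    region: "us-central1",               // placeholder
    clusterName: "init-action-cluster",  // placeholder
    config: {
        initializationActions: [{
            executableFile: "gs://my-bucket/scripts/bootstrap.sh",  // placeholder URI
            executionTimeout: "300s",  // JSON Duration format; default is 10 minutes
        }],
    },
});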
NodeInitializationActionResponse, NodeInitializationActionResponseArgs        
- ExecutableFile string
- Cloud Storage URI of executable file.
- ExecutionTimeout string
- Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period.
- ExecutableFile string
- Cloud Storage URI of executable file.
- ExecutionTimeout string
- Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period.
- executableFile String
- Cloud Storage URI of executable file.
- executionTimeout String
- Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period.
- executableFile string
- Cloud Storage URI of executable file.
- executionTimeout string
- Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period.
- executable_file str
- Cloud Storage URI of executable file.
- execution_timeout str
- Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period.
- executableFile String
- Cloud Storage URI of executable file.
- executionTimeout String
- Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at the end of the timeout period.
ReservationAffinity, ReservationAffinityArgs    
- ConsumeReservationType Pulumi.GoogleNative.Dataproc.V1.ReservationAffinityConsumeReservationType
- Optional. Type of reservation to consume
- Key string
- Optional. Corresponds to the label key of reservation resource.
- Values List<string>
- Optional. Corresponds to the label values of reservation resource.
- ConsumeReservationType ReservationAffinityConsumeReservationType
- Optional. Type of reservation to consume
- Key string
- Optional. Corresponds to the label key of reservation resource.
- Values []string
- Optional. Corresponds to the label values of reservation resource.
- consumeReservationType ReservationAffinityConsumeReservationType
- Optional. Type of reservation to consume
- key String
- Optional. Corresponds to the label key of reservation resource.
- values List<String>
- Optional. Corresponds to the label values of reservation resource.
- consumeReservationType ReservationAffinityConsumeReservationType
- Optional. Type of reservation to consume
- key string
- Optional. Corresponds to the label key of reservation resource.
- values string[]
- Optional. Corresponds to the label values of reservation resource.
- consume_reservation_type ReservationAffinityConsumeReservationType
- Optional. Type of reservation to consume
- key str
- Optional. Corresponds to the label key of reservation resource.
- values Sequence[str]
- Optional. Corresponds to the label values of reservation resource.
- consumeReservationType "TYPE_UNSPECIFIED" | "NO_RESERVATION" | "ANY_RESERVATION" | "SPECIFIC_RESERVATION"
- Optional. Type of reservation to consume
- key String
- Optional. Corresponds to the label key of reservation resource.
- values List<String>
- Optional. Corresponds to the label values of reservation resource.
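A minimal TypeScript sketch consuming a specific reservation follows. It assumes reservationAffinity nests under config.gceClusterConfig; the reservation key shown follows the usual Compute Engine convention and is, like the reservation name, illustrative.
import * as google_native from "@pulumi/google-native";

// Sketch: require cluster VMs to draw from one named reservation.
const cluster = new google_native.dataproc.v1.Cluster("reserved-cluster", {
    region: "us-central1",            // placeholder
    clusterName: "reserved-cluster",  // placeholder
    config: {
        gceClusterConfig: {
            reservationAffinity: {
                consumeReservationType: "SPECIFIC_RESERVATION",
                // SPECIFIC_RESERVATION requires key/values naming the reservation.
                key: "compute.googleapis.com/reservation-name",  // assumed convention
                values: ["my-reservation"],  // placeholder reservation name
            },
        },
    },
});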
ReservationAffinityConsumeReservationType, ReservationAffinityConsumeReservationTypeArgs          
- TypeUnspecified
- TYPE_UNSPECIFIED
- NoReservation
- NO_RESERVATION: Do not consume from any allocated capacity.
- AnyReservation
- ANY_RESERVATION: Consume any reservation available.
- SpecificReservation
- SPECIFIC_RESERVATION: Must consume from a specific reservation. Must specify key value fields for specifying the reservations.
- ReservationAffinityConsumeReservationTypeTypeUnspecified
- TYPE_UNSPECIFIED
- ReservationAffinityConsumeReservationTypeNoReservation
- NO_RESERVATION: Do not consume from any allocated capacity.
- ReservationAffinityConsumeReservationTypeAnyReservation
- ANY_RESERVATION: Consume any reservation available.
- ReservationAffinityConsumeReservationTypeSpecificReservation
- SPECIFIC_RESERVATION: Must consume from a specific reservation. Must specify key value fields for specifying the reservations.
- TypeUnspecified
- TYPE_UNSPECIFIED
- NoReservation
- NO_RESERVATION: Do not consume from any allocated capacity.
- AnyReservation
- ANY_RESERVATION: Consume any reservation available.
- SpecificReservation
- SPECIFIC_RESERVATION: Must consume from a specific reservation. Must specify key value fields for specifying the reservations.
- TypeUnspecified
- TYPE_UNSPECIFIED
- NoReservation
- NO_RESERVATION: Do not consume from any allocated capacity.
- AnyReservation
- ANY_RESERVATION: Consume any reservation available.
- SpecificReservation
- SPECIFIC_RESERVATION: Must consume from a specific reservation. Must specify key value fields for specifying the reservations.
- TYPE_UNSPECIFIED
- TYPE_UNSPECIFIED
- NO_RESERVATION
- NO_RESERVATION: Do not consume from any allocated capacity.
- ANY_RESERVATION
- ANY_RESERVATION: Consume any reservation available.
- SPECIFIC_RESERVATION
- SPECIFIC_RESERVATION: Must consume from a specific reservation. Must specify key value fields for specifying the reservations.
- "TYPE_UNSPECIFIED"
- TYPE_UNSPECIFIED
- "NO_RESERVATION"
- NO_RESERVATION: Do not consume from any allocated capacity.
- "ANY_RESERVATION"
- ANY_RESERVATION: Consume any reservation available.
- "SPECIFIC_RESERVATION"
- SPECIFIC_RESERVATION: Must consume from a specific reservation. Must specify key value fields for specifying the reservations.
ReservationAffinityResponse, ReservationAffinityResponseArgs      
- ConsumeReservationType string
- Optional. Type of reservation to consume
- Key string
- Optional. Corresponds to the label key of reservation resource.
- Values List<string>
- Optional. Corresponds to the label values of reservation resource.
- ConsumeReservationType string
- Optional. Type of reservation to consume
- Key string
- Optional. Corresponds to the label key of reservation resource.
- Values []string
- Optional. Corresponds to the label values of reservation resource.
- consumeReservationType String
- Optional. Type of reservation to consume
- key String
- Optional. Corresponds to the label key of reservation resource.
- values List<String>
- Optional. Corresponds to the label values of reservation resource.
- consumeReservationType string
- Optional. Type of reservation to consume
- key string
- Optional. Corresponds to the label key of reservation resource.
- values string[]
- Optional. Corresponds to the label values of reservation resource.
- consume_reservation_type str
- Optional. Type of reservation to consume
- key str
- Optional. Corresponds to the label key of reservation resource.
- values Sequence[str]
- Optional. Corresponds to the label values of reservation resource.
- consumeReservationType String
- Optional. Type of reservation to consume
- key String
- Optional. Corresponds to the label key of reservation resource.
- values List<String>
- Optional. Corresponds to the label values of reservation resource.
SecurityConfig, SecurityConfigArgs    
- IdentityConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.IdentityConfig
- Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
- KerberosConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.KerberosConfig
- Optional. Kerberos related configuration.
- IdentityConfig IdentityConfig 
- Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
- KerberosConfig KerberosConfig 
- Optional. Kerberos related configuration.
- identityConfig IdentityConfig 
- Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
- kerberosConfig KerberosConfig 
- Optional. Kerberos related configuration.
- identityConfig IdentityConfig 
- Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
- kerberosConfig KerberosConfig 
- Optional. Kerberos related configuration.
- identity_config IdentityConfig 
- Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
- kerberos_config KerberosConfig 
- Optional. Kerberos related configuration.
- identityConfig Property Map
- Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
- kerberosConfig Property Map
- Optional. Kerberos related configuration.
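As a hedged sketch in TypeScript, the snippet below enables Kerberos through securityConfig. The kerberosConfig fields used (enableKerberos, kmsKeyUri, rootPrincipalPasswordUri) come from the KerberosConfig type documented elsewhere on this page; every URI is a placeholder.
import * as google_native from "@pulumi/google-native";

// Sketch: Hadoop Secure Mode via Kerberos.
const cluster = new google_native.dataproc.v1.Cluster("kerberized-cluster", {
    region: "us-central1",              // placeholder
    clusterName: "kerberized-cluster",  // placeholder
    config: {
        securityConfig: {
            kerberosConfig: {
                enableKerberos: true,
                // Placeholder KMS key used to decrypt the root password below.
                kmsKeyUri: "projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key",
                // Placeholder Cloud Storage URI of the KMS-encrypted root principal password.
                rootPrincipalPasswordUri: "gs://my-bucket/kerberos-root-password.encrypted",
            },
        },
    },
});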
SecurityConfigResponse, SecurityConfigResponseArgs      
- IdentityConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.IdentityConfigResponse
- Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
- KerberosConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.KerberosConfigResponse
- Optional. Kerberos related configuration.
- IdentityConfig IdentityConfigResponse
- Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
- KerberosConfig KerberosConfigResponse
- Optional. Kerberos related configuration.
- identityConfig IdentityConfigResponse
- Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
- kerberosConfig KerberosConfigResponse
- Optional. Kerberos related configuration.
- identityConfig IdentityConfigResponse
- Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
- kerberosConfig KerberosConfigResponse
- Optional. Kerberos related configuration.
- identity_config IdentityConfigResponse
- Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
- kerberos_config KerberosConfigResponse
- Optional. Kerberos related configuration.
- identityConfig Property Map
- Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
- kerberosConfig Property Map
- Optional. Kerberos related configuration.
ShieldedInstanceConfig, ShieldedInstanceConfigArgs      
- EnableIntegrityMonitoring bool
- Optional. Defines whether instances have integrity monitoring enabled.
- EnableSecureBoot bool
- Optional. Defines whether instances have Secure Boot enabled.
- EnableVtpm bool
- Optional. Defines whether instances have the vTPM enabled.
- EnableIntegrityMonitoring bool
- Optional. Defines whether instances have integrity monitoring enabled.
- EnableSecureBoot bool
- Optional. Defines whether instances have Secure Boot enabled.
- EnableVtpm bool
- Optional. Defines whether instances have the vTPM enabled.
- enableIntegrityMonitoring Boolean
- Optional. Defines whether instances have integrity monitoring enabled.
- enableSecureBoot Boolean
- Optional. Defines whether instances have Secure Boot enabled.
- enableVtpm Boolean
- Optional. Defines whether instances have the vTPM enabled.
- enableIntegrityMonitoring boolean
- Optional. Defines whether instances have integrity monitoring enabled.
- enableSecureBoot boolean
- Optional. Defines whether instances have Secure Boot enabled.
- enableVtpm boolean
- Optional. Defines whether instances have the vTPM enabled.
- enable_integrity_monitoring bool
- Optional. Defines whether instances have integrity monitoring enabled.
- enable_secure_boot bool
- Optional. Defines whether instances have Secure Boot enabled.
- enable_vtpm bool
- Optional. Defines whether instances have the vTPM enabled.
- enableIntegrityMonitoring Boolean
- Optional. Defines whether instances have integrity monitoring enabled.
- enableSecureBoot Boolean
- Optional. Defines whether instances have Secure Boot enabled.
- enableVtpm Boolean
- Optional. Defines whether instances have the vTPM enabled.
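For illustration, a TypeScript sketch that turns on all three Shielded VM features, assuming shieldedInstanceConfig is nested under config.gceClusterConfig:
import * as google_native from "@pulumi/google-native";

// Sketch: run every cluster VM as a Shielded VM.
const cluster = new google_native.dataproc.v1.Cluster("shielded-cluster", {
    region: "us-central1",            // placeholder
    clusterName: "shielded-cluster",  // placeholder
    config: {
        gceClusterConfig: {
            shieldedInstanceConfig: {
                enableSecureBoot: true,
                enableVtpm: true,
                enableIntegrityMonitoring: true,
            },
        },
    },
});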
ShieldedInstanceConfigResponse, ShieldedInstanceConfigResponseArgs        
- EnableIntegrityMonitoring bool
- Optional. Defines whether instances have integrity monitoring enabled.
- EnableSecureBoot bool
- Optional. Defines whether instances have Secure Boot enabled.
- EnableVtpm bool
- Optional. Defines whether instances have the vTPM enabled.
- EnableIntegrityMonitoring bool
- Optional. Defines whether instances have integrity monitoring enabled.
- EnableSecureBoot bool
- Optional. Defines whether instances have Secure Boot enabled.
- EnableVtpm bool
- Optional. Defines whether instances have the vTPM enabled.
- enableIntegrityMonitoring Boolean
- Optional. Defines whether instances have integrity monitoring enabled.
- enableSecureBoot Boolean
- Optional. Defines whether instances have Secure Boot enabled.
- enableVtpm Boolean
- Optional. Defines whether instances have the vTPM enabled.
- enableIntegrityMonitoring boolean
- Optional. Defines whether instances have integrity monitoring enabled.
- enableSecureBoot boolean
- Optional. Defines whether instances have Secure Boot enabled.
- enableVtpm boolean
- Optional. Defines whether instances have the vTPM enabled.
- enable_integrity_monitoring bool
- Optional. Defines whether instances have integrity monitoring enabled.
- enable_secure_boot bool
- Optional. Defines whether instances have Secure Boot enabled.
- enable_vtpm bool
- Optional. Defines whether instances have the vTPM enabled.
- enableIntegrityMonitoring Boolean
- Optional. Defines whether instances have integrity monitoring enabled.
- enableSecureBoot Boolean
- Optional. Defines whether instances have Secure Boot enabled.
- enableVtpm Boolean
- Optional. Defines whether instances have the vTPM enabled.
SoftwareConfig, SoftwareConfigArgs    
- ImageVersion string
- Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- OptionalComponents List<Pulumi.GoogleNative.Dataproc.V1.SoftwareConfigOptionalComponentsItem>
- Optional. The set of components to activate on the cluster.
- Properties Dictionary<string, string>
- Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- ImageVersion string
- Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- OptionalComponents []SoftwareConfigOptionalComponentsItem
- Optional. The set of components to activate on the cluster.
- Properties map[string]string
- Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- imageVersion String
- Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optionalComponents List<SoftwareConfigOptionalComponentsItem>
- Optional. The set of components to activate on the cluster.
- properties Map<String,String>
- Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- imageVersion string
- Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optionalComponents SoftwareConfigOptionalComponentsItem[]
- Optional. The set of components to activate on the cluster.
- properties {[key: string]: string}
- Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- image_version str
- Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optional_components Sequence[SoftwareConfigOptionalComponentsItem]
- Optional. The set of components to activate on the cluster.
- properties Mapping[str, str]
- Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- imageVersion String
- Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optionalComponents List<"COMPONENT_UNSPECIFIED" | "ANACONDA" | "DOCKER" | "DRUID" | "FLINK" | "HBASE" | "HIVE_WEBHCAT" | "HUDI" | "JUPYTER" | "PRESTO" | "TRINO" | "RANGER" | "SOLR" | "ZEPPELIN" | "ZOOKEEPER">
- Optional. The set of components to activate on the cluster.
- properties Map<String>
- Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
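A short TypeScript sketch tying these fields together: a pinned image version, two optional components, and one daemon property. The image version and property value are placeholders.
import * as google_native from "@pulumi/google-native";

// Sketch: pin the image, activate components, and tune spark-defaults.conf.
const cluster = new google_native.dataproc.v1.Cluster("tuned-cluster", {
    region: "us-central1",         // placeholder
    clusterName: "tuned-cluster",  // placeholder
    config: {
        softwareConfig: {
            imageVersion: "2.1",  // placeholder; must be a supported Dataproc version
            optionalComponents: ["JUPYTER", "ZOOKEEPER"],
            properties: {
                // prefix:property format; "spark:" maps to spark-defaults.conf.
                "spark:spark.executor.memory": "4g",  // placeholder value
            },
        },
    },
});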
SoftwareConfigOptionalComponentsItem, SoftwareConfigOptionalComponentsItemArgs          
- ComponentUnspecified
- COMPONENT_UNSPECIFIED: Unspecified component. Specifying this will cause Cluster creation to fail.
- Anaconda
- ANACONDA: The Anaconda python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.
- Docker
- DOCKER: Docker.
- Druid
- DRUID: The Druid query engine. (alpha)
- Flink
- FLINK: Flink.
- Hbase
- HBASE: HBase. (beta)
- HiveWebhcat
- HIVE_WEBHCAT: The Hive Web HCatalog (the REST service for accessing HCatalog).
- Hudi
- HUDI: Hudi.
- Jupyter
- JUPYTER: The Jupyter Notebook.
- Presto
- PRESTO: The Presto query engine.
- Trino
- TRINO: The Trino query engine.
- Ranger
- RANGER: The Ranger service.
- Solr
- SOLR: The Solr service.
- Zeppelin
- ZEPPELIN: The Zeppelin notebook.
- Zookeeper
- ZOOKEEPER: The Zookeeper service.
- SoftwareConfigOptionalComponentsItemComponentUnspecified
- COMPONENT_UNSPECIFIED: Unspecified component. Specifying this will cause Cluster creation to fail.
- SoftwareConfigOptionalComponentsItemAnaconda
- ANACONDA: The Anaconda python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.
- SoftwareConfigOptionalComponentsItemDocker
- DOCKER: Docker.
- SoftwareConfigOptionalComponentsItemDruid
- DRUID: The Druid query engine. (alpha)
- SoftwareConfigOptionalComponentsItemFlink
- FLINK: Flink.
- SoftwareConfigOptionalComponentsItemHbase
- HBASE: HBase. (beta)
- SoftwareConfigOptionalComponentsItemHiveWebhcat
- HIVE_WEBHCAT: The Hive Web HCatalog (the REST service for accessing HCatalog).
- SoftwareConfigOptionalComponentsItemHudi
- HUDI: Hudi.
- SoftwareConfigOptionalComponentsItemJupyter
- JUPYTER: The Jupyter Notebook.
- SoftwareConfigOptionalComponentsItemPresto
- PRESTO: The Presto query engine.
- SoftwareConfigOptionalComponentsItemTrino
- TRINO: The Trino query engine.
- SoftwareConfigOptionalComponentsItemRanger
- RANGER: The Ranger service.
- SoftwareConfigOptionalComponentsItemSolr
- SOLR: The Solr service.
- SoftwareConfigOptionalComponentsItemZeppelin
- ZEPPELIN: The Zeppelin notebook.
- SoftwareConfigOptionalComponentsItemZookeeper
- ZOOKEEPER: The Zookeeper service.
- ComponentUnspecified
- COMPONENT_UNSPECIFIED: Unspecified component. Specifying this will cause Cluster creation to fail.
- Anaconda
- ANACONDA: The Anaconda python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.
- Docker
- DOCKER: Docker.
- Druid
- DRUID: The Druid query engine. (alpha)
- Flink
- FLINK: Flink.
- Hbase
- HBASE: HBase. (beta)
- HiveWebhcat
- HIVE_WEBHCAT: The Hive Web HCatalog (the REST service for accessing HCatalog).
- Hudi
- HUDI: Hudi.
- Jupyter
- JUPYTER: The Jupyter Notebook.
- Presto
- PRESTO: The Presto query engine.
- Trino
- TRINO: The Trino query engine.
- Ranger
- RANGER: The Ranger service.
- Solr
- SOLR: The Solr service.
- Zeppelin
- ZEPPELIN: The Zeppelin notebook.
- Zookeeper
- ZOOKEEPER: The Zookeeper service.
- ComponentUnspecified
- COMPONENT_UNSPECIFIED: Unspecified component. Specifying this will cause Cluster creation to fail.
- Anaconda
- ANACONDA: The Anaconda python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.
- Docker
- DOCKER: Docker.
- Druid
- DRUID: The Druid query engine. (alpha)
- Flink
- FLINK: Flink.
- Hbase
- HBASE: HBase. (beta)
- HiveWebhcat
- HIVE_WEBHCAT: The Hive Web HCatalog (the REST service for accessing HCatalog).
- Hudi
- HUDI: Hudi.
- Jupyter
- JUPYTER: The Jupyter Notebook.
- Presto
- PRESTO: The Presto query engine.
- Trino
- TRINO: The Trino query engine.
- Ranger
- RANGER: The Ranger service.
- Solr
- SOLR: The Solr service.
- Zeppelin
- ZEPPELIN: The Zeppelin notebook.
- Zookeeper
- ZOOKEEPER: The Zookeeper service.
- COMPONENT_UNSPECIFIED
- COMPONENT_UNSPECIFIED: Unspecified component. Specifying this will cause Cluster creation to fail.
- ANACONDA
- ANACONDA: The Anaconda python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.
- DOCKER
- DOCKER: Docker.
- DRUID
- DRUID: The Druid query engine. (alpha)
- FLINK
- FLINK: Flink.
- HBASE
- HBASE: HBase. (beta)
- HIVE_WEBHCAT
- HIVE_WEBHCAT: The Hive Web HCatalog (the REST service for accessing HCatalog).
- HUDI
- HUDI: Hudi.
- JUPYTER
- JUPYTER: The Jupyter Notebook.
- PRESTO
- PRESTO: The Presto query engine.
- TRINO
- TRINO: The Trino query engine.
- RANGER
- RANGER: The Ranger service.
- SOLR
- SOLR: The Solr service.
- ZEPPELIN
- ZEPPELIN: The Zeppelin notebook.
- ZOOKEEPER
- ZOOKEEPER: The Zookeeper service.
- "COMPONENT_UNSPECIFIED"
- COMPONENT_UNSPECIFIED: Unspecified component. Specifying this will cause Cluster creation to fail.
- "ANACONDA"
- ANACONDA: The Anaconda python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.
- "DOCKER"
- DOCKER: Docker.
- "DRUID"
- DRUID: The Druid query engine. (alpha)
- "FLINK"
- FLINK: Flink.
- "HBASE"
- HBASE: HBase. (beta)
- "HIVE_WEBHCAT"
- HIVE_WEBHCAT: The Hive Web HCatalog (the REST service for accessing HCatalog).
- "HUDI"
- HUDI: Hudi.
- "JUPYTER"
- JUPYTER: The Jupyter Notebook.
- "PRESTO"
- PRESTO: The Presto query engine.
- "TRINO"
- TRINO: The Trino query engine.
- "RANGER"
- RANGER: The Ranger service.
- "SOLR"
- SOLR: The Solr service.
- "ZEPPELIN"
- ZEPPELIN: The Zeppelin notebook.
- "ZOOKEEPER"
- ZOOKEEPER: The Zookeeper service.
SoftwareConfigResponse, SoftwareConfigResponseArgs      
- ImageVersion string
- Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- OptionalComponents List<string>
- Optional. The set of components to activate on the cluster.
- Properties Dictionary<string, string>
- Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- ImageVersion string
- Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- OptionalComponents []string
- Optional. The set of components to activate on the cluster.
- Properties map[string]string
- Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- imageVersion String
- Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optionalComponents List<String>
- Optional. The set of components to activate on the cluster.
- properties Map<String,String>
- Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- imageVersion string
- Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optionalComponents string[]
- Optional. The set of components to activate on the cluster.
- properties {[key: string]: string}
- Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- image_version str
- Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optional_components Sequence[str]
- Optional. The set of components to activate on the cluster.
- properties Mapping[str, str]
- Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- imageVersion String
- Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optionalComponents List<String>
- Optional. The set of components to activate on the cluster.
- properties Map<String>
- Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
SparkHistoryServerConfig, SparkHistoryServerConfigArgs        
- DataprocCluster string
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- DataprocCluster string
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataprocCluster String
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataprocCluster string
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataproc_cluster str
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataprocCluster String
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
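As a hedged fragment (TypeScript), the sketch below builds just this input. The assumption that it is then passed as auxiliaryServicesConfig.sparkHistoryServerConfig inside virtualClusterConfig comes from the AuxiliaryServicesConfig type, and both the generated input-type path and the cluster path are illustrative.
import * as google_native from "@pulumi/google-native";

// Sketch: reuse an existing cluster as the Spark History Server for a workload.
const sparkHistoryServer: google_native.types.input.dataproc.v1.SparkHistoryServerConfigArgs = {
    // Placeholder resource name of the existing history-server cluster.
    dataprocCluster: "projects/my-project/regions/us-central1/clusters/history-server",
};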
SparkHistoryServerConfigResponse, SparkHistoryServerConfigResponseArgs          
- DataprocCluster string
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- DataprocCluster string
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataprocCluster String
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataprocCluster string
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataproc_cluster str
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataprocCluster String
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
StartupConfig, StartupConfigArgs    
- RequiredRegistrationFraction double
- Optional. The config setting to enable cluster creation/update to succeed only after required_registration_fraction of instances are up and running. This configuration is currently applicable only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This includes instance creation, agent registration, and service registration (if enabled).
- RequiredRegistrationFraction float64
- Optional. The config setting to enable cluster creation/update to succeed only after required_registration_fraction of instances are up and running. This configuration is currently applicable only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This includes instance creation, agent registration, and service registration (if enabled).
- requiredRegistrationFraction Double
- Optional. The config setting to enable cluster creation/update to succeed only after required_registration_fraction of instances are up and running. This configuration is currently applicable only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This includes instance creation, agent registration, and service registration (if enabled).
- requiredRegistrationFraction number
- Optional. The config setting to enable cluster creation/update to succeed only after required_registration_fraction of instances are up and running. This configuration is currently applicable only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This includes instance creation, agent registration, and service registration (if enabled).
- required_registration_fraction float
- Optional. The config setting to enable cluster creation/update to succeed only after required_registration_fraction of instances are up and running. This configuration is currently applicable only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This includes instance creation, agent registration, and service registration (if enabled).
- requiredRegistrationFraction Number
- Optional. The config setting to enable cluster creation/update to succeed only after required_registration_fraction of instances are up and running. This configuration is currently applicable only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This includes instance creation, agent registration, and service registration (if enabled).
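A hedged TypeScript sketch follows; it assumes startupConfig hangs off an instance group (here the secondary workers, currently the only group the setting applies to), and the instance count is a placeholder.
import * as google_native from "@pulumi/google-native";

// Sketch: creation succeeds only once 90% of secondary workers have registered.
const cluster = new google_native.dataproc.v1.Cluster("gated-cluster", {
    region: "us-central1",         // placeholder
    clusterName: "gated-cluster",  // placeholder
    config: {
        secondaryWorkerConfig: {
            numInstances: 10,  // placeholder
            startupConfig: {
                requiredRegistrationFraction: 0.9,
            },
        },
    },
});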
StartupConfigResponse, StartupConfigResponseArgs      
- RequiredRegistrationFraction double
- Optional. The config setting to enable cluster creation/update to succeed only after required_registration_fraction of instances are up and running. This configuration is currently applicable only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This includes instance creation, agent registration, and service registration (if enabled).
- RequiredRegistrationFraction float64
- Optional. The config setting to enable cluster creation/update to succeed only after required_registration_fraction of instances are up and running. This configuration is currently applicable only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This includes instance creation, agent registration, and service registration (if enabled).
- requiredRegistrationFraction Double
- Optional. The config setting to enable cluster creation/update to succeed only after required_registration_fraction of instances are up and running. This configuration is currently applicable only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This includes instance creation, agent registration, and service registration (if enabled).
- requiredRegistrationFraction number
- Optional. The config setting to enable cluster creation/update to succeed only after required_registration_fraction of instances are up and running. This configuration is currently applicable only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This includes instance creation, agent registration, and service registration (if enabled).
- required_registration_fraction float
- Optional. The config setting to enable cluster creation/update to succeed only after required_registration_fraction of instances are up and running. This configuration is currently applicable only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This includes instance creation, agent registration, and service registration (if enabled).
- requiredRegistrationFraction Number
- Optional. The config setting to enable cluster creation/update to succeed only after required_registration_fraction of instances are up and running. This configuration is currently applicable only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This includes instance creation, agent registration, and service registration (if enabled).
VirtualClusterConfig, VirtualClusterConfigArgs      
- KubernetesClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.KubernetesClusterConfig
- The configuration for running the Dataproc cluster on Kubernetes.
- AuxiliaryServicesConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.AuxiliaryServicesConfig
- Optional. Configuration of auxiliary services used by this cluster.
- StagingBucket string
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- KubernetesClusterConfig KubernetesClusterConfig
- The configuration for running the Dataproc cluster on Kubernetes.
- AuxiliaryServicesConfig AuxiliaryServicesConfig
- Optional. Configuration of auxiliary services used by this cluster.
- StagingBucket string
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- kubernetesClusterConfig KubernetesClusterConfig
- The configuration for running the Dataproc cluster on Kubernetes.
- auxiliaryServicesConfig AuxiliaryServicesConfig
- Optional. Configuration of auxiliary services used by this cluster.
- stagingBucket String
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- kubernetesClusterConfig KubernetesClusterConfig
- The configuration for running the Dataproc cluster on Kubernetes.
- auxiliaryServicesConfig AuxiliaryServicesConfig
- Optional. Configuration of auxiliary services used by this cluster.
- stagingBucket string
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- kubernetes_cluster_config KubernetesClusterConfig
- The configuration for running the Dataproc cluster on Kubernetes.
- auxiliary_services_config AuxiliaryServicesConfig
- Optional. Configuration of auxiliary services used by this cluster.
- staging_bucket str
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- kubernetesClusterConfig Property Map
- The configuration for running the Dataproc cluster on Kubernetes.
- auxiliaryServicesConfig Property Map
- Optional. Configuration of auxiliary services used by this cluster.
- stagingBucket String
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
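To close, a minimal TypeScript sketch of a virtual cluster on an existing GKE cluster. The GKE cluster path and bucket name are placeholders, and a real deployment typically also needs node pool targets and a Kubernetes software configuration, which are omitted here.
import * as google_native from "@pulumi/google-native";

// Sketch: a Dataproc virtual cluster backed by GKE.
const virtualCluster = new google_native.dataproc.v1.Cluster("gke-virtual-cluster", {
    region: "us-central1",               // placeholder
    clusterName: "gke-virtual-cluster",  // placeholder
    virtualClusterConfig: {
        stagingBucket: "my-staging-bucket",  // bucket name, not a gs:// URI
        kubernetesClusterConfig: {
            gkeClusterConfig: {
                // Placeholder path to an existing GKE cluster.
                gkeClusterTarget: "projects/my-project/locations/us-central1/clusters/my-gke-cluster",
            },
        },
    },
});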
VirtualClusterConfigResponse, VirtualClusterConfigResponseArgs        
- AuxiliaryServicesConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.AuxiliaryServicesConfigResponse
- Optional. Configuration of auxiliary services used by this cluster.
- KubernetesClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.KubernetesClusterConfigResponse
- The configuration for running the Dataproc cluster on Kubernetes.
- StagingBucket string
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- AuxiliaryServicesConfig AuxiliaryServicesConfigResponse
- Optional. Configuration of auxiliary services used by this cluster.
- KubernetesClusterConfig KubernetesClusterConfigResponse
- The configuration for running the Dataproc cluster on Kubernetes.
- StagingBucket string
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- auxiliaryServicesConfig AuxiliaryServicesConfigResponse
- Optional. Configuration of auxiliary services used by this cluster.
- kubernetesClusterConfig KubernetesClusterConfigResponse
- The configuration for running the Dataproc cluster on Kubernetes.
- stagingBucket String
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- auxiliaryServicesConfig AuxiliaryServicesConfigResponse
- Optional. Configuration of auxiliary services used by this cluster.
- kubernetesClusterConfig KubernetesClusterConfigResponse
- The configuration for running the Dataproc cluster on Kubernetes.
- stagingBucket string
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- auxiliary_services_config AuxiliaryServicesConfigResponse
- Optional. Configuration of auxiliary services used by this cluster.
- kubernetes_cluster_config KubernetesClusterConfigResponse
- The configuration for running the Dataproc cluster on Kubernetes.
- staging_bucket str
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- auxiliaryServicesConfig Property Map
- Optional. Configuration of auxiliary services used by this cluster.
- kubernetesClusterConfig Property Map
- The configuration for running the Dataproc cluster on Kubernetes.
- stagingBucket String
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
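Example: reading response fields
Because these are response (output) properties, they are read from the resource after creation rather than set by you. A minimal TypeScript sketch, assuming the virtualCluster resource from the previous example:
// Pulumi lifts properties on outputs, so nested response fields
// can be referenced directly and exported as stack outputs.
export const stagingBucket =
    virtualCluster.virtualClusterConfig.stagingBucket;
export const dataprocNamespace =
    virtualCluster.virtualClusterConfig.kubernetesClusterConfig.kubernetesNamespace;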
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0