object BigQuery extends Google
Java API to interface with BigQuery.
- Annotations
- @ApiMayChange()
- Source
- BigQuery.scala
Value Members
- def cancelJob(jobId: String, location: Optional[String], settings: GoogleSettings, system: ClassicActorSystemProvider): CompletionStage[JobCancelResponse]
Requests that a job be cancelled.
- jobId
job ID of the job to cancel
- location
the geographic location of the job. Required except for US and EU
- settings
the GoogleSettings to use
- system
the actor system
- returns
a java.util.concurrent.CompletionStage containing the pekko.stream.connectors.googlecloud.bigquery.model.JobCancelResponse
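A minimal sketch of a cancellation call (the job ID and actor system name are hypothetical; GoogleSettings.create reading the default configuration is an assumption of this sketch):

```java
import java.util.Optional;
import java.util.concurrent.CompletionStage;
import org.apache.pekko.actor.ActorSystem;
import org.apache.pekko.stream.connectors.google.GoogleSettings;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.javadsl.BigQuery;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.model.JobCancelResponse;

ActorSystem system = ActorSystem.create("bigquery");
GoogleSettings settings = GoogleSettings.create(system);

// The location may be omitted for jobs in the US and EU multi-regions.
CompletionStage<JobCancelResponse> cancelled =
    BigQuery.cancelJob("my-job-id", Optional.empty(), settings, system);
```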
- def createDataset(dataset: Dataset, settings: GoogleSettings, system: ClassicActorSystemProvider): CompletionStage[Dataset]
Creates a new empty dataset.
- dataset
the pekko.stream.connectors.googlecloud.bigquery.model.Dataset to create
- settings
the GoogleSettings to use
- system
the actor system
- returns
a java.util.concurrent.CompletionStage containing the pekko.stream.connectors.googlecloud.bigquery.model.Dataset
- def createDataset(datasetId: String, settings: GoogleSettings, system: ClassicActorSystemProvider): CompletionStage[Dataset]
Creates a new empty dataset.
- datasetId
dataset ID of the new dataset
- settings
the GoogleSettings to use
- system
the actor system
- returns
a java.util.concurrent.CompletionStage containing the pekko.stream.connectors.googlecloud.bigquery.model.Dataset
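A sketch of the ID-based overload, reusing settings and system from the cancelJob example above (the dataset ID is hypothetical):

```java
import java.util.concurrent.CompletionStage;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.javadsl.BigQuery;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.model.Dataset;

// Creates an empty dataset in the project configured by GoogleSettings.
CompletionStage<Dataset> created =
    BigQuery.createDataset("my_dataset", settings, system);
```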
- def createLoadJob[Job](job: Job, marshaller: Marshaller[Job, RequestEntity], unmarshaller: Unmarshaller[HttpEntity, Job]): Sink[ByteString, CompletionStage[Job]]
Starts a new asynchronous upload job.
- Job
the data model for a job
- job
the job to start
- marshaller
pekko.http.javadsl.marshalling.Marshaller for the Job
- unmarshaller
pekko.http.javadsl.unmarshalling.Unmarshaller for the Job
- returns
a pekko.stream.javadsl.Sink that uploads bytes and materializes a java.util.concurrent.CompletionStage containing the Job when completed
- Note
WARNING: Pending the resolution of BigQuery issue 176002651 this method may not work as expected. As a workaround, you can use the config setting
pekko.http.parsing.conflicting-content-type-header-processing-mode = first
with Pekko HTTP.
- def createTable(table: Table, settings: GoogleSettings, system: ClassicActorSystemProvider): CompletionStage[Table]
Creates a new, empty table in the dataset.
- table
the pekko.stream.connectors.googlecloud.bigquery.model.Table to create
- settings
the GoogleSettings to use
- system
the actor system
- returns
a java.util.concurrent.CompletionStage containing the pekko.stream.connectors.googlecloud.bigquery.model.Table
- def createTable(datasetId: String, tableId: String, schema: TableSchema, settings: GoogleSettings, system: ClassicActorSystemProvider): CompletionStage[Table]
Creates a new, empty table in the dataset.
- datasetId
dataset ID of the new table
- tableId
table ID of the new table
- schema
pekko.stream.connectors.googlecloud.bigquery.model.TableSchema of the new table
- settings
the GoogleSettings to use
- system
the actor system
- returns
a java.util.concurrent.CompletionStage containing the pekko.stream.connectors.googlecloud.bigquery.model.Table
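A sketch of the schema-based overload, wrapped in a helper so that the pekko.stream.connectors.googlecloud.bigquery.model.TableSchema construction (elided here) can happen elsewhere; the dataset and table IDs are hypothetical:

```java
import java.util.concurrent.CompletionStage;
import org.apache.pekko.actor.ActorSystem;
import org.apache.pekko.stream.connectors.google.GoogleSettings;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.javadsl.BigQuery;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.model.Table;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.model.TableSchema;

// Creates "my_table" in "my_dataset" with a caller-supplied schema.
static CompletionStage<Table> createMyTable(
    TableSchema schema, GoogleSettings settings, ActorSystem system) {
  return BigQuery.createTable("my_dataset", "my_table", schema, settings, system);
}
```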
- def deleteDataset(datasetId: String, deleteContents: Boolean, settings: GoogleSettings, system: ClassicActorSystemProvider): CompletionStage[Done]
Deletes the dataset specified by the datasetId value.
- datasetId
dataset ID of dataset being deleted
- deleteContents
if true, delete all the tables in the dataset; if false and the dataset contains tables, the request will fail
- settings
the GoogleSettings to use
- system
the actor system
- returns
a java.util.concurrent.CompletionStage containing pekko.Done
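A sketch, with settings and system as in the earlier examples (the dataset ID is hypothetical):

```java
import java.util.concurrent.CompletionStage;
import org.apache.pekko.Done;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.javadsl.BigQuery;

// deleteContents = true also drops any tables remaining in the dataset.
CompletionStage<Done> deleted =
    BigQuery.deleteDataset("my_dataset", true, settings, system);
```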
- def deleteTable(datasetId: String, tableId: String, settings: GoogleSettings, system: ClassicActorSystemProvider): CompletionStage[Done]
Deletes the specified table from the dataset. If the table contains data, all the data will be deleted.
- datasetId
dataset ID of the table to delete
- tableId
table ID of the table to delete
- settings
the GoogleSettings to use
- system
the actor system
- returns
a java.util.concurrent.CompletionStage containing pekko.Done
- def getDataset(datasetId: String, settings: GoogleSettings, system: ClassicActorSystemProvider): CompletionStage[Dataset]
Returns the specified dataset.
- datasetId
dataset ID of the requested dataset
- settings
the GoogleSettings to use
- system
the actor system
- returns
a java.util.concurrent.CompletionStage containing the pekko.stream.connectors.googlecloud.bigquery.model.Dataset
- def getJob(jobId: String, location: Optional[String], settings: GoogleSettings, system: ClassicActorSystemProvider): CompletionStage[Job]
Returns information about a specific job.
- jobId
job ID of the requested job
- location
the geographic location of the job. Required except for US and EU
- settings
the GoogleSettings to use
- system
the actor system
- returns
a java.util.concurrent.CompletionStage containing the Job
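A sketch that polls a job, for example one created by insertAllAsync, with settings and system as above (the job ID is hypothetical):

```java
import java.util.Optional;
import java.util.concurrent.CompletionStage;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.javadsl.BigQuery;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.model.Job;

// Fetches the job resource, e.g. to inspect its status.
CompletionStage<Job> job =
    BigQuery.getJob("my-job-id", Optional.empty(), settings, system);
```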
- def getQueryResults[Out](jobId: String, startIndex: OptionalLong, maxResults: OptionalInt, timeout: Optional[Duration], location: Optional[String], unmarshaller: Unmarshaller[HttpEntity, QueryResponse[Out]]): Source[Out, CompletionStage[QueryResponse[Out]]]
The results of a query job.
- Out
the data model of the query results
- jobId
job ID of the query job
- startIndex
zero-based index of the starting row
- maxResults
maximum number of results to read
- timeout
specifies the maximum amount of time that the client is willing to wait for the query to complete
- location
the geographic location of the job. Required except for US and EU
- unmarshaller
pekko.http.javadsl.unmarshalling.Unmarshaller for pekko.stream.connectors.googlecloud.bigquery.model.QueryResponse
- returns
a pekko.stream.javadsl.Source that emits an Out for each row of the results and materializes a java.util.concurrent.CompletionStage containing the pekko.stream.connectors.googlecloud.bigquery.model.QueryResponse
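A sketch, assuming a user-defined Jackson-annotated Row POJO and the connector's Jackson helper BigQueryMarshallers (both assumptions of this sketch); all paging and timeout options are left at their defaults:

```java
import java.util.Optional;
import java.util.OptionalInt;
import java.util.OptionalLong;
import java.util.concurrent.CompletionStage;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.javadsl.BigQuery;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.javadsl.jackson.BigQueryMarshallers;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.model.QueryResponse;
import org.apache.pekko.stream.javadsl.Source;

// Row is a user-defined Jackson-annotated POJO matching the result schema.
Source<Row, CompletionStage<QueryResponse<Row>>> rows =
    BigQuery.getQueryResults(
        "my-job-id",
        OptionalLong.empty(), // start at the first row
        OptionalInt.empty(),  // no page-size limit
        Optional.empty(),     // no client-side timeout
        Optional.empty(),     // job is in the US or EU multi-region
        BigQueryMarshallers.queryResponseUnmarshaller(Row.class));
```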
- def getTable(datasetId: String, tableId: String, settings: GoogleSettings, system: ClassicActorSystemProvider): CompletionStage[Table]
Gets the specified table resource. This method does not return the data in the table; it only returns the table resource, which describes the structure of this table.
- datasetId
dataset ID of the requested table
- tableId
table ID of the requested table
- settings
the GoogleSettings to use
- system
the actor system
- returns
a java.util.concurrent.CompletionStage containing the pekko.stream.connectors.googlecloud.bigquery.model.Table
- def insertAll[In](datasetId: String, tableId: String, retryFailedRequests: Boolean, marshaller: Marshaller[TableDataInsertAllRequest[In], RequestEntity]): Flow[TableDataInsertAllRequest[In], TableDataInsertAllResponse, NotUsed]
Streams data into BigQuery one record at a time without needing to run a load job.
- In
the data model for each record
- datasetId
dataset ID of the table to insert into
- tableId
table ID of the table to insert into
- retryFailedRequests
whether to retry failed requests
- marshaller
pekko.http.javadsl.marshalling.Marshaller for pekko.stream.connectors.googlecloud.bigquery.model.TableDataInsertAllRequest
- returns
a pekko.stream.javadsl.Flow that sends each pekko.stream.connectors.googlecloud.bigquery.model.TableDataInsertAllRequest and emits a pekko.stream.connectors.googlecloud.bigquery.model.TableDataInsertAllResponse for each request
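A sketch of the flow-based overload, again assuming a Jackson-annotated Row POJO and the BigQueryMarshallers helper:

```java
import org.apache.pekko.NotUsed;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.javadsl.BigQuery;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.javadsl.jackson.BigQueryMarshallers;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.model.TableDataInsertAllRequest;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.model.TableDataInsertAllResponse;
import org.apache.pekko.stream.javadsl.Flow;

// Sends each request to the streaming-insert endpoint; retryFailedRequests = true
// means failed requests are retried.
Flow<TableDataInsertAllRequest<Row>, TableDataInsertAllResponse, NotUsed> insertFlow =
    BigQuery.insertAll(
        "my_dataset", "my_table", true,
        BigQueryMarshallers.tableDataInsertAllRequestMarshaller());
```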
- def insertAll[In](datasetId: String, tableId: String, retryPolicy: InsertAllRetryPolicy, templateSuffix: Optional[String], marshaller: Marshaller[TableDataInsertAllRequest[In], RequestEntity]): Sink[List[In], NotUsed]
Streams data into BigQuery one record at a time without needing to run a load job.
- In
the data model for each record
- datasetId
dataset ID of the table to insert into
- tableId
table ID of the table to insert into
- retryPolicy
InsertAllRetryPolicy determining whether to retry and deduplicate
- templateSuffix
if specified, treats the destination table as a base template, and inserts the rows into an instance table named "{destination}{templateSuffix}"
- marshaller
pekko.http.javadsl.marshalling.Marshaller for pekko.stream.connectors.googlecloud.bigquery.model.TableDataInsertAllRequest
- returns
a pekko.stream.javadsl.Sink that inserts each batch of In into the table
- def insertAllAsync[In](datasetId: String, tableId: String, labels: Optional[Map[String, String]], marshaller: Marshaller[In, RequestEntity]): Flow[In, Job, NotUsed]
Loads data into BigQuery via a series of asynchronous load jobs created at the rate pekko.stream.connectors.googlecloud.bigquery.BigQuerySettings.loadJobPerTableQuota.
- In
the data model for each record
- datasetId
dataset ID of the table to insert into
- tableId
table ID of the table to insert into
- labels
the labels to attach to the load jobs
- marshaller
pekko.http.javadsl.marshalling.Marshaller for In
- returns
a pekko.stream.javadsl.Flow that uploads each In and emits a Job for every upload job created
- Note
WARNING: Pending the resolution of BigQuery issue 176002651 this method may not work as expected. As a workaround, you can use the config setting
pekko.http.parsing.conflicting-content-type-header-processing-mode = first
with Pekko HTTP.
- def insertAllAsync[In](datasetId: String, tableId: String, marshaller: Marshaller[In, RequestEntity]): Flow[In, Job, NotUsed]
Loads data into BigQuery via a series of asynchronous load jobs created at the rate pekko.stream.connectors.googlecloud.bigquery.BigQuerySettings.loadJobPerTableQuota.
- In
the data model for each record
- datasetId
dataset ID of the table to insert into
- tableId
table ID of the table to insert into
- marshaller
pekko.http.javadsl.marshalling.Marshaller for In
- returns
a pekko.stream.javadsl.Flow that uploads each In and emits a Job for every upload job created
- Note
WARNING: Pending the resolution of BigQuery issue 176002651 this method may not work as expected. As a workaround, you can use the config setting
pekko.http.parsing.conflicting-content-type-header-processing-mode = first
with Pekko HTTP.
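A sketch of the simple overload; using Pekko HTTP's generic Jackson.marshaller() to render each record as JSON is an assumption of this sketch:

```java
import org.apache.pekko.NotUsed;
import org.apache.pekko.http.javadsl.marshallers.jackson.Jackson;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.javadsl.BigQuery;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.model.Job;
import org.apache.pekko.stream.javadsl.Flow;

// Row is a user-defined Jackson-annotated POJO. Incoming rows are batched
// into load jobs and a Job is emitted for every job created.
Flow<Row, Job, NotUsed> loadFlow =
    BigQuery.insertAllAsync("my_dataset", "my_table", Jackson.marshaller());
```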
- def listDatasets(maxResults: OptionalInt, all: Optional[Boolean], filter: Map[String, String]): Source[Dataset, NotUsed]
Lists all datasets in the specified project to which the user has been granted the READER dataset role.
- maxResults
the maximum number of results to return in a single response page
- all
whether to list all datasets, including hidden ones
- filter
a key, value java.util.Map for filtering the results of the request by label
- returns
a pekko.stream.javadsl.Source that emits each pekko.stream.connectors.googlecloud.bigquery.model.Dataset
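A sketch listing every dataset, including hidden ones, with no label filter:

```java
import java.util.Collections;
import java.util.Optional;
import java.util.OptionalInt;
import org.apache.pekko.NotUsed;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.javadsl.BigQuery;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.model.Dataset;
import org.apache.pekko.stream.javadsl.Source;

// No page-size limit, include hidden datasets, no label filter.
Source<Dataset, NotUsed> datasets =
    BigQuery.listDatasets(OptionalInt.empty(), Optional.of(true), Collections.emptyMap());
```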
- def listTableData[Out](datasetId: String, tableId: String, startIndex: OptionalLong, maxResults: OptionalInt, selectedFields: List[String], unmarshaller: Unmarshaller[HttpEntity, TableDataListResponse[Out]]): Source[Out, CompletionStage[TableDataListResponse[Out]]]
Lists the content of a table in rows.
- Out
the data model of each row
- datasetId
dataset ID of the table to list
- tableId
table ID of the table to list
- startIndex
start row index of the table
- maxResults
row limit of the table
- selectedFields
subset of fields to return; supports selecting into nested fields. Example:
selectedFields = List.of("a", "e.d.f")
- unmarshaller
pekko.http.javadsl.unmarshalling.Unmarshaller for pekko.stream.connectors.googlecloud.bigquery.model.TableDataListResponse
- returns
a pekko.stream.javadsl.Source that emits an Out for each row in the table
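A sketch streaming a whole table, assuming the Row POJO and BigQueryMarshallers helper as in the earlier examples; an empty selectedFields list returns all columns:

```java
import java.util.Collections;
import java.util.OptionalInt;
import java.util.OptionalLong;
import java.util.concurrent.CompletionStage;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.javadsl.BigQuery;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.javadsl.jackson.BigQueryMarshallers;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.model.TableDataListResponse;
import org.apache.pekko.stream.javadsl.Source;

// Reads from the first row with no row limit and all columns selected.
Source<Row, CompletionStage<TableDataListResponse<Row>>> rows =
    BigQuery.listTableData(
        "my_dataset", "my_table",
        OptionalLong.empty(), OptionalInt.empty(), Collections.emptyList(),
        BigQueryMarshallers.tableDataListResponseUnmarshaller(Row.class));
```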
- def listTables(datasetId: String, maxResults: OptionalInt): Source[Table, CompletionStage[TableListResponse]]
Lists all tables in the specified dataset.
- datasetId
dataset ID of the tables to list
- maxResults
the maximum number of results to return in a single response page
- returns
a pekko.stream.javadsl.Source that emits each pekko.stream.connectors.googlecloud.bigquery.model.Table in the dataset and materializes a java.util.concurrent.CompletionStage containing the pekko.stream.connectors.googlecloud.bigquery.model.TableListResponse
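A sketch collecting the dataset's tables into a list, with system as in the earlier examples:

```java
import java.util.List;
import java.util.OptionalInt;
import java.util.concurrent.CompletionStage;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.javadsl.BigQuery;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.model.Table;
import org.apache.pekko.stream.javadsl.Sink;

// Runs the source to completion and gathers every Table into a list.
CompletionStage<List<Table>> tables =
    BigQuery.listTables("my_dataset", OptionalInt.empty())
        .runWith(Sink.seq(), system);
```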
- final def paginatedRequest[Out <: Paginated](request: HttpRequest, unmarshaller: Unmarshaller[HttpResponse, Out]): Source[Out, NotUsed]
Makes a series of requests to page through a resource. Authentication is handled automatically. Requests are retried if the unmarshaller throws a pekko.stream.connectors.google.util.Retry.
- Out
the data model for each page of the resource
- request
the initial pekko.http.javadsl.model.HttpRequest to make; must be a GET request
- returns
a pekko.stream.javadsl.Source that emits an Out for each page of the resource
- Definition Classes
- Google
- def query[Out](query: QueryRequest, unmarshaller: Unmarshaller[HttpEntity, QueryResponse[Out]]): Source[Out, Pair[CompletionStage[JobReference], CompletionStage[QueryResponse[Out]]]]
Runs a BigQuery SQL query.
- Out
the data model of the query results
- query
the pekko.stream.connectors.googlecloud.bigquery.model.QueryRequest
- unmarshaller
pekko.http.javadsl.unmarshalling.Unmarshaller for pekko.stream.connectors.googlecloud.bigquery.model.QueryResponse
- returns
a pekko.stream.javadsl.Source that emits an Out for each row of the results and materializes a Pair of a java.util.concurrent.CompletionStage containing the pekko.stream.connectors.googlecloud.bigquery.model.JobReference and a java.util.concurrent.CompletionStage containing the pekko.stream.connectors.googlecloud.bigquery.model.QueryResponse
- def query[Out](query: String, dryRun: Boolean, useLegacySql: Boolean, unmarshaller: Unmarshaller[HttpEntity, QueryResponse[Out]]): Source[Out, CompletionStage[QueryResponse[Out]]]
Runs a BigQuery SQL query.
- Out
the data model of the query results
- query
a query string, following the BigQuery query syntax, of the query to execute
- dryRun
if set to true, BigQuery doesn't run the job and instead returns statistics about the job, such as how many bytes would be processed
- useLegacySql
specifies whether to use BigQuery's legacy SQL dialect for this query
- unmarshaller
pekko.http.javadsl.unmarshalling.Unmarshaller for pekko.stream.connectors.googlecloud.bigquery.model.QueryResponse
- returns
a pekko.stream.javadsl.Source that emits an Out for each row of the results and materializes a java.util.concurrent.CompletionStage containing the pekko.stream.connectors.googlecloud.bigquery.model.QueryResponse
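A sketch running a standard-SQL query (dryRun = false, useLegacySql = false), assuming the Row POJO and BigQueryMarshallers helper as above; the query string is illustrative:

```java
import java.util.concurrent.CompletionStage;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.javadsl.BigQuery;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.javadsl.jackson.BigQueryMarshallers;
import org.apache.pekko.stream.connectors.googlecloud.bigquery.model.QueryResponse;
import org.apache.pekko.stream.javadsl.Source;

// Emits one Row per result row; the materialized value completes with the response metadata.
Source<Row, CompletionStage<QueryResponse<Row>>> results =
    BigQuery.query(
        "SELECT name, value FROM my_dataset.my_table",
        false, false,
        BigQueryMarshallers.queryResponseUnmarshaller(Row.class));
```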
- final def resumableUpload[Out](request: HttpRequest, unmarshaller: Unmarshaller[HttpResponse, Out]): Sink[ByteString, CompletionStage[Out]]
Makes a series of requests to upload a stream of bytes to a media endpoint. Authentication is handled automatically. If the unmarshaller throws a pekko.stream.connectors.google.util.Retry the upload will attempt to recover and continue.
- Out
the data model for the resource
- request
the pekko.http.javadsl.model.HttpRequest to initiate the upload; must be a POST request with query uploadType=resumable and optionally a pekko.stream.connectors.google.javadsl.XUploadContentType header
- returns
a pekko.stream.javadsl.Sink that materializes a java.util.concurrent.CompletionStage containing the unmarshalled resource
- Definition Classes
- Google
- final def singleRequest[T](request: HttpRequest, unmarshaller: Unmarshaller[HttpResponse, T], settings: GoogleSettings, system: ClassicActorSystemProvider): CompletionStage[T]
Makes a request and returns the unmarshalled response. Authentication is handled automatically. Retries the request if the unmarshaller throws a pekko.stream.connectors.google.util.Retry.
- T
the data model for the resource
- request
the pekko.http.javadsl.model.HttpRequest to make
- returns
a java.util.concurrent.CompletionStage containing the unmarshalled response
- Definition Classes
- Google