object BigQuery extends BigQueryRest with BigQueryDatasets with BigQueryJobs with BigQueryQueries with BigQueryTables with BigQueryTableData
Scala API to interface with BigQuery.
- Annotations
- @ApiMayChange()
- Source
- BigQuery.scala
- By Inheritance
- BigQuery
- BigQueryTableData
- BigQueryTables
- BigQueryQueries
- BigQueryJobs
- BigQueryDatasets
- BigQueryRest
- AnyRef
- Any
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def cancelJob(jobId: String, location: Option[String] = None)(implicit system: ClassicActorSystemProvider, settings: GoogleSettings): Future[JobCancelResponse]
Requests that a job be cancelled.
- jobId
job ID of the job to cancel
- location
the geographic location of the job. Required except for US and EU
- returns
a scala.concurrent.Future containing the pekko.stream.connectors.googlecloud.bigquery.model.JobCancelResponse
- Definition Classes
- BigQueryJobs
- See also
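For example, a minimal sketch of cancelling a job (the job ID, location, and actor-system setup below are placeholders; settings are assumed to be loadable via GoogleSettings(system)):

```scala
import scala.concurrent.Future

import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.google.GoogleSettings
import org.apache.pekko.stream.connectors.googlecloud.bigquery.model.JobCancelResponse
import org.apache.pekko.stream.connectors.googlecloud.bigquery.scaladsl.BigQuery

implicit val system: ActorSystem = ActorSystem("bigquery-docs")
implicit val settings: GoogleSettings = GoogleSettings(system)

// "my-job-id" and the location are placeholders; location may be omitted for US/EU jobs
val cancelled: Future[JobCancelResponse] =
  BigQuery.cancelJob("my-job-id", location = Some("europe-west1"))
```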
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @native()
- def createDataset(dataset: Dataset)(implicit system: ClassicActorSystemProvider, settings: GoogleSettings): Future[Dataset]
Creates a new empty dataset.
- dataset
the pekko.stream.connectors.googlecloud.bigquery.model.Dataset to create
- returns
a scala.concurrent.Future containing the pekko.stream.connectors.googlecloud.bigquery.model.Dataset
- Definition Classes
- BigQueryDatasets
- See also
- def createDataset(datasetId: String)(implicit system: ClassicActorSystemProvider, settings: GoogleSettings): Future[Dataset]
Creates a new empty dataset.
- datasetId
dataset ID of the new dataset
- returns
a scala.concurrent.Future containing the pekko.stream.connectors.googlecloud.bigquery.model.Dataset
- Definition Classes
- BigQueryDatasets
- See also
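A minimal sketch of creating a dataset by ID (the dataset ID is a placeholder; the other overload accepts a prebuilt Dataset model instead):

```scala
import scala.concurrent.Future

import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.google.GoogleSettings
import org.apache.pekko.stream.connectors.googlecloud.bigquery.model.Dataset
import org.apache.pekko.stream.connectors.googlecloud.bigquery.scaladsl.BigQuery

implicit val system: ActorSystem = ActorSystem("bigquery-docs")
implicit val settings: GoogleSettings = GoogleSettings(system)

// "my_dataset" is a placeholder dataset ID in the project configured via GoogleSettings
val created: Future[Dataset] = BigQuery.createDataset("my_dataset")
```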
- def createLoadJob[Job](job: Job)(implicit arg0: ToEntityMarshaller[Job], arg1: FromEntityUnmarshaller[Job]): Sink[ByteString, Future[Job]]
Starts a new asynchronous upload job.
- Job
the data model for a job
- job
the job to start
- returns
a pekko.stream.scaladsl.Sink that uploads bytes and materializes a scala.concurrent.Future containing the Job when completed
- Definition Classes
- BigQueryJobs
- Note
WARNING: Pending the resolution of BigQuery issue 176002651 this method may not work as expected. As a workaround, you can use the config setting
pekko.http.parsing.conflicting-content-type-header-processing-mode = first
with Pekko HTTP.
- See also
- def createTable(table: Table)(implicit system: ClassicActorSystemProvider, settings: GoogleSettings): Future[Table]
Creates a new, empty table in the dataset.
- table
the pekko.stream.connectors.googlecloud.bigquery.model.Table to create
- returns
a scala.concurrent.Future containing the pekko.stream.connectors.googlecloud.bigquery.model.Table
- Definition Classes
- BigQueryTables
- See also
- def createTable[T](datasetId: String, tableId: String)(implicit system: ClassicActorSystemProvider, settings: GoogleSettings, schemaWriter: TableSchemaWriter[T]): Future[Table]
Creates a new, empty table in the dataset.
- T
the data model for the records of this table
- datasetId
dataset ID of the new table
- tableId
table ID of the new table
- returns
a scala.concurrent.Future containing the pekko.stream.connectors.googlecloud.bigquery.model.Table
- Definition Classes
- BigQueryTables
- See also
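A minimal sketch of creating a table whose schema is derived from a case class; the bigQuerySchema2 helper and its import are assumptions based on the connector's schema support described in the reference documentation, and the dataset/table IDs are placeholders:

```scala
import scala.concurrent.Future

import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.google.GoogleSettings
import org.apache.pekko.stream.connectors.googlecloud.bigquery.model.Table
import org.apache.pekko.stream.connectors.googlecloud.bigquery.scaladsl.BigQuery
import org.apache.pekko.stream.connectors.googlecloud.bigquery.scaladsl.schema.BigQuerySchemas._

implicit val system: ActorSystem = ActorSystem("bigquery-docs")
implicit val settings: GoogleSettings = GoogleSettings(system)

case class Person(name: String, age: Int)

// Derive the BigQuery table schema for Person from its case class fields
implicit val personSchemaWriter = bigQuerySchema2(Person)

// The dataset "my_dataset" must already exist; "people" is the new table's ID
val created: Future[Table] = BigQuery.createTable[Person]("my_dataset", "people")
```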
- def dataset(datasetId: String)(implicit system: ClassicActorSystemProvider, settings: GoogleSettings): Future[Dataset]
Returns the specified dataset.
- datasetId
dataset ID of the requested dataset
- returns
a scala.concurrent.Future containing the pekko.stream.connectors.googlecloud.bigquery.model.Dataset
- Definition Classes
- BigQueryDatasets
- See also
- def datasets(maxResults: Option[Int] = None, all: Option[Boolean] = None, filter: Map[String, String] = Map.empty): Source[Dataset, NotUsed]
Lists all datasets in the specified project to which the user has been granted the READER dataset role.
- maxResults
the maximum number of results to return in a single response page
- all
whether to list all datasets, including hidden ones
- filter
a key, value scala.collection.immutable.Map for filtering the results of the request by label
- returns
a pekko.stream.scaladsl.Source that emits each pekko.stream.connectors.googlecloud.bigquery.model.Dataset
- Definition Classes
- BigQueryDatasets
- See also
- def datasets: Source[Dataset, NotUsed]
Lists all datasets in the specified project to which the user has been granted the READER dataset role.
- returns
a pekko.stream.scaladsl.Source that emits each pekko.stream.connectors.googlecloud.bigquery.model.Dataset
- Definition Classes
- BigQueryDatasets
- See also
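A minimal sketch of listing datasets, assuming the project and credentials are resolved from the default configuration at materialization time:

```scala
import scala.concurrent.Future

import org.apache.pekko.NotUsed
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.googlecloud.bigquery.model.Dataset
import org.apache.pekko.stream.connectors.googlecloud.bigquery.scaladsl.BigQuery
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

implicit val system: ActorSystem = ActorSystem("bigquery-docs")

// List every dataset in the configured project, including hidden ones
val datasets: Source[Dataset, NotUsed] = BigQuery.datasets(all = Some(true))

val allDatasets: Future[Seq[Dataset]] = datasets.runWith(Sink.seq)
```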
- def deleteDataset(datasetId: String, deleteContents: Boolean = false)(implicit system: ClassicActorSystemProvider, settings: GoogleSettings): Future[Done]
Deletes the dataset specified by the datasetId value.
- datasetId
dataset ID of dataset being deleted
- deleteContents
if true, delete all the tables in the dataset; if false and the dataset contains tables, the request will fail
- returns
a scala.concurrent.Future containing pekko.Done
- Definition Classes
- BigQueryDatasets
- def deleteTable(datasetId: String, tableId: String)(implicit system: ClassicActorSystemProvider, settings: GoogleSettings): Future[Done]
Deletes the specified table from the dataset. If the table contains data, all the data will be deleted.
- datasetId
dataset ID of the table to delete
- tableId
table ID of the table to delete
- returns
a scala.concurrent.Future containing pekko.Done
- Definition Classes
- BigQueryTables
- See also
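A minimal sketch that deletes a table and then its parent dataset (IDs are placeholders):

```scala
import scala.concurrent.{ ExecutionContext, Future }

import org.apache.pekko.Done
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.google.GoogleSettings
import org.apache.pekko.stream.connectors.googlecloud.bigquery.scaladsl.BigQuery

implicit val system: ActorSystem = ActorSystem("bigquery-docs")
implicit val settings: GoogleSettings = GoogleSettings(system)
implicit val ec: ExecutionContext = system.dispatcher

// Drop the table first, then the dataset; alternatively,
// deleteDataset("my_dataset", deleteContents = true) removes a non-empty dataset in one call
val cleanedUp: Future[Done] =
  for {
    _    <- BigQuery.deleteTable("my_dataset", "people")
    done <- BigQuery.deleteDataset("my_dataset")
  } yield done
```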
- implicit val doneUnmarshaller: FromEntityUnmarshaller[Done]
- Attributes
- protected[this]
- Definition Classes
- BigQueryRest
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
- def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable])
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
- def insertAll[In](datasetId: String, tableId: String, retryFailedRequests: Boolean)(implicit m: ToEntityMarshaller[TableDataInsertAllRequest[In]]): Flow[TableDataInsertAllRequest[In], TableDataInsertAllResponse, NotUsed]
Streams data into BigQuery one record at a time without needing to run a load job.
- In
the data model for each record
- datasetId
dataset ID of the table to insert into
- tableId
table ID of the table to insert into
- retryFailedRequests
whether to retry failed requests
- returns
a pekko.stream.scaladsl.Flow that sends each pekko.stream.connectors.googlecloud.bigquery.model.TableDataInsertAllRequest and emits a pekko.stream.connectors.googlecloud.bigquery.model.TableDataInsertAllResponse for each
- Definition Classes
- BigQueryTableData
- See also
- def insertAll[In](datasetId: String, tableId: String, retryPolicy: InsertAllRetryPolicy, templateSuffix: Option[String] = None)(implicit m: ToEntityMarshaller[TableDataInsertAllRequest[In]]): Sink[Seq[In], NotUsed]
Streams data into BigQuery one record at a time without needing to run a load job.
- In
the data model for each record
- datasetId
dataset ID of the table to insert into
- tableId
table ID of the table to insert into
- retryPolicy
pekko.stream.connectors.googlecloud.bigquery.InsertAllRetryPolicy determining whether to retry and deduplicate
- templateSuffix
if specified, treats the destination table as a base template, and inserts the rows into an instance table named "{destination}{templateSuffix}"
- returns
a pekko.stream.scaladsl.Sink that inserts each batch of In into the table
- Definition Classes
- BigQueryTableData
- See also
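A minimal sketch of streaming inserts with deduplicated retries; the Person format relies on the connector's spray-json support (bigQueryJsonFormat2, as in the reference documentation), and the dataset/table IDs are placeholders:

```scala
import org.apache.pekko.NotUsed
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.googlecloud.bigquery.InsertAllRetryPolicy
import org.apache.pekko.stream.connectors.googlecloud.bigquery.scaladsl.BigQuery
import org.apache.pekko.stream.connectors.googlecloud.bigquery.scaladsl.spray.BigQueryJsonProtocol._
import org.apache.pekko.stream.scaladsl.{ Sink, Source }
import spray.json.DefaultJsonProtocol._

implicit val system: ActorSystem = ActorSystem("bigquery-docs")

case class Person(name: String, age: Int)

// JSON format for Person via the connector's spray-json support
implicit val personFormat = bigQueryJsonFormat2(Person)

// Retry failed requests and let BigQuery deduplicate retransmitted rows
val insertSink: Sink[Seq[Person], NotUsed] =
  BigQuery.insertAll[Person]("my_dataset", "people", InsertAllRetryPolicy.WithDeduplication)

// Group records into batches of two rows per insertAll request
Source(List(Person("Ada", 36), Person("Alan", 41), Person("Grace", 85)))
  .grouped(2)
  .runWith(insertSink)
```

With deduplication, retried rows carry insert IDs so BigQuery can discard duplicates on a best-effort basis.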
- def insertAllAsync[In](datasetId: String, tableId: String, labels: Option[Map[String, String]])(implicit arg0: ToEntityMarshaller[In]): Flow[In, Job, NotUsed]
Loads data into BigQuery via a series of asynchronous load jobs created at the rate pekko.stream.connectors.googlecloud.bigquery.BigQuerySettings.loadJobPerTableQuota.
- In
the data model for each record
- datasetId
dataset ID of the table to insert into
- tableId
table ID of the table to insert into
- labels
the labels associated with this job
- returns
a pekko.stream.scaladsl.Flow that uploads each In and emits a pekko.stream.connectors.googlecloud.bigquery.model.Job for every upload job created
- Definition Classes
- BigQueryJobs
- Note
WARNING: Pending the resolution of BigQuery issue 176002651 this method may not work as expected. As a workaround, you can use the config setting
pekko.http.parsing.conflicting-content-type-header-processing-mode = first
with Pekko HTTP.
- def insertAllAsync[In](datasetId: String, tableId: String)(implicit arg0: ToEntityMarshaller[In]): Flow[In, Job, NotUsed]
Loads data into BigQuery via a series of asynchronous load jobs created at the rate pekko.stream.connectors.googlecloud.bigquery.BigQuerySettings.loadJobPerTableQuota.
- In
the data model for each record
- datasetId
dataset ID of the table to insert into
- tableId
table ID of the table to insert into
- returns
a pekko.stream.scaladsl.Flow that uploads each In and emits a pekko.stream.connectors.googlecloud.bigquery.model.Job for every upload job created
- Definition Classes
- BigQueryJobs
- Note
WARNING: Pending the resolution of BigQuery issue 176002651 this method may not work as expected. As a workaround, you can use the config setting
pekko.http.parsing.conflicting-content-type-header-processing-mode = first
with Pekko HTTP.
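A minimal sketch of batch loading via load jobs (placeholder IDs; the Person format again comes from the connector's spray-json support, and the workaround noted above may still be required):

```scala
import scala.concurrent.Future

import org.apache.pekko.NotUsed
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.googlecloud.bigquery.model.Job
import org.apache.pekko.stream.connectors.googlecloud.bigquery.scaladsl.BigQuery
import org.apache.pekko.stream.connectors.googlecloud.bigquery.scaladsl.spray.BigQueryJsonProtocol._
import org.apache.pekko.stream.scaladsl.{ Flow, Sink, Source }
import spray.json.DefaultJsonProtocol._

implicit val system: ActorSystem = ActorSystem("bigquery-docs")

case class Person(name: String, age: Int)
implicit val personFormat = bigQueryJsonFormat2(Person)

// Records flowing through are batched into load jobs; one Job element is emitted per job created
val loadFlow: Flow[Person, Job, NotUsed] =
  BigQuery.insertAllAsync[Person]("my_dataset", "people")

val jobs: Future[Seq[Job]] =
  Source(List(Person("Ada", 36), Person("Alan", 41))).via(loadFlow).runWith(Sink.seq)
```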
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- def job(jobId: String, location: Option[String] = None)(implicit system: ClassicActorSystemProvider, settings: GoogleSettings): Future[Job]
Returns information about a specific job.
- jobId
job ID of the requested job
- location
the geographic location of the job. Required except for US and EU
- returns
a scala.concurrent.Future containing the pekko.stream.connectors.googlecloud.bigquery.model.Job
- Definition Classes
- BigQueryJobs
- See also
- def mkFilterParam(filter: Map[String, String]): String
- Attributes
- protected[this]
- Definition Classes
- BigQueryRest
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- final def paginatedRequest[Out](request: HttpRequest)(implicit arg0: Paginated[Out], arg1: FromResponseUnmarshaller[Out]): Source[Out, NotUsed]
Makes a series of requests to page through a resource. Authentication is handled automatically. Requests are retried if the unmarshaller throws a pekko.stream.connectors.google.util.Retry.
- Out
the data model for each page of the resource
- request
the initial pekko.http.scaladsl.model.HttpRequest to make; must be a GET request
- returns
a pekko.stream.scaladsl.Source that emits an Out for each page of the resource
- Definition Classes
- def query[Out](query: QueryRequest)(implicit um: FromEntityUnmarshaller[QueryResponse[Out]]): Source[Out, (Future[JobReference], Future[QueryResponse[Out]])]
Runs a BigQuery SQL query.
- Out
the data model of the query results
- query
the pekko.stream.connectors.googlecloud.bigquery.model.QueryRequest
- returns
a pekko.stream.scaladsl.Source that emits an Out for each row of the results and materializes a scala.concurrent.Future containing the pekko.stream.connectors.googlecloud.bigquery.model.JobReference and a scala.concurrent.Future containing the pekko.stream.connectors.googlecloud.bigquery.model.QueryResponse
- Definition Classes
- BigQueryQueries
- See also
- def query[Out](query: String, dryRun: Boolean = false, useLegacySql: Boolean = true)(implicit um: FromEntityUnmarshaller[QueryResponse[Out]]): Source[Out, Future[QueryResponse[Out]]]
Runs a BigQuery SQL query.
- Out
the data model of the query results
- query
a query string, following the BigQuery query syntax, of the query to execute
- dryRun
if set to true, BigQuery doesn't run the job and instead returns statistics about the job such as how many bytes would be processed
- useLegacySql
specifies whether to use BigQuery's legacy SQL dialect for this query
- returns
a pekko.stream.scaladsl.Source that emits an Out for each row of the result and materializes a scala.concurrent.Future containing the pekko.stream.connectors.googlecloud.bigquery.model.QueryResponse
- Definition Classes
- BigQueryQueries
- See also
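A minimal sketch of running a standard-SQL query and collecting the first rows; the Person format comes from the connector's spray-json support, and the table reference is a placeholder:

```scala
import scala.concurrent.Future

import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.googlecloud.bigquery.model.QueryResponse
import org.apache.pekko.stream.connectors.googlecloud.bigquery.scaladsl.BigQuery
import org.apache.pekko.stream.connectors.googlecloud.bigquery.scaladsl.spray.BigQueryJsonProtocol._
import org.apache.pekko.stream.scaladsl.{ Sink, Source }
import spray.json.DefaultJsonProtocol._

implicit val system: ActorSystem = ActorSystem("bigquery-docs")

case class Person(name: String, age: Int)
implicit val personFormat = bigQueryJsonFormat2(Person)

// Run a standard-SQL query against a placeholder dataset/table
val people: Source[Person, Future[QueryResponse[Person]]] =
  BigQuery.query[Person]("SELECT name, age FROM my_dataset.people", useLegacySql = false)

val firstTen: Future[Seq[Person]] = people.take(10).runWith(Sink.seq)
```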
- def queryResults[Out](jobId: String, startIndex: Option[Long] = None, maxResults: Option[Int] = None, timeout: Option[FiniteDuration] = None, location: Option[String] = None)(implicit um: FromEntityUnmarshaller[QueryResponse[Out]]): Source[Out, Future[QueryResponse[Out]]]
The results of a query job.
- Out
the data model of the query results
- jobId
job ID of the query job
- startIndex
zero-based index of the starting row
- maxResults
maximum number of results to read
- timeout
specifies the maximum amount of time that the client is willing to wait for the query to complete
- location
the geographic location of the job. Required except for US and EU
- returns
a pekko.stream.scaladsl.Source that emits an Out for each row of the results and materializes a scala.concurrent.Future containing the pekko.stream.connectors.googlecloud.bigquery.model.QueryResponse
- Definition Classes
- BigQueryQueries
- See also
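A minimal sketch of paging through the results of an existing query job (the job ID is a placeholder; the Person format is assumed as in the previous sketch):

```scala
import scala.concurrent.Future

import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.googlecloud.bigquery.model.QueryResponse
import org.apache.pekko.stream.connectors.googlecloud.bigquery.scaladsl.BigQuery
import org.apache.pekko.stream.connectors.googlecloud.bigquery.scaladsl.spray.BigQueryJsonProtocol._
import org.apache.pekko.stream.scaladsl.{ Sink, Source }
import spray.json.DefaultJsonProtocol._

implicit val system: ActorSystem = ActorSystem("bigquery-docs")

case class Person(name: String, age: Int)
implicit val personFormat = bigQueryJsonFormat2(Person)

// Page through the results of an already-submitted query job
val rows: Source[Person, Future[QueryResponse[Person]]] =
  BigQuery.queryResults[Person]("my-job-id", maxResults = Some(1000))

val all: Future[Seq[Person]] = rows.runWith(Sink.seq)
```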
- final def resumableUpload[Out](request: HttpRequest)(implicit arg0: FromResponseUnmarshaller[Out]): Sink[ByteString, Future[Out]]
Makes a series of requests to upload a stream of bytes to a media endpoint. Authentication is handled automatically. If the unmarshaller throws a pekko.stream.connectors.google.util.Retry, the upload will attempt to recover and continue.
- Out
the data model for the resource
- request
the pekko.http.scaladsl.model.HttpRequest to initiate the upload; must be a POST request with query uploadType=resumable and optionally a pekko.stream.connectors.google.scaladsl.`X-Upload-Content-Type` header
- returns
a pekko.stream.scaladsl.Sink that materializes a scala.concurrent.Future containing the unmarshalled resource
- Definition Classes
- final def singleRequest[T](request: HttpRequest)(implicit arg0: FromResponseUnmarshaller[T], system: ClassicActorSystemProvider, settings: GoogleSettings): Future[T]
Makes a request and returns the unmarshalled response. Authentication is handled automatically. Retries the request if the unmarshaller throws a pekko.stream.connectors.google.util.Retry.
- T
the data model for the resource
- request
the pekko.http.scaladsl.model.HttpRequest to make
- returns
a scala.concurrent.Future containing the unmarshalled response
- Definition Classes
- def source[Out, Mat](f: (GoogleSettings) => Source[Out, Mat]): Source[Out, Future[Mat]]
- Attributes
- protected[this]
- Definition Classes
- BigQueryRest
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- def table(datasetId: String, tableId: String)(implicit system: ClassicActorSystemProvider, settings: GoogleSettings): Future[Table]
Gets the specified table resource. This method does not return the data in the table; it only returns the table resource, which describes the structure of this table.
- datasetId
dataset ID of the requested table
- tableId
table ID of the requested table
- returns
a scala.concurrent.Future containing the pekko.stream.connectors.googlecloud.bigquery.model.Table
- Definition Classes
- BigQueryTables
- See also
- def tableData[Out](datasetId: String, tableId: String, startIndex: Option[Long] = None, maxResults: Option[Int] = None, selectedFields: Seq[String] = Seq.empty)(implicit um: FromEntityUnmarshaller[TableDataListResponse[Out]]): Source[Out, Future[TableDataListResponse[Out]]]
Lists the content of a table in rows.
- Out
the data model of each row
- datasetId
dataset ID of the table to list
- tableId
table ID of the table to list
- startIndex
start row index of the table
- maxResults
row limit of the table
- selectedFields
subset of fields to return; supports selecting nested fields, e.g. selectedFields = Seq("a", "e.d.f")
- returns
a pekko.stream.scaladsl.Source that emits an Out for each row in the table
- Definition Classes
- BigQueryTableData
- See also
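A minimal sketch of listing table rows with a row limit and a field projection (placeholder IDs; the Person format is assumed as in the earlier sketches):

```scala
import scala.concurrent.Future

import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.googlecloud.bigquery.model.TableDataListResponse
import org.apache.pekko.stream.connectors.googlecloud.bigquery.scaladsl.BigQuery
import org.apache.pekko.stream.connectors.googlecloud.bigquery.scaladsl.spray.BigQueryJsonProtocol._
import org.apache.pekko.stream.scaladsl.{ Sink, Source }
import spray.json.DefaultJsonProtocol._

implicit val system: ActorSystem = ActorSystem("bigquery-docs")

case class Person(name: String, age: Int)
implicit val personFormat = bigQueryJsonFormat2(Person)

// Read the first 100 rows of the table, selecting only the two mapped fields
val rows: Source[Person, Future[TableDataListResponse[Person]]] =
  BigQuery.tableData[Person]("my_dataset", "people",
    maxResults = Some(100), selectedFields = Seq("name", "age"))

val people: Future[Seq[Person]] = rows.runWith(Sink.seq)
```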
- def tables(datasetId: String, maxResults: Option[Int] = None): Source[Table, Future[TableListResponse]]
Lists all tables in the specified dataset.
- datasetId
dataset ID of the tables to list
- maxResults
the maximum number of results to return in a single response page
- returns
a pekko.stream.scaladsl.Source that emits each pekko.stream.connectors.googlecloud.bigquery.model.Table in the dataset and materializes a scala.concurrent.Future containing the pekko.stream.connectors.googlecloud.bigquery.model.TableListResponse
- Definition Classes
- BigQueryTables
- See also
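A minimal sketch of listing the tables of a dataset (placeholder dataset ID):

```scala
import scala.concurrent.Future

import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.googlecloud.bigquery.model.{ Table, TableListResponse }
import org.apache.pekko.stream.connectors.googlecloud.bigquery.scaladsl.BigQuery
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

implicit val system: ActorSystem = ActorSystem("bigquery-docs")

// Enumerate the tables of a placeholder dataset, 50 per page
val tables: Source[Table, Future[TableListResponse]] =
  BigQuery.tables("my_dataset", maxResults = Some(50))

val allTables: Future[Seq[Table]] = tables.runWith(Sink.seq)
```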
- def toString(): String
- Definition Classes
- AnyRef → Any
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()