Packages

object S3

Java API

Factory of S3 operations.

Source
S3.scala
Linear Supertypes
AnyRef, Any

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##: Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def checkIfBucketExists(bucketName: String, system: ClassicActorSystemProvider): CompletionStage[BucketAccess]

    Checks whether the bucket exists and the user has rights to perform the ListBucket operation

    bucketName

    bucket name

    system

    the actor system which provides the materializer to run with

    returns

    CompletionStage of type BucketAccess

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html
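As a minimal sketch of the two-argument overload, the bucket can be probed before any read or write is attempted. This assumes the pekko-connectors-s3 artifact is on the classpath; the bucket and system names are placeholders:

```java
import org.apache.pekko.actor.ActorSystem;
import org.apache.pekko.stream.connectors.s3.BucketAccess;
import org.apache.pekko.stream.connectors.s3.javadsl.S3;

import java.util.concurrent.CompletionStage;

public class CheckBucketSketch {
  public static void main(String[] args) {
    ActorSystem system = ActorSystem.create("s3-check");
    // "my-bucket" is a placeholder; a HeadBucket request is made under the hood.
    CompletionStage<BucketAccess> access = S3.checkIfBucketExists("my-bucket", system);
    access.thenAccept(result -> {
      // result reports whether the bucket exists and whether access is granted
      System.out.println("Bucket access: " + result);
      system.terminate();
    });
  }
}
```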

  6. def checkIfBucketExists(bucketName: String, system: ClassicActorSystemProvider, attributes: Attributes, s3Headers: S3Headers): CompletionStage[BucketAccess]

    Checks whether the bucket exists and the user has rights to perform the ListBucket operation

    bucketName

    bucket name

    system

    the actor system which provides the materializer to run with

    attributes

    attributes to run request with

    s3Headers

    any headers you want to add

    returns

    CompletionStage of type BucketAccess

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html

  7. def checkIfBucketExists(bucketName: String, system: ClassicActorSystemProvider, attributes: Attributes): CompletionStage[BucketAccess]

    Checks whether the bucket exists and the user has rights to perform the ListBucket operation

    bucketName

    bucket name

    system

    the actor system which provides the materializer to run with

    attributes

    attributes to run request with

    returns

    CompletionStage of type BucketAccess

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html

  8. def checkIfBucketExistsSource(bucketName: String, s3Headers: S3Headers): Source[BucketAccess, NotUsed]

    Checks whether the bucket exists and the user has rights to perform the ListBucket operation

    bucketName

    bucket name

    s3Headers

    any headers you want to add

    returns

    Source of type BucketAccess

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html

  9. def checkIfBucketExistsSource(bucketName: String): Source[BucketAccess, NotUsed]

    Checks whether the bucket exists and the user has rights to perform the ListBucket operation

    bucketName

    bucket name

    returns

    Source of type BucketAccess

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html

  10. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.CloneNotSupportedException]) @native()
  11. def completeMultipartUpload(bucket: String, key: String, uploadId: String, parts: Iterable[Part], s3Headers: S3Headers)(implicit system: ClassicActorSystemProvider, attributes: Attributes): CompletionStage[MultipartUploadResult]

    Complete a multipart upload with an already given list of parts

    bucket

    the s3 bucket name

    key

    the s3 object key

    uploadId

    the upload that you want to complete

    parts

    A list of all of the parts for the multipart upload

    s3Headers

    any headers you want to add

    returns

    CompletionStage of type MultipartUploadResult

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html

  12. def completeMultipartUpload(bucket: String, key: String, uploadId: String, parts: Iterable[Part])(implicit system: ClassicActorSystemProvider, attributes: Attributes = Attributes()): CompletionStage[MultipartUploadResult]

    Complete a multipart upload with an already given list of parts

    bucket

    the s3 bucket name

    key

    the s3 object key

    uploadId

    the upload that you want to complete

    parts

    A list of all of the parts for the multipart upload

    returns

    CompletionStage of type MultipartUploadResult

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html
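A typical use is finishing an interrupted multipart upload whose parts were collected earlier. The sketch below is an assumption-laden outline: the bucket, key, and upload id are placeholders, the `Part` values (package path assumed to be `org.apache.pekko.stream.connectors.s3`) would carry the part numbers and ETags returned by S3, and the trailing `system` argument mirrors the implicit parameter shown in the signature:

```java
import org.apache.pekko.actor.ActorSystem;
import org.apache.pekko.stream.connectors.s3.MultipartUploadResult;
import org.apache.pekko.stream.connectors.s3.javadsl.S3;

import java.util.List;
import java.util.concurrent.CompletionStage;

public class CompleteUploadSketch {
  public static void main(String[] args) {
    ActorSystem system = ActorSystem.create("s3-complete");
    // In real code the list would contain the Part values gathered from the
    // interrupted upload; an empty list is only a stand-in here.
    CompletionStage<MultipartUploadResult> done =
        S3.completeMultipartUpload("my-bucket", "my/key", "upload-id", List.of(), system);
    done.thenAccept(result -> {
      System.out.println("Completed: " + result);
      system.terminate();
    });
  }
}
```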

  13. def deleteBucket(bucketName: String, system: ClassicActorSystemProvider): CompletionStage[Done]

    Delete bucket with a given name

    bucketName

    bucket name

    system

    the actor system which provides the materializer to run with

    returns

    CompletionStage of type Done as the API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html

  14. def deleteBucket(bucketName: String, system: ClassicActorSystemProvider, attributes: Attributes, s3Headers: S3Headers): CompletionStage[Done]

    Delete bucket with a given name

    bucketName

    bucket name

    system

    the actor system which provides the materializer to run with

    attributes

    attributes to run request with

    s3Headers

    any headers you want to add

    returns

    CompletionStage of type Done as the API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html

  15. def deleteBucket(bucketName: String, system: ClassicActorSystemProvider, attributes: Attributes): CompletionStage[Done]

    Delete bucket with a given name

    bucketName

    bucket name

    system

    the actor system which provides the materializer to run with

    attributes

    attributes to run request with

    returns

    CompletionStage of type Done as the API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html

  16. def deleteBucketContents(bucket: String, deleteAllVersions: Boolean): Source[Done, NotUsed]

    Deletes all S3 Objects within the given bucket

    bucket

    the s3 bucket name

    deleteAllVersions

    Whether to delete all object versions as well (applies to versioned buckets)

    returns

    A Source that will emit pekko.Done when the operation is completed

  17. def deleteBucketContents(bucket: String): Source[Done, NotUsed]

    Deletes all S3 Objects within the given bucket

    bucket

    the s3 bucket name

    returns

    A Source that will emit pekko.Done when the operation is completed

  18. def deleteBucketSource(bucketName: String, s3Headers: S3Headers): Source[Done, NotUsed]

    Delete bucket with a given name

    bucketName

    bucket name

    s3Headers

    any headers you want to add

    returns

    Source of type Done as the API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html

  19. def deleteBucketSource(bucketName: String): Source[Done, NotUsed]

    Delete bucket with a given name

    bucketName

    bucket name

    returns

    Source of type Done as the API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html

  20. def deleteObject(bucket: String, key: String, versionId: Optional[String], s3Headers: S3Headers): Source[Done, NotUsed]

    Deletes an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    versionId

    optional version id of the object

    s3Headers

    any headers you want to add

    returns

    A Source that will emit pekko.Done when the operation is completed

  21. def deleteObject(bucket: String, key: String, versionId: Optional[String]): Source[Done, NotUsed]

    Deletes an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    versionId

    optional version id of the object

    returns

    A Source that will emit pekko.Done when the operation is completed

  22. def deleteObject(bucket: String, key: String): Source[Done, NotUsed]

    Deletes an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    returns

    A Source that will emit pekko.Done when the operation is completed
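The two-argument overload can be sketched as follows; bucket and key are placeholders, and the returned Source must be run to actually issue the request (it emits Done once S3 confirms the deletion):

```java
import org.apache.pekko.Done;
import org.apache.pekko.actor.ActorSystem;
import org.apache.pekko.stream.connectors.s3.javadsl.S3;
import org.apache.pekko.stream.javadsl.Sink;

public class DeleteObjectSketch {
  public static void main(String[] args) {
    ActorSystem system = ActorSystem.create("s3-delete");
    // Nothing happens until the Source is materialized with a Sink.
    S3.deleteObject("my-bucket", "logs/2024/01/app.log")
        .runWith(Sink.<Done>head(), system)
        .thenAccept(done -> {
          System.out.println("Deleted");
          system.terminate();
        });
  }
}
```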

  23. def deleteObjectsByPrefix(bucket: String, prefix: Optional[String], deleteAllVersions: Boolean, s3Headers: S3Headers): Source[Done, NotUsed]

    Deletes all keys which have the given prefix under the specified bucket

    bucket

    the s3 bucket name

    prefix

    optional s3 objects prefix

    deleteAllVersions

    Whether to delete all object versions as well (applies to versioned buckets)

    s3Headers

    any headers you want to add

    returns

    A Source that will emit pekko.Done when the operation is completed

  24. def deleteObjectsByPrefix(bucket: String, prefix: Optional[String], s3Headers: S3Headers): Source[Done, NotUsed]

    Deletes all keys which have the given prefix under the specified bucket

    bucket

    the s3 bucket name

    prefix

    optional s3 objects prefix

    s3Headers

    any headers you want to add

    returns

    A Source that will emit pekko.Done when the operation is completed

  25. def deleteObjectsByPrefix(bucket: String, prefix: Optional[String], deleteAllVersions: Boolean): Source[Done, NotUsed]

    Deletes all keys which have the given prefix under the specified bucket

    bucket

    the s3 bucket name

    prefix

    optional s3 objects prefix

    deleteAllVersions

    Whether to delete all object versions as well (applies to versioned buckets)

    returns

    A Source that will emit pekko.Done when the operation is completed

  26. def deleteObjectsByPrefix(bucket: String, prefix: Optional[String]): Source[Done, NotUsed]

    Deletes all keys which have the given prefix under the specified bucket

    bucket

    the s3 bucket name

    prefix

    optional s3 objects prefix

    returns

    A Source that will emit pekko.Done when the operation is completed

  27. def deleteObjectsByPrefix(bucket: String, deleteAllVersions: Boolean): Source[Done, NotUsed]

    Deletes all keys under the specified bucket

    bucket

    the s3 bucket name

    deleteAllVersions

    Whether to delete all object versions as well (applies to versioned buckets)

    returns

    A Source that will emit pekko.Done when the operation is completed

  28. def deleteObjectsByPrefix(bucket: String): Source[Done, NotUsed]

    Deletes all keys under the specified bucket

    bucket

    the s3 bucket name

    returns

    A Source that will emit pekko.Done when the operation is completed

  29. def deleteUpload(bucketName: String, key: String, uploadId: String, s3Headers: S3Headers)(implicit system: ClassicActorSystemProvider, attributes: Attributes): CompletionStage[Done]

    Delete all existing parts for a specific upload

    bucketName

    Which bucket the upload is inside

    key

    The key for the upload

    uploadId

    Unique identifier of the upload

    s3Headers

    any headers you want to add

    returns

    CompletionStage of type Done as the API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html

  30. def deleteUpload(bucketName: String, key: String, uploadId: String)(implicit system: ClassicActorSystemProvider, attributes: Attributes = Attributes()): CompletionStage[Done]

    Delete all existing parts for a specific upload id

    bucketName

    Which bucket the upload is inside

    key

    The key for the upload

    uploadId

    Unique identifier of the upload

    returns

    CompletionStage of type Done as the API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html

  31. def deleteUploadSource(bucketName: String, key: String, uploadId: String, s3Headers: S3Headers): Source[Done, NotUsed]

    Delete all existing parts for a specific upload

    bucketName

    Which bucket the upload is inside

    key

    The key for the upload

    uploadId

    Unique identifier of the upload

    s3Headers

    any headers you want to add

    returns

    Source of type Done as the API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html

  32. def deleteUploadSource(bucketName: String, key: String, uploadId: String): Source[Done, NotUsed]

    Delete all existing parts for a specific upload

    bucketName

    Which bucket the upload is inside

    key

    The key for the upload

    uploadId

    Unique identifier of the upload

    returns

    Source of type Done as the API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html

  33. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  34. def equals(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef → Any
  35. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.Throwable])
  36. def getBucketVersioning(bucketName: String, system: ClassicActorSystemProvider): CompletionStage[BucketVersioningResult]

    Gets the versioning of an existing bucket

    bucketName

    Bucket name

    system

    the actor system which provides the materializer to run with

    returns

    CompletionStage of type BucketVersioningResult

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketVersioning.html

  37. def getBucketVersioning(bucketName: String, system: ClassicActorSystemProvider, attributes: Attributes, s3Headers: S3Headers): CompletionStage[BucketVersioningResult]

    Gets the versioning of an existing bucket

    bucketName

    Bucket name

    system

    the actor system which provides the materializer to run with

    attributes

    attributes to run request with

    s3Headers

    any headers you want to add

    returns

    CompletionStage of type BucketVersioningResult

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketVersioning.html

  38. def getBucketVersioning(bucketName: String, system: ClassicActorSystemProvider, attributes: Attributes): CompletionStage[BucketVersioningResult]

    Gets the versioning of an existing bucket

    bucketName

    Bucket name

    system

    the actor system which provides the materializer to run with

    attributes

    attributes to run request with

    returns

    CompletionStage of type BucketVersioningResult

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketVersioning.html

  39. def getBucketVersioningSource(bucketName: String, s3Headers: S3Headers): Source[BucketVersioningResult, NotUsed]

    Gets the versioning of an existing bucket

    bucketName

    Bucket name

    s3Headers

    any headers you want to add

    returns

    Source of type BucketVersioningResult

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketVersioning.html

  40. def getBucketVersioningSource(bucketName: String): Source[BucketVersioningResult, NotUsed]

    Gets the versioning of an existing bucket

    bucketName

    Bucket name

    returns

    Source of type BucketVersioningResult

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketVersioning.html

  41. final def getClass(): Class[_ <: AnyRef]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  42. def getObject(bucket: String, key: String, range: ByteRange, versionId: Optional[String], s3Headers: S3Headers): Source[ByteString, CompletionStage[ObjectMetadata]]

    Gets a specific byte range of an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    range

    the ByteRange you want to download

    versionId

    optional version id of the object

    s3Headers

    any headers you want to add

    returns

    A pekko.stream.javadsl.Source containing the object's data as a pekko.util.ByteString along with a materialized value containing the pekko.stream.connectors.s3.ObjectMetadata

  43. def getObject(bucket: String, key: String, range: ByteRange, s3Headers: S3Headers): Source[ByteString, CompletionStage[ObjectMetadata]]

    Gets a specific byte range of an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    range

    the ByteRange you want to download

    s3Headers

    any headers you want to add

    returns

    A pekko.stream.javadsl.Source containing the object's data as a pekko.util.ByteString along with a materialized value containing the pekko.stream.connectors.s3.ObjectMetadata

  44. def getObject(bucket: String, key: String, s3Headers: S3Headers): Source[ByteString, CompletionStage[ObjectMetadata]]

    Gets an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    s3Headers

    any headers you want to add

    returns

    A pekko.stream.javadsl.Source containing the object's data as a pekko.util.ByteString along with a materialized value containing the pekko.stream.connectors.s3.ObjectMetadata

  45. def getObject(bucket: String, key: String, range: ByteRange, versionId: Optional[String], sse: ServerSideEncryption): Source[ByteString, CompletionStage[ObjectMetadata]]

    Gets a specific byte range of an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    range

    the ByteRange you want to download

    versionId

    optional version id of the object

    sse

    the server side encryption to use

    returns

    A pekko.stream.javadsl.Source containing the object's data as a pekko.util.ByteString along with a materialized value containing the pekko.stream.connectors.s3.ObjectMetadata

  46. def getObject(bucket: String, key: String, range: ByteRange, sse: ServerSideEncryption): Source[ByteString, CompletionStage[ObjectMetadata]]

    Gets a specific byte range of an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    range

    the ByteRange you want to download

    sse

    the server side encryption to use

    returns

    A pekko.stream.javadsl.Source containing the object's data as a pekko.util.ByteString along with a materialized value containing the pekko.stream.connectors.s3.ObjectMetadata

  47. def getObject(bucket: String, key: String, range: ByteRange): Source[ByteString, CompletionStage[ObjectMetadata]]

    Gets a specific byte range of an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    range

    the ByteRange you want to download

    returns

    A pekko.stream.javadsl.Source containing the object's data as a pekko.util.ByteString along with a materialized value containing the pekko.stream.connectors.s3.ObjectMetadata

  48. def getObject(bucket: String, key: String, sse: ServerSideEncryption): Source[ByteString, CompletionStage[ObjectMetadata]]

    Gets an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    sse

    the server side encryption to use

    returns

    A pekko.stream.javadsl.Source containing the object's data as a pekko.util.ByteString along with a materialized value containing the pekko.stream.connectors.s3.ObjectMetadata

  49. def getObject(bucket: String, key: String): Source[ByteString, CompletionStage[ObjectMetadata]]

    Gets an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    returns

    A pekko.stream.javadsl.Source containing the object's data as a pekko.util.ByteString along with a materialized value containing the pekko.stream.connectors.s3.ObjectMetadata
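As a minimal sketch of the two-argument overload, the emitted ByteString chunks can be folded into one value in memory; bucket and key are placeholders, and this pattern only suits small objects (large ones should be streamed to a file or downstream stage instead):

```java
import org.apache.pekko.actor.ActorSystem;
import org.apache.pekko.stream.connectors.s3.javadsl.S3;
import org.apache.pekko.stream.javadsl.Sink;
import org.apache.pekko.util.ByteString;

public class GetObjectSketch {
  public static void main(String[] args) {
    ActorSystem system = ActorSystem.create("s3-get");
    // Concatenate all emitted chunks, then decode as UTF-8.
    S3.getObject("my-bucket", "config/settings.json")
        .runWith(Sink.fold(ByteString.emptyByteString(), ByteString::concat), system)
        .thenAccept(bytes -> {
          System.out.println(bytes.utf8String());
          system.terminate();
        });
  }
}
```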

  50. def getObjectMetadata(bucket: String, key: String, s3Headers: S3Headers): Source[Optional[ObjectMetadata], NotUsed]

    Gets the metadata for an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    s3Headers

    any headers you want to add

    returns

    A Source containing an Optional that will be empty in case the object does not exist

  51. def getObjectMetadata(bucket: String, key: String, versionId: Optional[String], sse: ServerSideEncryption): Source[Optional[ObjectMetadata], NotUsed]

    Gets the metadata for an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    versionId

    optional versionId of source object

    sse

    the server side encryption to use

    returns

    A Source containing an Optional that will be empty in case the object does not exist

  52. def getObjectMetadata(bucket: String, key: String, sse: ServerSideEncryption): Source[Optional[ObjectMetadata], NotUsed]

    Gets the metadata for an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    sse

    the server side encryption to use

    returns

    A Source containing an Optional that will be empty in case the object does not exist

  53. def getObjectMetadata(bucket: String, key: String): Source[Optional[ObjectMetadata], NotUsed]

    Gets the metadata for an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    returns

    A Source containing an Optional that will be empty in case the object does not exist
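Because the Source emits exactly one Optional, a head Sink is a natural fit for an existence check. A sketch with placeholder names; the `getContentLength()` accessor on ObjectMetadata is an assumption:

```java
import org.apache.pekko.actor.ActorSystem;
import org.apache.pekko.stream.connectors.s3.javadsl.S3;
import org.apache.pekko.stream.javadsl.Sink;

public class ObjectMetadataSketch {
  public static void main(String[] args) {
    ActorSystem system = ActorSystem.create("s3-head");
    // One element is emitted: empty Optional if the object is missing.
    S3.getObjectMetadata("my-bucket", "reports/q1.csv")
        .runWith(Sink.head(), system)
        .thenAccept(maybeMeta -> {
          maybeMeta.ifPresentOrElse(
              meta -> System.out.println("Content length: " + meta.getContentLength()),
              () -> System.out.println("Object not found"));
          system.terminate();
        });
  }
}
```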

  54. def getObjectMetadataWithHeaders(bucket: String, key: String, versionId: Optional[String], s3Headers: S3Headers): Source[Optional[ObjectMetadata], NotUsed]

    Gets the metadata for an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    versionId

    optional versionId of source object

    s3Headers

    any headers you want to add

    returns

    A Source containing an Optional that will be empty in case the object does not exist

  55. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  56. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  57. def listBucket(bucket: String, delimiter: String, prefix: Optional[String], s3Headers: S3Headers): Source[ListBucketResultContents, NotUsed]

    Will return a source of object metadata for a given bucket with delimiter and optional prefix using version 2 of the List Bucket API. This will automatically page through all keys with the given parameters.

    The org.apache.pekko.stream.connectors.s3.list-bucket-api-version can be set to 1 to use the older API version 1

    bucket

    Which bucket you list object metadata for

    delimiter

    Delimiter to use for listing only one level of hierarchy

    prefix

    Prefix of the keys you want to list under passed bucket

    s3Headers

    any headers you want to add

    returns

    Source of object metadata

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html (version 2 API)

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html (version 1 API)

  58. def listBucket(bucket: String, delimiter: String, prefix: Optional[String]): Source[ListBucketResultContents, NotUsed]

    Will return a source of object metadata for a given bucket with delimiter and optional prefix using version 2 of the List Bucket API. This will automatically page through all keys with the given parameters.

    The org.apache.pekko.stream.connectors.s3.list-bucket-api-version can be set to 1 to use the older API version 1

    bucket

    Which bucket you list object metadata for

    delimiter

    Delimiter to use for listing only one level of hierarchy

    prefix

    Prefix of the keys you want to list under passed bucket

    returns

    Source of object metadata

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html (version 2 API)

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html (version 1 API)

  59. def listBucket(bucket: String, prefix: Optional[String], s3Headers: S3Headers): Source[ListBucketResultContents, NotUsed]

    Will return a source of object metadata for a given bucket with optional prefix using version 2 of the List Bucket API. This will automatically page through all keys with the given parameters.

    The org.apache.pekko.stream.connectors.s3.list-bucket-api-version can be set to 1 to use the older API version 1

    bucket

    Which bucket you list object metadata for

    prefix

    Prefix of the keys you want to list under passed bucket

    s3Headers

    any headers you want to add

    returns

    Source of object metadata

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html (version 2 API)

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html (version 1 API)

  60. def listBucket(bucket: String, prefix: Optional[String]): Source[ListBucketResultContents, NotUsed]

    Will return a source of object metadata for a given bucket with optional prefix using version 2 of the List Bucket API. This will automatically page through all keys with the given parameters.

    The org.apache.pekko.stream.connectors.s3.list-bucket-api-version can be set to 1 to use the older API version 1

    bucket

    Which bucket you list object metadata for

    prefix

    Prefix of the keys you want to list under passed bucket

    returns

    Source of object metadata

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html (version 2 API)

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html (version 1 API)
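The prefix-only overload can be sketched as below; bucket and prefix are placeholders, and paging across ListObjectsV2 responses is handled by the connector, so the stream simply emits one element per key:

```java
import org.apache.pekko.actor.ActorSystem;
import org.apache.pekko.stream.connectors.s3.javadsl.S3;
import org.apache.pekko.stream.javadsl.Sink;

import java.util.Optional;

public class ListBucketSketch {
  public static void main(String[] args) {
    ActorSystem system = ActorSystem.create("s3-list");
    // Emits one ListBucketResultContents per object under the prefix.
    S3.listBucket("my-bucket", Optional.of("images/"))
        .runWith(Sink.foreach(contents -> System.out.println(contents)), system)
        .thenAccept(done -> system.terminate());
  }
}
```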

  61. def listBucketAndCommonPrefixes(bucket: String, delimiter: String, prefix: Optional[String], s3Headers: S3Headers): Source[Pair[List[ListBucketResultContents], List[ListBucketResultCommonPrefixes]], NotUsed]

    Will return a source of object metadata and common prefixes for a given bucket and delimiter with optional prefix using version 2 of the List Bucket API. This will automatically page through all keys with the given parameters.

    The pekko.connectors.s3.list-bucket-api-version can be set to 1 to use the older API version 1

    bucket

    Which bucket you list object metadata for

    delimiter

    Delimiter to use for listing only one level of hierarchy

    prefix

    Prefix of the keys you want to list under passed bucket

    s3Headers

    any headers you want to add

    returns

    Source of Pair of (List of ListBucketResultContents, List of ListBucketResultCommonPrefixes)

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html (version 2 API)

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html (version 1 API)

    https://docs.aws.amazon.com/AmazonS3/latest/dev/ListingKeysHierarchy.html (prefix and delimiter documentation)

  62. def listBuckets(s3Headers: S3Headers): Source[ListBucketsResultContents, NotUsed]

    Will return a list containing all of the buckets for the current AWS account

    s3Headers

    any headers you want to add

    returns

    Source of ListBucketsResultContents

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBuckets.html

  63. def listBuckets(): Source[ListBucketsResultContents, NotUsed]

    Will return a list containing all of the buckets for the current AWS account

    returns

    Source of ListBucketsResultContents

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBuckets.html

  64. def listMultipartUpload(bucket: String, prefix: Optional[String], s3Headers: S3Headers): Source[ListMultipartUploadResultUploads, NotUsed]

    Will return in progress or aborted multipart uploads. This will automatically page through all keys with the given parameters.

    bucket

    Which bucket you list in-progress multipart uploads for

    prefix

    Prefix of the keys you want to list under passed bucket

    s3Headers

    any headers you want to add

    returns

    Source of ListMultipartUploadResultUploads

  65. def listMultipartUpload(bucket: String, prefix: Optional[String]): Source[ListMultipartUploadResultUploads, NotUsed]

    Will return in progress or aborted multipart uploads. This will automatically page through all keys with the given parameters.

    bucket

    Which bucket you list in-progress multipart uploads for

    prefix

    Prefix of the keys you want to list under passed bucket

    returns

    Source of ListMultipartUploadResultUploads

  66. def listMultipartUploadAndCommonPrefixes(bucket: String, delimiter: String, prefix: Optional[String], s3Headers: S3Headers = S3Headers.empty): Source[Pair[List[ListMultipartUploadResultUploads], List[CommonPrefixes]], NotUsed]

    Will return in progress or aborted multipart uploads with optional prefix and delimiter. This will automatically page through all keys with the given parameters.

    bucket

    Which bucket you list in-progress multipart uploads for

    delimiter

    Delimiter to use for listing only one level of hierarchy

    prefix

    Prefix of the keys you want to list under the passed bucket

    s3Headers

    any headers you want to add

    returns

    Source of Pair of (List of ListMultipartUploadResultUploads, List of CommonPrefixes)

  67. def listObjectVersions(bucket: String, delimiter: String, prefix: Optional[String], s3Headers: S3Headers): Source[Pair[List[ListObjectVersionsResultVersions], List[DeleteMarkers]], NotUsed]

    List all versioned objects for a bucket with optional prefix and delimiter.

    List all versioned objects for a bucket with optional prefix and delimiter. This will automatically page through all keys with the given parameters.

    bucket

    The bucket whose object versions you want to list

    delimiter

    Delimiter to use for listing only one level of hierarchy

    prefix

    Prefix of the keys you want to list under the passed bucket

    s3Headers

    any headers you want to add

    returns

    Source of Pair of (List of ListObjectVersionsResultVersions, List of DeleteMarkers)

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html

  68. def listObjectVersions(bucket: String, prefix: Optional[String], s3Headers: S3Headers): Source[Pair[List[ListObjectVersionsResultVersions], List[DeleteMarkers]], NotUsed]

    List all versioned objects for a bucket with optional prefix.

    List all versioned objects for a bucket with optional prefix. This will automatically page through all keys with the given parameters.

    bucket

    The bucket whose object versions you want to list

    prefix

    Prefix of the keys you want to list under the passed bucket

    s3Headers

    any headers you want to add

    returns

    Source of Pair of (List of ListObjectVersionsResultVersions, List of DeleteMarkers)

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html

  69. def listObjectVersions(bucket: String, prefix: Optional[String]): Source[Pair[List[ListObjectVersionsResultVersions], List[DeleteMarkers]], NotUsed]

    List all versioned objects for a bucket with optional prefix.

    List all versioned objects for a bucket with optional prefix. This will automatically page through all keys with the given parameters.

    bucket

    The bucket whose object versions you want to list

    prefix

    Prefix of the keys you want to list under the passed bucket

    returns

    Source of Pair of (List of ListObjectVersionsResultVersions, List of DeleteMarkers)

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html
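
    A sketch of walking every version and delete marker (bucket and prefix names are placeholders); each emitted element is a Pair of the version list and the delete-marker list for one page of results:

    ```java
    import java.util.Optional;

    import org.apache.pekko.actor.ActorSystem;
    import org.apache.pekko.stream.connectors.s3.javadsl.S3;
    import org.apache.pekko.stream.javadsl.Sink;

    public class ListVersionsSketch {
      public static void main(String[] args) {
        ActorSystem system = ActorSystem.create();

        // Each element arrives as a Pair of (object versions, delete markers)
        S3.listObjectVersions("my-bucket", Optional.of("data/"))
            .runWith(Sink.foreach(page -> {
              System.out.println("versions: " + page.first().size());
              System.out.println("delete markers: " + page.second().size());
            }), system);
      }
    }
    ```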

  70. def listObjectVersionsAndCommonPrefixes(bucket: String, delimiter: String, prefix: Option[String], s3Headers: S3Headers): Source[Tuple3[List[ListObjectVersionsResultVersions], List[DeleteMarkers], List[CommonPrefixes]], NotUsed]

    List all versioned objects for a bucket with optional prefix and delimiter.

    List all versioned objects for a bucket with optional prefix and delimiter. This will automatically page through all keys with the given parameters.

    bucket

    The bucket whose object versions you want to list

    delimiter

    Delimiter to use for listing only one level of hierarchy

    prefix

    Prefix of the keys you want to list under the passed bucket

    s3Headers

    any headers you want to add

    returns

    Source of Tuple3 of (List of ListObjectVersionsResultVersions, List of DeleteMarkers, List of CommonPrefixes)

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html

  71. def listParts(bucket: String, key: String, uploadId: String, s3Headers: S3Headers): Source[ListPartsResultParts, NotUsed]

    List uploaded parts for a specific upload.

    List uploaded parts for a specific upload. This will automatically page through all keys with the given parameters.

    bucket

    The bucket that contains the uploaded parts

    key

    The key to which the parts were uploaded

    uploadId

    Unique identifier of the upload for which you want to list the uploaded parts

    s3Headers

    any headers you want to add

    returns

    Source of ListPartsResultParts

  72. def listParts(bucket: String, key: String, uploadId: String): Source[ListPartsResultParts, NotUsed]

    List uploaded parts for a specific upload.

    List uploaded parts for a specific upload. This will automatically page through all keys with the given parameters.

    bucket

    The bucket that contains the uploaded parts

    key

    The key to which the parts were uploaded

    uploadId

    Unique identifier of the upload for which you want to list the uploaded parts

    returns

    Source of ListPartsResultParts
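
    For example (the bucket, key, and upload id below are placeholders), collecting the parts of a known multipart upload:

    ```java
    import java.util.List;
    import java.util.concurrent.CompletionStage;

    import org.apache.pekko.actor.ActorSystem;
    import org.apache.pekko.stream.connectors.s3.ListPartsResultParts;
    import org.apache.pekko.stream.connectors.s3.javadsl.S3;
    import org.apache.pekko.stream.javadsl.Sink;

    public class ListPartsSketch {
      public static void main(String[] args) {
        ActorSystem system = ActorSystem.create();

        // Placeholder: in practice, obtained when the upload was initiated
        String uploadId = "example-upload-id";

        CompletionStage<List<ListPartsResultParts>> parts =
            S3.listParts("my-bucket", "big-file.bin", uploadId)
                .runWith(Sink.seq(), system);
      }
    }
    ```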

  73. def makeBucket(bucketName: String, system: ClassicActorSystemProvider, attributes: Attributes, s3Headers: S3Headers): CompletionStage[Done]

    Create a new bucket with a given name

    Create a new bucket with a given name

    bucketName

    bucket name

    system

    the actor system which provides the materializer to run with

    attributes

    attributes to run request with

    s3Headers

    any headers you want to add

    returns

    CompletionStage of type Done as the API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html

  74. def makeBucket(bucketName: String, system: ClassicActorSystemProvider): CompletionStage[Done]

    Create a new bucket with a given name

    Create a new bucket with a given name

    bucketName

    bucket name

    system

    actor system to run with

    returns

    CompletionStage of type Done as the API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html

  75. def makeBucket(bucketName: String, system: ClassicActorSystemProvider, attributes: Attributes): CompletionStage[Done]

    Create a new bucket with a given name

    Create a new bucket with a given name

    bucketName

    bucket name

    system

    actor system to run with

    attributes

    attributes to run request with

    returns

    CompletionStage of type Done as the API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html

  76. def makeBucketSource(bucketName: String, s3Headers: S3Headers): Source[Done, NotUsed]

    Create a new bucket with a given name

    Create a new bucket with a given name

    bucketName

    bucket name

    s3Headers

    any headers you want to add

    returns

    Source of type Done as the API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html

  77. def makeBucketSource(bucketName: String): Source[Done, NotUsed]

    Create a new bucket with a given name

    Create a new bucket with a given name

    bucketName

    bucket name

    returns

    Source of type Done as the API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html
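
    A sketch (the bucket name is a placeholder) of the two styles: the eagerly run CompletionStage variant and the composable Source variant, which does nothing until materialized:

    ```java
    import java.util.concurrent.CompletionStage;

    import org.apache.pekko.Done;
    import org.apache.pekko.NotUsed;
    import org.apache.pekko.actor.ActorSystem;
    import org.apache.pekko.stream.connectors.s3.javadsl.S3;
    import org.apache.pekko.stream.javadsl.Source;

    public class MakeBucketSketch {
      public static void main(String[] args) {
        ActorSystem system = ActorSystem.create();

        // Runs immediately, completing once the bucket has been created
        CompletionStage<Done> created = S3.makeBucket("my-new-bucket", system);

        // Deferred variant: nothing happens until this Source is run
        Source<Done, NotUsed> makeBucket = S3.makeBucketSource("my-new-bucket");
      }
    }
    ```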

  78. def multipartCopy(sourceBucket: String, sourceKey: String, targetBucket: String, targetKey: String): RunnableGraph[CompletionStage[MultipartUploadResult]]

    Copy an S3 Object by making multiple requests.

    Copy an S3 Object by making multiple requests.

    sourceBucket

    the source s3 bucket name

    sourceKey

    the source s3 key

    targetBucket

    the target s3 bucket name

    targetKey

    the target s3 key

    returns

    the MultipartUploadResult of the uploaded S3 Object

  79. def multipartCopy(sourceBucket: String, sourceKey: String, targetBucket: String, targetKey: String, s3Headers: S3Headers): RunnableGraph[CompletionStage[MultipartUploadResult]]

    Copy an S3 Object by making multiple requests.

    Copy an S3 Object by making multiple requests.

    sourceBucket

    the source s3 bucket name

    sourceKey

    the source s3 key

    targetBucket

    the target s3 bucket name

    targetKey

    the target s3 key

    s3Headers

    any headers you want to add

    returns

    the MultipartUploadResult of the uploaded S3 Object

  80. def multipartCopy(sourceBucket: String, sourceKey: String, targetBucket: String, targetKey: String, contentType: ContentType, s3Headers: S3Headers): RunnableGraph[CompletionStage[MultipartUploadResult]]

    Copy an S3 Object by making multiple requests.

    Copy an S3 Object by making multiple requests.

    sourceBucket

    the source s3 bucket name

    sourceKey

    the source s3 key

    targetBucket

    the target s3 bucket name

    targetKey

    the target s3 key

    contentType

    an optional ContentType

    s3Headers

    any headers you want to add

    returns

    the MultipartUploadResult of the uploaded S3 Object

  81. def multipartCopy(sourceBucket: String, sourceKey: String, targetBucket: String, targetKey: String, sourceVersionId: Optional[String], s3Headers: S3Headers): RunnableGraph[CompletionStage[MultipartUploadResult]]

    Copy an S3 Object by making multiple requests.

    Copy an S3 Object by making multiple requests.

    sourceBucket

    the source s3 bucket name

    sourceKey

    the source s3 key

    targetBucket

    the target s3 bucket name

    targetKey

    the target s3 key

    sourceVersionId

    version id of the source object, if versioning is enabled in the source bucket

    s3Headers

    any headers you want to add

    returns

    the MultipartUploadResult of the uploaded S3 Object

  82. def multipartCopy(sourceBucket: String, sourceKey: String, targetBucket: String, targetKey: String, sourceVersionId: Optional[String], contentType: ContentType, s3Headers: S3Headers): RunnableGraph[CompletionStage[MultipartUploadResult]]

    Copy an S3 Object by making multiple requests.

    Copy an S3 Object by making multiple requests.

    sourceBucket

    the source s3 bucket name

    sourceKey

    the source s3 key

    targetBucket

    the target s3 bucket name

    targetKey

    the target s3 key

    sourceVersionId

    version id of the source object, if versioning is enabled in the source bucket

    contentType

    an optional ContentType

    s3Headers

    any headers you want to add

    returns

    the MultipartUploadResult of the uploaded S3 Object
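
    Since multipartCopy returns a RunnableGraph, nothing happens until it is run. A sketch with placeholder bucket and key names:

    ```java
    import java.util.concurrent.CompletionStage;

    import org.apache.pekko.actor.ActorSystem;
    import org.apache.pekko.stream.connectors.s3.MultipartUploadResult;
    import org.apache.pekko.stream.connectors.s3.javadsl.S3;

    public class MultipartCopySketch {
      public static void main(String[] args) {
        ActorSystem system = ActorSystem.create();

        // The copy only starts when the graph is run
        CompletionStage<MultipartUploadResult> result =
            S3.multipartCopy("source-bucket", "source-key",
                             "target-bucket", "target-key")
                .run(system);
      }
    }
    ```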

  83. def multipartUpload(bucket: String, key: String): Sink[ByteString, CompletionStage[MultipartUploadResult]]

    Uploads an S3 Object by making multiple requests

    Uploads an S3 Object by making multiple requests

    bucket

    the s3 bucket name

    key

    the s3 object key

    returns

    a Sink that accepts ByteString elements and materializes to a CompletionStage of MultipartUploadResult

  84. def multipartUpload(bucket: String, key: String, contentType: ContentType): Sink[ByteString, CompletionStage[MultipartUploadResult]]

    Uploads an S3 Object by making multiple requests

    Uploads an S3 Object by making multiple requests

    bucket

    the s3 bucket name

    key

    the s3 object key

    contentType

    an optional ContentType

    returns

    a Sink that accepts ByteString elements and materializes to a CompletionStage of MultipartUploadResult

  85. def multipartUpload(bucket: String, key: String, contentType: ContentType, s3Headers: S3Headers): Sink[ByteString, CompletionStage[MultipartUploadResult]]

    Uploads an S3 Object by making multiple requests

    Uploads an S3 Object by making multiple requests

    bucket

    the s3 bucket name

    key

    the s3 object key

    contentType

    an optional ContentType

    s3Headers

    any headers you want to add

    returns

    a Sink that accepts ByteString elements and materializes to a CompletionStage of MultipartUploadResult
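
    A sketch of streaming bytes into a multipart upload (bucket and key names are placeholders):

    ```java
    import java.util.concurrent.CompletionStage;

    import org.apache.pekko.actor.ActorSystem;
    import org.apache.pekko.stream.connectors.s3.MultipartUploadResult;
    import org.apache.pekko.stream.connectors.s3.javadsl.S3;
    import org.apache.pekko.stream.javadsl.Source;
    import org.apache.pekko.util.ByteString;

    public class MultipartUploadSketch {
      public static void main(String[] args) {
        ActorSystem system = ActorSystem.create();

        // Any Source of ByteString can feed the upload sink
        CompletionStage<MultipartUploadResult> result =
            Source.single(ByteString.fromString("hello"))
                .runWith(S3.multipartUpload("my-bucket", "greeting.txt"), system);
      }
    }
    ```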

  86. def multipartUploadWithContext[C](bucket: String, key: String, chunkUploadSink: Sink[Pair[UploadPartResponse, Iterable[C]], _]): Sink[Pair[ByteString, C], CompletionStage[MultipartUploadResult]]

    Uploads an S3 Object by making multiple requests.

    Uploads an S3 Object by making multiple requests. Unlike multipartUpload, this version allows you to pass in a context (typically from a SourceWithContext/FlowWithContext) along with a chunkUploadSink that defines how to act whenever a chunk is uploaded.

    Note that this version of multipart upload ignores buffering

    C

    The Context that is passed along with the ByteString

    bucket

    the s3 bucket name

    key

    the s3 object key

    chunkUploadSink

    A sink that acts as a callback, executed whenever an entire chunk is uploaded to S3 (successfully or unsuccessfully). Since each chunk can contain more than one element emitted from the original flow/source, you are provided with the list of contexts. The internal implementation uses Flow.alsoTo for chunkUploadSink, which means that backpressure is applied to the upload stream if chunkUploadSink is too slow; likewise, any failure is propagated to the upload stream. Sink materialization is also shared with the returned Sink.

    returns

    a Sink that accepts Pair of (ByteString, C) elements and materializes to a CompletionStage of MultipartUploadResult

  87. def multipartUploadWithContext[C](bucket: String, key: String, chunkUploadSink: Sink[Pair[UploadPartResponse, Iterable[C]], _], contentType: ContentType): Sink[Pair[ByteString, C], CompletionStage[MultipartUploadResult]]

    Uploads an S3 Object by making multiple requests.

    Uploads an S3 Object by making multiple requests. Unlike multipartUpload, this version allows you to pass in a context (typically from a SourceWithContext/FlowWithContext) along with a chunkUploadSink that defines how to act whenever a chunk is uploaded.

    Note that this version of multipart upload ignores buffering

    C

    The Context that is passed along with the ByteString

    bucket

    the s3 bucket name

    key

    the s3 object key

    chunkUploadSink

    A sink that acts as a callback, executed whenever an entire chunk is uploaded to S3 (successfully or unsuccessfully). Since each chunk can contain more than one element emitted from the original flow/source, you are provided with the list of contexts. The internal implementation uses Flow.alsoTo for chunkUploadSink, which means that backpressure is applied to the upload stream if chunkUploadSink is too slow; likewise, any failure is propagated to the upload stream. Sink materialization is also shared with the returned Sink.

    contentType

    an optional ContentType

    returns

    a Sink that accepts Pair of (ByteString, C) elements and materializes to a CompletionStage of MultipartUploadResult

  88. def multipartUploadWithContext[C](bucket: String, key: String, chunkUploadSink: Sink[Pair[UploadPartResponse, Iterable[C]], _], contentType: ContentType, s3Headers: S3Headers): Sink[Pair[ByteString, C], CompletionStage[MultipartUploadResult]]

    Uploads an S3 Object by making multiple requests.

    Uploads an S3 Object by making multiple requests. Unlike multipartUpload, this version allows you to pass in a context (typically from a SourceWithContext/FlowWithContext) along with a chunkUploadSink that defines how to act whenever a chunk is uploaded.

    Note that this version of multipart upload ignores buffering

    C

    The Context that is passed along with the ByteString

    bucket

    the s3 bucket name

    key

    the s3 object key

    chunkUploadSink

    A sink that acts as a callback, executed whenever an entire chunk is uploaded to S3 (successfully or unsuccessfully). Since each chunk can contain more than one element emitted from the original flow/source, you are provided with the list of contexts. The internal implementation uses Flow.alsoTo for chunkUploadSink, which means that backpressure is applied to the upload stream if chunkUploadSink is too slow; likewise, any failure is propagated to the upload stream. Sink materialization is also shared with the returned Sink.

    contentType

    an optional ContentType

    s3Headers

    any headers you want to add

    returns

    a Sink that accepts Pair of (ByteString, C) elements and materializes to a CompletionStage of MultipartUploadResult
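
    A sketch of the context-passing variant, assuming the context type C is a plain String; the chunkUploadSink here just logs which contexts each uploaded chunk covered (bucket and key names are placeholders):

    ```java
    import java.util.concurrent.CompletionStage;

    import org.apache.pekko.Done;
    import org.apache.pekko.actor.ActorSystem;
    import org.apache.pekko.japi.Pair;
    import org.apache.pekko.stream.connectors.s3.MultipartUploadResult;
    import org.apache.pekko.stream.connectors.s3.UploadPartResponse;
    import org.apache.pekko.stream.connectors.s3.javadsl.S3;
    import org.apache.pekko.stream.javadsl.Sink;
    import org.apache.pekko.util.ByteString;

    public class UploadWithContextSketch {
      public static void main(String[] args) {
        ActorSystem system = ActorSystem.create();

        // Called once per uploaded chunk with the contexts that chunk contained
        Sink<Pair<UploadPartResponse, Iterable<String>>, CompletionStage<Done>> chunkUploadSink =
            Sink.<Pair<UploadPartResponse, Iterable<String>>>foreach(
                pair -> System.out.println("chunk done, contexts: " + pair.second()));

        Sink<Pair<ByteString, String>, CompletionStage<MultipartUploadResult>> upload =
            S3.multipartUploadWithContext("my-bucket", "some-key", chunkUploadSink);
      }
    }
    ```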

  89. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  90. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  91. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  92. def putBucketVersioning(bucketName: String, bucketVersioning: BucketVersioning, s3Headers: S3Headers)(implicit system: ClassicActorSystemProvider, attributes: Attributes): CompletionStage[Done]

    Sets the versioning state of an existing bucket.

    Sets the versioning state of an existing bucket.

    bucketName

    Bucket name

    bucketVersioning

    The state that you want to update

    s3Headers

    any headers you want to add

    returns

    CompletionStage of type Done as the API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketVersioning.html

  93. def putBucketVersioning(bucketName: String, bucketVersioning: BucketVersioning)(implicit system: ClassicActorSystemProvider, attributes: Attributes = Attributes()): CompletionStage[Done]

    Sets the versioning state of an existing bucket.

    Sets the versioning state of an existing bucket.

    bucketName

    Bucket name

    bucketVersioning

    The state that you want to update

    returns

    CompletionStage of type Done as the API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketVersioning.html

  94. def putBucketVersioningSource(bucketName: String, bucketVersioning: BucketVersioning, s3Headers: S3Headers): Source[Done, NotUsed]

    Sets the versioning state of an existing bucket.

    Sets the versioning state of an existing bucket.

    bucketName

    Bucket name

    bucketVersioning

    The state that you want to update

    s3Headers

    any headers you want to add

    returns

    Source of type Done as the API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketVersioning.html

  95. def putBucketVersioningSource(bucketName: String, bucketVersioning: BucketVersioning): Source[Done, NotUsed]

    Sets the versioning state of an existing bucket.

    Sets the versioning state of an existing bucket.

    bucketName

    Bucket name

    bucketVersioning

    The state that you want to update

    returns

    Source of type Done as the API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketVersioning.html
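
    As a sketch, the Source variant composes into a larger stream without running immediately. Note that the BucketVersioning builder calls below (create(), withStatus(...), BucketVersioningStatus.enabled()) are assumptions about the model API, not taken from this page:

    ```java
    import org.apache.pekko.Done;
    import org.apache.pekko.NotUsed;
    import org.apache.pekko.stream.connectors.s3.BucketVersioning;
    import org.apache.pekko.stream.connectors.s3.BucketVersioningStatus;
    import org.apache.pekko.stream.connectors.s3.javadsl.S3;
    import org.apache.pekko.stream.javadsl.Source;

    public class VersioningSketch {
      public static void main(String[] args) {
        // Assumed builder API for the versioning configuration
        BucketVersioning enable =
            BucketVersioning.create().withStatus(BucketVersioningStatus.enabled());

        // Deferred: runs only when this Source is materialized
        Source<Done, NotUsed> setVersioning =
            S3.putBucketVersioningSource("my-bucket", enable);
      }
    }
    ```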

  96. def putObject(bucket: String, key: String, data: Source[ByteString, _], contentLength: Long): Source[ObjectMetadata, NotUsed]

    Uploads an S3 Object; use this for small files and multipartUpload for bigger ones

    Uploads an S3 Object; use this for small files and multipartUpload for bigger ones

    bucket

    the s3 bucket name

    key

    the s3 object key

    data

    a Source of ByteString

    contentLength

    the number of bytes that will be uploaded (required!)

    returns

    a Source containing the ObjectMetadata of the uploaded S3 Object

  97. def putObject(bucket: String, key: String, data: Source[ByteString, _], contentLength: Long, contentType: ContentType): Source[ObjectMetadata, NotUsed]

    Uploads an S3 Object; use this for small files and multipartUpload for bigger ones

    Uploads an S3 Object; use this for small files and multipartUpload for bigger ones

    bucket

    the s3 bucket name

    key

    the s3 object key

    data

    a Source of ByteString

    contentLength

    the number of bytes that will be uploaded (required!)

    contentType

    an optional ContentType

    returns

    a Source containing the ObjectMetadata of the uploaded S3 Object

  98. def putObject(bucket: String, key: String, data: Source[ByteString, _], contentLength: Long, contentType: ContentType, s3Headers: S3Headers): Source[ObjectMetadata, NotUsed]

    Uploads an S3 Object; use this for small files and multipartUpload for bigger ones

    Uploads an S3 Object; use this for small files and multipartUpload for bigger ones

    bucket

    the s3 bucket name

    key

    the s3 object key

    data

    a Source of ByteString

    contentLength

    the number of bytes that will be uploaded (required!)

    contentType

    an optional ContentType

    s3Headers

    any additional headers for the request

    returns

    a Source containing the ObjectMetadata of the uploaded S3 Object
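
    A sketch of the small-file path (bucket and key names are placeholders); contentLength is required and must match the number of bytes the data Source actually emits:

    ```java
    import org.apache.pekko.NotUsed;
    import org.apache.pekko.actor.ActorSystem;
    import org.apache.pekko.http.javadsl.model.ContentTypes;
    import org.apache.pekko.stream.connectors.s3.ObjectMetadata;
    import org.apache.pekko.stream.connectors.s3.javadsl.S3;
    import org.apache.pekko.stream.javadsl.Sink;
    import org.apache.pekko.stream.javadsl.Source;
    import org.apache.pekko.util.ByteString;

    public class PutObjectSketch {
      public static void main(String[] args) {
        ActorSystem system = ActorSystem.create();
        ByteString payload = ByteString.fromString("hello world");

        // contentLength must equal the emitted byte count
        Source<ObjectMetadata, NotUsed> upload =
            S3.putObject("my-bucket", "hello.txt",
                Source.single(payload), payload.length(),
                ContentTypes.TEXT_PLAIN_UTF8);

        upload.runWith(Sink.head(), system);
      }
    }
    ```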

  99. def request(bucket: String, key: String, versionId: Optional[String], method: HttpMethod = HttpMethods.GET, s3Headers: S3Headers = S3Headers.empty): Source[HttpResponse, NotUsed]

    Use this for low-level access to S3.

    Use this for low-level access to S3.

    bucket

    the s3 bucket name

    key

    the s3 object key

    versionId

    optional versionId of source object

    method

    the HttpMethod to use when making the request

    s3Headers

    any headers you want to add

    returns

    a raw HTTP response from S3

  100. def request(bucket: String, key: String, method: HttpMethod, s3Headers: S3Headers): Source[HttpResponse, NotUsed]

    Use this for low-level access to S3.

    Use this for low-level access to S3.

    bucket

    the s3 bucket name

    key

    the s3 object key

    method

    the HttpMethod to use when making the request

    s3Headers

    any headers you want to add

    returns

    a raw HTTP response from S3
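
    For example, issuing a raw HEAD request to inspect an object (names are placeholders; S3Headers.create() as an empty-headers factory is an assumption). Being a raw HTTP response, its entity must still be consumed or discarded:

    ```java
    import org.apache.pekko.NotUsed;
    import org.apache.pekko.actor.ActorSystem;
    import org.apache.pekko.http.javadsl.model.HttpMethods;
    import org.apache.pekko.http.javadsl.model.HttpResponse;
    import org.apache.pekko.stream.connectors.s3.S3Headers;
    import org.apache.pekko.stream.connectors.s3.javadsl.S3;
    import org.apache.pekko.stream.javadsl.Sink;
    import org.apache.pekko.stream.javadsl.Source;

    public class RawRequestSketch {
      public static void main(String[] args) {
        ActorSystem system = ActorSystem.create();

        Source<HttpResponse, NotUsed> response =
            S3.request("my-bucket", "some-key", HttpMethods.HEAD, S3Headers.create());

        // A HEAD response carries no body, so printing the status is enough here
        response.runWith(Sink.foreach(r -> System.out.println(r.status())), system);
      }
    }
    ```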

  101. def resumeMultipartUpload(bucket: String, key: String, uploadId: String, previousParts: Iterable[Part]): Sink[ByteString, CompletionStage[MultipartUploadResult]]

    Resumes from a previously aborted multipart upload by providing the uploadId and previous upload part identifiers

    Resumes from a previously aborted multipart upload by providing the uploadId and previous upload part identifiers

    bucket

    the s3 bucket name

    key

    the s3 object key

    uploadId

    the upload that you want to resume

    previousParts

    The previously uploaded parts ending just before when this upload will commence

    returns

    a Sink that accepts ByteString elements and materializes to a CompletionStage of MultipartUploadResult

  102. def resumeMultipartUpload(bucket: String, key: String, uploadId: String, previousParts: Iterable[Part], contentType: ContentType): Sink[ByteString, CompletionStage[MultipartUploadResult]]

    Resumes from a previously aborted multipart upload by providing the uploadId and previous upload part identifiers

    Resumes from a previously aborted multipart upload by providing the uploadId and previous upload part identifiers

    bucket

    the s3 bucket name

    key

    the s3 object key

    uploadId

    the upload that you want to resume

    previousParts

    The previously uploaded parts ending just before when this upload will commence

    contentType

    an optional ContentType

    returns

    a Sink that accepts ByteString elements and materializes to a CompletionStage of MultipartUploadResult

  103. def resumeMultipartUpload(bucket: String, key: String, uploadId: String, previousParts: Iterable[Part], contentType: ContentType, s3Headers: S3Headers): Sink[ByteString, CompletionStage[MultipartUploadResult]]

    Resumes from a previously aborted multipart upload by providing the uploadId and previous upload part identifiers

    Resumes from a previously aborted multipart upload by providing the uploadId and previous upload part identifiers

    bucket

    the s3 bucket name

    key

    the s3 object key

    uploadId

    the upload that you want to resume

    previousParts

    The previously uploaded parts ending just before when this upload will commence

    contentType

    an optional ContentType

    s3Headers

    any headers you want to add

    returns

    a Sink that accepts ByteString elements and materializes to a CompletionStage of MultipartUploadResult
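
    A sketch of resuming an upload: the parts already uploaded would normally be recovered via S3.listParts; here previousParts is left as an assumed, already-populated collection (all names are placeholders):

    ```java
    import java.util.List;
    import java.util.concurrent.CompletionStage;

    import org.apache.pekko.actor.ActorSystem;
    import org.apache.pekko.stream.connectors.s3.MultipartUploadResult;
    import org.apache.pekko.stream.connectors.s3.Part;
    import org.apache.pekko.stream.connectors.s3.javadsl.S3;
    import org.apache.pekko.stream.javadsl.Source;
    import org.apache.pekko.util.ByteString;

    public class ResumeUploadSketch {
      public static void main(String[] args) {
        ActorSystem system = ActorSystem.create();

        String uploadId = "example-upload-id";  // placeholder
        List<Part> previousParts = List.of();   // assumed: recovered via S3.listParts

        // Continues the upload from just after the last previously uploaded part
        CompletionStage<MultipartUploadResult> result =
            Source.single(ByteString.fromString("remaining bytes"))
                .runWith(S3.resumeMultipartUpload("my-bucket", "big-file.bin",
                    uploadId, previousParts), system);
      }
    }
    ```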

  104. def resumeMultipartUploadWithContext[C](bucket: String, key: String, uploadId: String, previousParts: Iterable[Part], chunkUploadSink: Sink[Pair[UploadPartResponse, Iterable[C]], _]): Sink[Pair[ByteString, C], CompletionStage[MultipartUploadResult]]

    Resumes from a previously aborted multipart upload by providing the uploadId and previous upload part identifiers.

    Resumes from a previously aborted multipart upload by providing the uploadId and previous upload part identifiers. Unlike resumeMultipartUpload, this version allows you to pass in a context (typically from a SourceWithContext/FlowWithContext) along with a chunkUploadSink that defines how to act whenever a chunk is uploaded.

    Note that this version of resuming multipart upload ignores buffering

    C

    The Context that is passed along with the ByteString

    bucket

    the s3 bucket name

    key

    the s3 object key

    uploadId

    the upload that you want to resume

    previousParts

    The previously uploaded parts ending just before when this upload will commence

    chunkUploadSink

    A sink that acts as a callback, executed whenever an entire chunk is uploaded to S3 (successfully or unsuccessfully). Since each chunk can contain more than one element emitted from the original flow/source, you are provided with the list of contexts. The internal implementation uses Flow.alsoTo for chunkUploadSink, which means that backpressure is applied to the upload stream if chunkUploadSink is too slow; likewise, any failure is propagated to the upload stream. Sink materialization is also shared with the returned Sink.

    returns

    a Sink that accepts Pair of (ByteString, C) elements and materializes to a CompletionStage of MultipartUploadResult

  105. def resumeMultipartUploadWithContext[C](bucket: String, key: String, uploadId: String, previousParts: Iterable[Part], chunkUploadSink: Sink[Pair[UploadPartResponse, Iterable[C]], _], contentType: ContentType): Sink[Pair[ByteString, C], CompletionStage[MultipartUploadResult]]

    Resumes from a previously aborted multipart upload by providing the uploadId and previous upload part identifiers.

    Resumes from a previously aborted multipart upload by providing the uploadId and previous upload part identifiers. Unlike resumeMultipartUpload, this version allows you to pass in a context (typically from a SourceWithContext/FlowWithContext) along with a chunkUploadSink that defines how to act whenever a chunk is uploaded.

    Note that this version of resuming multipart upload ignores buffering

    C

    The Context that is passed along with the ByteString

    bucket

    the s3 bucket name

    key

    the s3 object key

    uploadId

    the upload that you want to resume

    previousParts

    The previously uploaded parts ending just before when this upload will commence

    chunkUploadSink

    A sink that acts as a callback, executed whenever an entire chunk is uploaded to S3 (successfully or unsuccessfully). Since each chunk can contain more than one element emitted from the original flow/source, you are provided with the list of contexts. The internal implementation uses Flow.alsoTo for chunkUploadSink, which means that backpressure is applied to the upload stream if chunkUploadSink is too slow; likewise, any failure is propagated to the upload stream. Sink materialization is also shared with the returned Sink.

    contentType

    an optional ContentType

    returns

    a Sink that accepts Pair of (ByteString, C) elements and materializes to a CompletionStage of MultipartUploadResult

  106. def resumeMultipartUploadWithContext[C](bucket: String, key: String, uploadId: String, previousParts: Iterable[Part], chunkUploadSink: Sink[Pair[UploadPartResponse, Iterable[C]], _], contentType: ContentType, s3Headers: S3Headers): Sink[Pair[ByteString, C], CompletionStage[MultipartUploadResult]]

    Resumes from a previously aborted multipart upload by providing the uploadId and previous upload part identifiers.

    Resumes from a previously aborted multipart upload by providing the uploadId and previous upload part identifiers. Unlike resumeMultipartUpload, this version allows you to pass in a context (typically from a SourceWithContext/FlowWithContext) along with a chunkUploadSink that defines how to act whenever a chunk is uploaded.

    Note that this version of resuming multipart upload ignores buffering

    C

    The Context that is passed along with the ByteString

    bucket

    the s3 bucket name

    key

    the s3 object key

    uploadId

    the upload that you want to resume

    previousParts

    The previously uploaded parts ending just before when this upload will commence

    chunkUploadSink

    A sink that acts as a callback, executed whenever an entire chunk is uploaded to S3 (successfully or unsuccessfully). Since each chunk can contain more than one element emitted from the original flow/source, you are provided with the list of contexts. The internal implementation uses Flow.alsoTo for chunkUploadSink, which means that backpressure is applied to the upload stream if chunkUploadSink is too slow; likewise, any failure is propagated to the upload stream. Sink materialization is also shared with the returned Sink.

    contentType

    an optional ContentType

    s3Headers

    any headers you want to add

    returns

    a Sink that accepts Pair of (ByteString, C) elements and materializes to a CompletionStage of MultipartUploadResult

  107. final def synchronized[T0](arg0: => T0): T0
    Definition Classes
    AnyRef
  108. def toString(): String
    Definition Classes
    AnyRef → Any
  109. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  110. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  111. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException]) @native()

Deprecated Value Members

  1. def download(bucket: String, key: String, range: ByteRange, versionId: Optional[String], s3Headers: S3Headers): Source[Optional[Pair[Source[ByteString, NotUsed], ObjectMetadata]], NotUsed]

    Downloads a specific byte range of an S3 Object

    Downloads a specific byte range of an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    range

    the ByteRange you want to download

    versionId

    optional version id of the object

    s3Headers

    any headers you want to add

    returns

    A pekko.japi.Pair with a Source of ByteString and the ObjectMetadata

    Annotations
    @deprecated
    Deprecated

    (Since version 4.0.0) Use S3.getObject instead

  2. def download(bucket: String, key: String, range: ByteRange, s3Headers: S3Headers): Source[Optional[Pair[Source[ByteString, NotUsed], ObjectMetadata]], NotUsed]

    Downloads a specific byte range of an S3 Object

    Downloads a specific byte range of an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    range

    the ByteRange you want to download

    s3Headers

    any headers you want to add

    returns

    A pekko.japi.Pair with a Source of ByteString and the ObjectMetadata

    Annotations
    @deprecated
    Deprecated

    (Since version 4.0.0) Use S3.getObject instead

  3. def download(bucket: String, key: String, s3Headers: S3Headers): Source[Optional[Pair[Source[ByteString, NotUsed], ObjectMetadata]], NotUsed]

    Downloads an S3 Object

    Downloads an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    s3Headers

    any headers you want to add

    returns

    A Source that emits an Optional pekko.japi.Pair holding a Source of ByteString and the ObjectMetadata

    Annotations
    @deprecated
    Deprecated

    (Since version 4.0.0) Use S3.getObject instead

  4. def download(bucket: String, key: String, range: ByteRange, versionId: Optional[String], sse: ServerSideEncryption): Source[Optional[Pair[Source[ByteString, NotUsed], ObjectMetadata]], NotUsed]

    Downloads a specific byte range of an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    range

    the ByteRange you want to download

    versionId

    optional version id of the object

    sse

    the server side encryption to use

    returns

    A Source that emits an Optional pekko.japi.Pair holding a Source of ByteString and the ObjectMetadata

    Annotations
    @deprecated
    Deprecated

    (Since version 4.0.0) Use S3.getObject instead

  5. def download(bucket: String, key: String, range: ByteRange, sse: ServerSideEncryption): Source[Optional[Pair[Source[ByteString, NotUsed], ObjectMetadata]], NotUsed]

    Downloads a specific byte range of an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    range

    the ByteRange you want to download

    sse

    the server side encryption to use

    returns

    A Source that emits an Optional pekko.japi.Pair holding a Source of ByteString and the ObjectMetadata

    Annotations
    @deprecated
    Deprecated

    (Since version 4.0.0) Use S3.getObject instead

  6. def download(bucket: String, key: String, range: ByteRange): Source[Optional[Pair[Source[ByteString, NotUsed], ObjectMetadata]], NotUsed]

    Downloads a specific byte range of an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    range

    the ByteRange you want to download

    returns

    A Source that emits an Optional pekko.japi.Pair holding a Source of ByteString and the ObjectMetadata

    Annotations
    @deprecated
    Deprecated

    (Since version 4.0.0) Use S3.getObject instead

  7. def download(bucket: String, key: String, sse: ServerSideEncryption): Source[Optional[Pair[Source[ByteString, NotUsed], ObjectMetadata]], NotUsed]

    Downloads an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    sse

    the server side encryption to use

    returns

    A Source that emits an Optional pekko.japi.Pair holding a Source of ByteString and the ObjectMetadata

    Annotations
    @deprecated
    Deprecated

    (Since version 4.0.0) Use S3.getObject instead

  8. def download(bucket: String, key: String): Source[Optional[Pair[Source[ByteString, NotUsed], ObjectMetadata]], NotUsed]

    Downloads an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    returns

    A Source that emits an Optional pekko.japi.Pair holding a Source of ByteString and the ObjectMetadata

    Annotations
    @deprecated
    Deprecated

    (Since version 4.0.0) Use S3.getObject instead
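
    For the simplest overload, the replacement is a direct swap. A minimal, untested sketch assuming the pekko-connectors-s3 dependency and placeholder bucket/key names:

    ```java
    import org.apache.pekko.actor.ActorSystem;
    import org.apache.pekko.stream.connectors.s3.ObjectMetadata;
    import org.apache.pekko.stream.connectors.s3.javadsl.S3;
    import org.apache.pekko.stream.javadsl.Sink;
    import org.apache.pekko.stream.javadsl.Source;
    import org.apache.pekko.util.ByteString;
    import java.util.concurrent.CompletionStage;

    public class SimpleDownloadMigration {
      public static void main(String[] args) {
        ActorSystem system = ActorSystem.create();

        // Before (deprecated): S3.download("my-bucket", "my-key")
        // After: getObject streams the object bytes directly and exposes the
        // ObjectMetadata as the materialized value.
        Source<ByteString, CompletionStage<ObjectMetadata>> bytes =
            S3.getObject("my-bucket", "my-key");

        // Fold the bytes into a single ByteString (fine for small objects;
        // stream to a Sink for large ones).
        CompletionStage<ByteString> body =
            bytes.runWith(Sink.fold(ByteString.emptyByteString(), ByteString::concat), system);

        body.thenAccept(content -> system.terminate());
      }
    }
    ```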
