object S3

Factory of S3 operations.

Source
S3.scala

Value Members

  1. val MinChunkSize: Int
  2. def checkIfBucketExists(bucketName: String, s3Headers: S3Headers)(implicit system: ClassicActorSystemProvider, attributes: Attributes): Future[BucketAccess]

    Checks whether the bucket exists and the user has rights to perform the ListBucket operation

    bucketName

    bucket name

    s3Headers

    any headers you want to add

    returns

    Future of type BucketAccess

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html

  3. def checkIfBucketExists(bucketName: String)(implicit system: ClassicActorSystemProvider, attributes: Attributes = Attributes()): Future[BucketAccess]

    Checks whether the bucket exists and the user has rights to perform the ListBucket operation

    bucketName

    bucket name

    returns

    Future of type BucketAccess

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html
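
    Example (an illustrative sketch, not part of the original Scaladoc; the bucket name is a placeholder and an implicit ActorSystem is assumed):

      import org.apache.pekko.actor.ActorSystem
      import org.apache.pekko.stream.connectors.s3.BucketAccess
      import org.apache.pekko.stream.connectors.s3.scaladsl.S3
      import scala.concurrent.Future

      implicit val system: ActorSystem = ActorSystem()

      // Resolves to AccessGranted, AccessDenied or NotExists
      val access: Future[BucketAccess] = S3.checkIfBucketExists("my-bucket")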

  4. def checkIfBucketExistsSource(bucketName: String, s3Headers: S3Headers): Source[BucketAccess, NotUsed]

    Checks whether the bucket exists and the user has rights to perform the ListBucket operation

    bucketName

    bucket name

    s3Headers

    any headers you want to add

    returns

    Source of type BucketAccess

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html

  5. def checkIfBucketExistsSource(bucketName: String): Source[BucketAccess, NotUsed]

    Checks whether the bucket exists and the user has rights to perform the ListBucket operation

    bucketName

    bucket name

    returns

    Source of type BucketAccess

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html

  6. def completeMultipartUpload(bucket: String, key: String, uploadId: String, parts: Iterable[Part], s3Headers: S3Headers)(implicit system: ClassicActorSystemProvider, attributes: Attributes): Future[MultipartUploadResult]

    Complete a multipart upload with an already given list of parts

    bucket

    the s3 bucket name

    key

    the s3 object key

    uploadId

    the upload that you want to complete

    parts

    A list of all of the parts for the multipart upload

    s3Headers

    any headers you want to add

    returns

    Future of type MultipartUploadResult

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html

  7. def completeMultipartUpload(bucket: String, key: String, uploadId: String, parts: Iterable[Part])(implicit system: ClassicActorSystemProvider, attributes: Attributes = Attributes()): Future[MultipartUploadResult]

    Complete a multipart upload with an already given list of parts

    bucket

    the s3 bucket name

    key

    the s3 object key

    uploadId

    the upload that you want to complete

    parts

    A list of all of the parts for the multipart upload

    returns

    Future of type MultipartUploadResult

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html
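
    Example (a sketch; bucket, key, upload id and ETags are placeholders, and the Part values are assumed to carry the ETag and part number recorded when the parts were uploaded):

      import org.apache.pekko.actor.ActorSystem
      import org.apache.pekko.stream.connectors.s3.{ MultipartUploadResult, Part }
      import org.apache.pekko.stream.connectors.s3.scaladsl.S3
      import scala.concurrent.Future

      implicit val system: ActorSystem = ActorSystem()

      // ETags and part numbers recorded when the individual parts were uploaded
      val parts = List(Part("etag-of-part-1", 1), Part("etag-of-part-2", 2))

      val result: Future[MultipartUploadResult] =
        S3.completeMultipartUpload("my-bucket", "my-key", "upload-id", parts)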

  8. def deleteBucket(bucketName: String, s3Headers: S3Headers)(implicit system: ClassicActorSystemProvider, attributes: Attributes): Future[Done]

    Delete bucket with a given name

    bucketName

    bucket name

    s3Headers

    any headers you want to add

    returns

    Future of type Done as API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html

  9. def deleteBucket(bucketName: String)(implicit system: ClassicActorSystemProvider, attributes: Attributes = Attributes()): Future[Done]

    Delete bucket with a given name

    bucketName

    bucket name

    returns

    Future of type Done as API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html
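
    Example (a sketch; the bucket name is a placeholder and an implicit ActorSystem is assumed):

      import org.apache.pekko.Done
      import org.apache.pekko.actor.ActorSystem
      import org.apache.pekko.stream.connectors.s3.scaladsl.S3
      import scala.concurrent.Future

      implicit val system: ActorSystem = ActorSystem()

      val done: Future[Done] = S3.deleteBucket("my-bucket")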

  10. def deleteBucketContents(bucket: String, deleteAllVersions: Boolean): Source[Done, NotUsed]

    Deletes all S3 Objects within the given bucket

    bucket

    the s3 bucket name

    deleteAllVersions

    Whether to delete all object versions as well (applies to versioned buckets)

    returns

    A Source that will emit pekko.Done when operation is completed

  11. def deleteBucketContents(bucket: String): Source[Done, NotUsed]

    Deletes all S3 Objects within the given bucket

    bucket

    the s3 bucket name

    returns

    A Source that will emit pekko.Done when operation is completed
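
    Example (a sketch; running the returned Source with Sink.ignore yields a completion Future):

      import org.apache.pekko.Done
      import org.apache.pekko.actor.ActorSystem
      import org.apache.pekko.stream.connectors.s3.scaladsl.S3
      import org.apache.pekko.stream.scaladsl.Sink
      import scala.concurrent.Future

      implicit val system: ActorSystem = ActorSystem()

      // Also removes old object versions because deleteAllVersions = true
      val emptied: Future[Done] =
        S3.deleteBucketContents("my-bucket", deleteAllVersions = true).runWith(Sink.ignore)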

  12. def deleteBucketSource(bucketName: String, s3Headers: S3Headers): Source[Done, NotUsed]

    Delete bucket with a given name

    bucketName

    bucket name

    s3Headers

    any headers you want to add

    returns

    Source of type Done as API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html

  13. def deleteBucketSource(bucketName: String): Source[Done, NotUsed]

    Delete bucket with a given name

    bucketName

    bucket name

    returns

    Source of type Done as API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html

  14. def deleteObject(bucket: String, key: String, versionId: Option[String], s3Headers: S3Headers): Source[Done, NotUsed]

    Deletes an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    versionId

    optional version id of the object

    s3Headers

    any headers you want to add

    returns

    A Source that will emit pekko.Done when operation is completed

  15. def deleteObject(bucket: String, key: String, versionId: Option[String] = None): Source[Done, NotUsed]

    Deletes an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    versionId

    optional version id of the object

    returns

    A Source that will emit pekko.Done when operation is completed
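
    Example (a sketch with placeholder bucket and key names, implicit ActorSystem assumed):

      import org.apache.pekko.Done
      import org.apache.pekko.actor.ActorSystem
      import org.apache.pekko.stream.connectors.s3.scaladsl.S3
      import org.apache.pekko.stream.scaladsl.Sink
      import scala.concurrent.Future

      implicit val system: ActorSystem = ActorSystem()

      val deleted: Future[Done] = S3.deleteObject("my-bucket", "my-key").runWith(Sink.head)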

  16. def deleteObjectsByPrefix(bucket: String, prefix: Option[String], deleteAllVersions: Boolean, s3Headers: S3Headers): Source[Done, NotUsed]

    Deletes all S3 Objects with the given prefix

    bucket

    the s3 bucket name

    prefix

    optional s3 objects prefix

    deleteAllVersions

    Whether to delete all object versions as well (applies to versioned buckets)

    s3Headers

    any headers you want to add

    returns

    A Source that will emit pekko.Done when operation is completed

  17. def deleteObjectsByPrefix(bucket: String, prefix: Option[String], s3Headers: S3Headers): Source[Done, NotUsed]

    Deletes all S3 Objects with the given prefix

    bucket

    the s3 bucket name

    prefix

    optional s3 objects prefix

    s3Headers

    any headers you want to add

    returns

    A Source that will emit pekko.Done when operation is completed

  18. def deleteObjectsByPrefix(bucket: String, prefix: Option[String], deleteAllVersions: Boolean): Source[Done, NotUsed]

    Deletes all S3 Objects with the given prefix

    bucket

    the s3 bucket name

    prefix

    optional s3 objects prefix

    deleteAllVersions

    Whether to delete all object versions as well (applies to versioned buckets)

    returns

    A Source that will emit pekko.Done when operation is completed

  19. def deleteObjectsByPrefix(bucket: String, prefix: Option[String]): Source[Done, NotUsed]

    Deletes all S3 Objects with the given prefix

    bucket

    the s3 bucket name

    prefix

    optional s3 objects prefix

    returns

    A Source that will emit pekko.Done when operation is completed
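
    Example (a sketch; the prefix is a placeholder):

      import org.apache.pekko.Done
      import org.apache.pekko.actor.ActorSystem
      import org.apache.pekko.stream.connectors.s3.scaladsl.S3
      import org.apache.pekko.stream.scaladsl.Sink
      import scala.concurrent.Future

      implicit val system: ActorSystem = ActorSystem()

      val deleted: Future[Done] =
        S3.deleteObjectsByPrefix("my-bucket", prefix = Some("logs/2024/")).runWith(Sink.ignore)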

  20. def deleteUpload(bucketName: String, key: String, uploadId: String, s3Headers: S3Headers)(implicit system: ClassicActorSystemProvider, attributes: Attributes): Future[Done]

    Delete all existing parts for a specific upload

    bucketName

    Which bucket the upload is inside

    key

    The key for the upload

    uploadId

    Unique identifier of the upload

    s3Headers

    any headers you want to add

    returns

    Future of type Done as API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html

  21. def deleteUpload(bucketName: String, key: String, uploadId: String)(implicit system: ClassicActorSystemProvider, attributes: Attributes = Attributes()): Future[Done]

    Delete all existing parts for a specific upload id

    bucketName

    Which bucket the upload is inside

    key

    The key for the upload

    uploadId

    Unique identifier of the upload

    returns

    Future of type Done as API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html
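
    Example (a sketch; the upload id would come from the initiated multipart upload):

      import org.apache.pekko.Done
      import org.apache.pekko.actor.ActorSystem
      import org.apache.pekko.stream.connectors.s3.scaladsl.S3
      import scala.concurrent.Future

      implicit val system: ActorSystem = ActorSystem()

      val aborted: Future[Done] = S3.deleteUpload("my-bucket", "my-key", "upload-id")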

  22. def deleteUploadSource(bucketName: String, key: String, uploadId: String, s3Headers: S3Headers): Source[Done, NotUsed]

    Delete all existing parts for a specific upload

    bucketName

    Which bucket the upload is inside

    key

    The key for the upload

    uploadId

    Unique identifier of the upload

    s3Headers

    any headers you want to add

    returns

    Source of type Done as API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html

  23. def deleteUploadSource(bucketName: String, key: String, uploadId: String): Source[Done, NotUsed]

    Delete all existing parts for a specific upload

    bucketName

    Which bucket the upload is inside

    key

    The key for the upload

    uploadId

    Unique identifier of the upload

    returns

    Source of type Done as API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html

  24. def getBucketVersioning(bucketName: String, s3Headers: S3Headers)(implicit system: ClassicActorSystemProvider, attributes: Attributes): Future[BucketVersioningResult]

    Gets the versioning of an existing bucket

    bucketName

    Bucket name

    s3Headers

    any headers you want to add

    returns

    Future of type BucketVersioningResult

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketVersioning.html

  25. def getBucketVersioning(bucketName: String)(implicit system: ClassicActorSystemProvider, attributes: Attributes = Attributes()): Future[BucketVersioningResult]

    Gets the versioning of an existing bucket

    bucketName

    Bucket name

    returns

    Future of type BucketVersioningResult

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketVersioning.html
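
    Example (a sketch with a placeholder bucket name):

      import org.apache.pekko.actor.ActorSystem
      import org.apache.pekko.stream.connectors.s3.BucketVersioningResult
      import org.apache.pekko.stream.connectors.s3.scaladsl.S3
      import scala.concurrent.Future

      implicit val system: ActorSystem = ActorSystem()

      val versioning: Future[BucketVersioningResult] = S3.getBucketVersioning("my-bucket")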

  26. def getBucketVersioningSource(bucketName: String, s3Headers: S3Headers): Source[BucketVersioningResult, NotUsed]

    Gets the versioning of an existing bucket

    bucketName

    Bucket name

    s3Headers

    any headers you want to add

    returns

    Source of type BucketVersioningResult

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketVersioning.html

  27. def getBucketVersioningSource(bucketName: String): Source[BucketVersioningResult, NotUsed]

    Gets the versioning of an existing bucket

    bucketName

    Bucket name

    returns

    Source of type BucketVersioningResult

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketVersioning.html

  28. def getObject(bucket: String, key: String, range: Option[ByteRange], versionId: Option[String], s3Headers: S3Headers): Source[ByteString, Future[ObjectMetadata]]

    Gets an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    range

    [optional] the ByteRange you want to download

    versionId

    optional version id of the object

    s3Headers

    any headers you want to add

    returns

    A pekko.stream.scaladsl.Source containing the object's data as a pekko.util.ByteString, along with a materialized value containing the pekko.stream.connectors.s3.ObjectMetadata

  29. def getObject(bucket: String, key: String, range: Option[ByteRange] = None, versionId: Option[String] = None, sse: Option[ServerSideEncryption] = None): Source[ByteString, Future[ObjectMetadata]]

    Gets an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    range

    [optional] the ByteRange you want to download

    versionId

    optional version id of the object

    sse

    [optional] the server side encryption used on upload

    returns

    A pekko.stream.scaladsl.Source containing the object's data as a pekko.util.ByteString, along with a materialized value containing the pekko.stream.connectors.s3.ObjectMetadata
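
    Example (a sketch; materializes both the metadata Future and the folded object bytes):

      import org.apache.pekko.actor.ActorSystem
      import org.apache.pekko.stream.connectors.s3.scaladsl.S3
      import org.apache.pekko.stream.scaladsl.{ Keep, Sink }
      import org.apache.pekko.util.ByteString

      implicit val system: ActorSystem = ActorSystem()

      // metadata: Future[ObjectMetadata], bytes: Future[ByteString]
      val (metadata, bytes) =
        S3.getObject("my-bucket", "my-key")
          .toMat(Sink.fold(ByteString.empty)(_ ++ _))(Keep.both)
          .run()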

  30. def getObjectMetadata(bucket: String, key: String, versionId: Option[String], s3Headers: S3Headers): Source[Option[ObjectMetadata], NotUsed]

    Gets the metadata for an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    versionId

    optional version id of the object

    s3Headers

    any headers you want to add

    returns

    A Source containing a scala.Option that will be scala.None in case the object does not exist

  31. def getObjectMetadata(bucket: String, key: String, versionId: Option[String] = None, sse: Option[ServerSideEncryption] = None): Source[Option[ObjectMetadata], NotUsed]

    Gets the metadata for an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    versionId

    optional version id of the object

    sse

    the server side encryption to use

    returns

    A Source containing a scala.Option that will be scala.None in case the object does not exist
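
    Example (a sketch; the materialized Future completes with None if the object is missing):

      import org.apache.pekko.actor.ActorSystem
      import org.apache.pekko.stream.connectors.s3.ObjectMetadata
      import org.apache.pekko.stream.connectors.s3.scaladsl.S3
      import org.apache.pekko.stream.scaladsl.Sink
      import scala.concurrent.Future

      implicit val system: ActorSystem = ActorSystem()

      val metadata: Future[Option[ObjectMetadata]] =
        S3.getObjectMetadata("my-bucket", "my-key").runWith(Sink.head)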

  32. def listBucket(bucket: String, delimiter: String, prefix: Option[String] = None, s3Headers: S3Headers = S3Headers.empty): Source[ListBucketResultContents, NotUsed]

    Will return a source of object metadata for a given bucket and delimiter with optional prefix using version 2 of the List Bucket API. This will automatically page through all keys with the given parameters.

    The pekko.connectors.s3.list-bucket-api-version can be set to 1 to use the older API version 1

    bucket

    Which bucket you list object metadata for

    delimiter

    Delimiter to use for listing only one level of hierarchy

    prefix

    Prefix of the keys you want to list under the passed bucket

    s3Headers

    any headers you want to add

    returns

    Source of ListBucketResultContents

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html (version 2 API)

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html (version 1 API)

  33. def listBucket(bucket: String, prefix: Option[String], s3Headers: S3Headers): Source[ListBucketResultContents, NotUsed]

    Will return a source of object metadata for a given bucket with optional prefix using version 2 of the List Bucket API. This will automatically page through all keys with the given parameters.

    The pekko.connectors.s3.list-bucket-api-version can be set to 1 to use the older API version 1

    bucket

    Which bucket you list object metadata for

    prefix

    Prefix of the keys you want to list under the passed bucket

    s3Headers

    any headers you want to add

    returns

    Source of ListBucketResultContents

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html (version 2 API)

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html (version 1 API)

  34. def listBucket(bucket: String, prefix: Option[String]): Source[ListBucketResultContents, NotUsed]

    Will return a source of object metadata for a given bucket with optional prefix using version 2 of the List Bucket API. This will automatically page through all keys with the given parameters.

    The pekko.connectors.s3.list-bucket-api-version can be set to 1 to use the older API version 1

    bucket

    Which bucket you list object metadata for

    prefix

    Prefix of the keys you want to list under the passed bucket

    returns

    Source of ListBucketResultContents

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html (version 2 API)

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html (version 1 API)
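
    Example (a sketch; collects all keys under a placeholder prefix):

      import org.apache.pekko.actor.ActorSystem
      import org.apache.pekko.stream.connectors.s3.ListBucketResultContents
      import org.apache.pekko.stream.connectors.s3.scaladsl.S3
      import org.apache.pekko.stream.scaladsl.Sink
      import scala.concurrent.Future

      implicit val system: ActorSystem = ActorSystem()

      // Pages through all matching keys automatically
      val contents: Future[Seq[ListBucketResultContents]] =
        S3.listBucket("my-bucket", prefix = Some("logs/")).runWith(Sink.seq)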

  35. def listBucketAndCommonPrefixes(bucket: String, delimiter: String, prefix: Option[String] = None, s3Headers: S3Headers = S3Headers.empty): Source[(Seq[ListBucketResultContents], Seq[ListBucketResultCommonPrefixes]), NotUsed]

    Will return a source of object metadata and common prefixes for a given bucket and delimiter with optional prefix using version 2 of the List Bucket API. This will automatically page through all keys with the given parameters.

    The pekko.connectors.s3.list-bucket-api-version can be set to 1 to use the older API version 1

    bucket

    Which bucket you list object metadata for

    delimiter

    Delimiter to use for listing only one level of hierarchy

    prefix

    Prefix of the keys you want to list under the passed bucket

    s3Headers

    any headers you want to add

    returns

    Source of (Seq of ListBucketResultContents, Seq of ListBucketResultCommonPrefixes)

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html (version 2 API)

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html (version 1 API)

    https://docs.aws.amazon.com/AmazonS3/latest/dev/ListingKeysHierarchy.html (prefix and delimiter documentation)

  36. def listBuckets(s3Headers: S3Headers): Source[ListBucketsResultContents, NotUsed]

    Will return a list containing all of the buckets for the current AWS account

    s3Headers

    any headers you want to add

    returns

    Source of ListBucketsResultContents

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBuckets.html

  37. def listBuckets(): Source[ListBucketsResultContents, NotUsed]

    Will return a list containing all of the buckets for the current AWS account

    returns

    Source of ListBucketsResultContents

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBuckets.html
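
    Example (a sketch; lists the account's buckets using the ambient S3 settings):

      import org.apache.pekko.actor.ActorSystem
      import org.apache.pekko.stream.connectors.s3.ListBucketsResultContents
      import org.apache.pekko.stream.connectors.s3.scaladsl.S3
      import org.apache.pekko.stream.scaladsl.Sink
      import scala.concurrent.Future

      implicit val system: ActorSystem = ActorSystem()

      val buckets: Future[Seq[ListBucketsResultContents]] = S3.listBuckets().runWith(Sink.seq)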

  38. def listMultipartUpload(bucket: String, prefix: Option[String], s3Headers: S3Headers): Source[ListMultipartUploadResultUploads, NotUsed]

    Will return in progress or aborted multipart uploads. This will automatically page through all keys with the given parameters.

    bucket

    Which bucket you list in-progress multipart uploads for

    prefix

    Prefix of the keys you want to list under the passed bucket

    s3Headers

    any headers you want to add

    returns

    Source of ListMultipartUploadResultUploads

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html

  39. def listMultipartUpload(bucket: String, prefix: Option[String]): Source[ListMultipartUploadResultUploads, NotUsed]

    Will return in progress or aborted multipart uploads. This will automatically page through all keys with the given parameters.

    bucket

    Which bucket you list in-progress multipart uploads for

    prefix

    Prefix of the keys you want to list under the passed bucket

    returns

    Source of ListMultipartUploadResultUploads

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html

  40. def listMultipartUploadAndCommonPrefixes(bucket: String, delimiter: String, prefix: Option[String] = None, s3Headers: S3Headers = S3Headers.empty): Source[(Seq[ListMultipartUploadResultUploads], Seq[CommonPrefixes]), NotUsed]

    Will return in progress or aborted multipart uploads with optional prefix and delimiter. This will automatically page through all keys with the given parameters.

    bucket

    Which bucket you list in-progress multipart uploads for

    delimiter

    Delimiter to use for listing only one level of hierarchy

    prefix

    Prefix of the keys you want to list under the passed bucket

    s3Headers

    any headers you want to add

    returns

    Source of (Seq of ListMultipartUploadResultUploads, Seq of CommonPrefixes)

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html

  41. def listObjectVersions(bucket: String, delimiter: String, prefix: Option[String], s3Headers: S3Headers): Source[(Seq[ListObjectVersionsResultVersions], Seq[DeleteMarkers]), NotUsed]

    List all versioned objects for a bucket with optional prefix and delimiter. This will automatically page through all keys with the given parameters.

    bucket

    Which bucket you list object versions for

    delimiter

    Delimiter to use for listing only one level of hierarchy

    prefix

    Prefix of the keys you want to list under the passed bucket

    s3Headers

    any headers you want to add

    returns

    Source of (Seq of ListObjectVersionsResultVersions, Seq of DeleteMarkers)

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html

  42. def listObjectVersions(bucket: String, prefix: Option[String], s3Headers: S3Headers): Source[(Seq[ListObjectVersionsResultVersions], Seq[DeleteMarkers]), NotUsed]

    List all versioned objects for a bucket with optional prefix. This will automatically page through all keys with the given parameters.

    bucket

    Which bucket you list object versions for

    prefix

    Prefix of the keys you want to list under the passed bucket

    s3Headers

    any headers you want to add

    returns

    Source of (Seq of ListObjectVersionsResultVersions, Seq of DeleteMarkers)

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html

  43. def listObjectVersions(bucket: String, prefix: Option[String]): Source[(Seq[ListObjectVersionsResultVersions], Seq[DeleteMarkers]), NotUsed]

    List all versioned objects for a bucket with optional prefix. This will automatically page through all keys with the given parameters.

    bucket

    Which bucket you list object versions for

    prefix

    Prefix of the keys you want to list under the passed bucket

    returns

    Source of (Seq of ListObjectVersionsResultVersions, Seq of DeleteMarkers)

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html

  44. def listObjectVersionsAndCommonPrefixes(bucket: String, delimiter: String, prefix: Option[String], s3Headers: S3Headers): Source[(Seq[ListObjectVersionsResultVersions], Seq[DeleteMarkers], Seq[CommonPrefixes]), NotUsed]

    List all versioned objects for a bucket with optional prefix and delimiter. This will automatically page through all keys with the given parameters.

    bucket

    Which bucket you list object versions for

    delimiter

    Delimiter to use for listing only one level of hierarchy

    prefix

    Prefix of the keys you want to list under the passed bucket

    s3Headers

    any headers you want to add

    returns

    Source of (Seq of ListObjectVersionsResultVersions, Seq of DeleteMarkers, Seq of CommonPrefixes)

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html

  45. def listParts(bucket: String, key: String, uploadId: String, s3Headers: S3Headers): Source[ListPartsResultParts, NotUsed]

    List uploaded parts for a specific upload. This will automatically page through all keys with the given parameters.

    bucket

    Under which bucket the upload parts are contained

    key

    The key where the parts were uploaded

    uploadId

    Unique identifier of the upload for which you want to list the uploaded parts

    s3Headers

    any headers you want to add

    returns

    Source of ListPartsResultParts

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html

  46. def listParts(bucket: String, key: String, uploadId: String): Source[ListPartsResultParts, NotUsed]

    List uploaded parts for a specific upload. This will automatically page through all keys with the given parameters.

    bucket

    Under which bucket the upload parts are contained

    key

    The key where the parts were uploaded

    uploadId

    Unique identifier of the upload for which you want to list the uploaded parts

    returns

    Source of ListPartsResultParts

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html
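
    Example (a sketch; the upload id is a placeholder):

      import org.apache.pekko.actor.ActorSystem
      import org.apache.pekko.stream.connectors.s3.ListPartsResultParts
      import org.apache.pekko.stream.connectors.s3.scaladsl.S3
      import org.apache.pekko.stream.scaladsl.Sink
      import scala.concurrent.Future

      implicit val system: ActorSystem = ActorSystem()

      // Each element carries the part number and ETag of an uploaded part
      val parts: Future[Seq[ListPartsResultParts]] =
        S3.listParts("my-bucket", "my-key", "upload-id").runWith(Sink.seq)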

  47. def makeBucket(bucketName: String, s3Headers: S3Headers)(implicit system: ClassicActorSystemProvider, attr: Attributes): Future[Done]

    Create new bucket with a given name

    bucketName

    bucket name

    s3Headers

    any headers you want to add

    returns

    Future of type Done as API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html

  48. def makeBucket(bucketName: String)(implicit system: ClassicActorSystemProvider, attr: Attributes = Attributes()): Future[Done]

    Create new bucket with a given name

    bucketName

    bucket name

    returns

    Future of type Done as API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html
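
    Example (a sketch with a placeholder bucket name):

      import org.apache.pekko.Done
      import org.apache.pekko.actor.ActorSystem
      import org.apache.pekko.stream.connectors.s3.scaladsl.S3
      import scala.concurrent.Future

      implicit val system: ActorSystem = ActorSystem()

      val created: Future[Done] = S3.makeBucket("my-new-bucket")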

  49. def makeBucketSource(bucketName: String, s3Headers: S3Headers): Source[Done, NotUsed]

    Create new bucket with a given name

    bucketName

    bucket name

    s3Headers

    any headers you want to add

    returns

    Source of type Done as API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html

  50. def makeBucketSource(bucketName: String): Source[Done, NotUsed]

    Create new bucket with a given name

    bucketName

    bucket name

    returns

    Source of type Done as API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html

  51. def multipartCopy(sourceBucket: String, sourceKey: String, targetBucket: String, targetKey: String, sourceVersionId: Option[String] = None, contentType: ContentType = ContentTypes.`application/octet-stream`, s3Headers: S3Headers = S3Headers.empty, chunkSize: Int = MinChunkSize, chunkingParallelism: Int = 4): RunnableGraph[Future[MultipartUploadResult]]

    Copy an S3 object from a source bucket to a target bucket using multipart copy upload.

    sourceBucket

    source s3 bucket name

    sourceKey

    source s3 key

    targetBucket

    target s3 bucket name

    targetKey

    target s3 key

    sourceVersionId

    optional version id of the source object, if versioning is enabled on the source bucket

    contentType

    an optional ContentType

    s3Headers

    any headers you want to add

    chunkSize

    the size of the requests sent to S3, minimum MinChunkSize

    chunkingParallelism

    the number of parallel requests used for the upload, defaults to 4

    returns

    a runnable graph which upon materialization will return a Future containing the results of the copy operation.
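
    Example (a sketch; bucket and key names are placeholders):

      import org.apache.pekko.actor.ActorSystem
      import org.apache.pekko.stream.connectors.s3.MultipartUploadResult
      import org.apache.pekko.stream.connectors.s3.scaladsl.S3
      import scala.concurrent.Future

      implicit val system: ActorSystem = ActorSystem()

      // The copy only starts when the graph is materialized with run()
      val copied: Future[MultipartUploadResult] =
        S3.multipartCopy("source-bucket", "source-key", "target-bucket", "target-key").run()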

  52. def multipartUpload(bucket: String, key: String, contentType: ContentType = ContentTypes.`application/octet-stream`, metaHeaders: MetaHeaders = MetaHeaders(Map()), cannedAcl: CannedAcl = CannedAcl.Private, chunkSize: Int = MinChunkSize, chunkingParallelism: Int = 4, sse: Option[ServerSideEncryption] = None): Sink[ByteString, Future[MultipartUploadResult]]

    Uploads an S3 Object by making multiple requests

    bucket

    the s3 bucket name

    key

    the s3 object key

    contentType

    an optional ContentType

    metaHeaders

    any meta-headers you want to add

    cannedAcl

    a CannedAcl, defaults to CannedAcl.Private

    chunkSize

    the size of the requests sent to S3, minimum MinChunkSize

    chunkingParallelism

    the number of parallel requests used for the upload, defaults to 4

    returns

    a Sink that accepts ByteStrings and materializes to a Future of MultipartUploadResult
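
    Example (a sketch; streams a single ByteString into the upload sink):

      import org.apache.pekko.actor.ActorSystem
      import org.apache.pekko.stream.connectors.s3.MultipartUploadResult
      import org.apache.pekko.stream.connectors.s3.scaladsl.S3
      import org.apache.pekko.stream.scaladsl.Source
      import org.apache.pekko.util.ByteString
      import scala.concurrent.Future

      implicit val system: ActorSystem = ActorSystem()

      val uploaded: Future[MultipartUploadResult] =
        Source.single(ByteString("hello world"))
          .runWith(S3.multipartUpload("my-bucket", "my-key"))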

  53. def multipartUploadWithContext[C](bucket: String, key: String, chunkUploadSink: Sink[(UploadPartResponse, Iterable[C]), _], contentType: ContentType = ContentTypes.`application/octet-stream`, metaHeaders: MetaHeaders = MetaHeaders(Map()), cannedAcl: CannedAcl = CannedAcl.Private, chunkSize: Int = MinChunkSize, chunkingParallelism: Int = 4, sse: Option[ServerSideEncryption] = None): Sink[(ByteString, C), Future[MultipartUploadResult]]

    Uploads an S3 Object by making multiple requests. Unlike multipartUpload, this version allows you to pass in a context (typically from a SourceWithContext/FlowWithContext) along with a chunkUploadSink that defines how to act whenever a chunk is uploaded.

    Note that this version of multipart upload ignores buffering

    C

    The Context that is passed along with the ByteString

    bucket

    the s3 bucket name

    key

    the s3 object key

    chunkUploadSink

    A sink that acts as a callback, executed whenever an entire chunk has been uploaded to S3 (successfully or unsuccessfully). Since each chunk can contain more than one emitted element from the original flow/source, you are provided with the list of contexts. The internal implementation uses Flow.alsoTo for chunkUploadSink, which means that backpressure is applied to the upload stream if chunkUploadSink is too slow; likewise, any failure is also propagated to the upload stream. Sink materialization is also shared with the returned Sink.

    contentType

    an optional ContentType

    metaHeaders

    any meta-headers you want to add

    cannedAcl

    a CannedAcl, defaults to CannedAcl.Private

    chunkSize

    the size of the requests sent to S3, minimum MinChunkSize

    chunkingParallelism

    the number of parallel requests used for the upload, defaults to 4

    returns

    a Sink that accepts (ByteString, C) pairs and materializes to a Future of MultipartUploadResult

  54. def multipartUploadWithHeaders(bucket: String, key: String, contentType: ContentType = ContentTypes.`application/octet-stream`, chunkSize: Int = MinChunkSize, chunkingParallelism: Int = 4, s3Headers: S3Headers = S3Headers.empty): Sink[ByteString, Future[MultipartUploadResult]]

    Uploads an S3 Object by making multiple requests

    bucket

    the s3 bucket name

    key

    the s3 object key

    contentType

    an optional ContentType

    chunkSize

    the size of the requests sent to S3, minimum MinChunkSize

    chunkingParallelism

    the number of parallel requests used for the upload, defaults to 4

    s3Headers

    any headers you want to add

    returns

    a Sink that accepts ByteStrings and materializes to a Future of MultipartUploadResult

  55. def multipartUploadWithHeadersAndContext[C](bucket: String, key: String, chunkUploadSink: Sink[(UploadPartResponse, Iterable[C]), _], contentType: ContentType = ContentTypes.`application/octet-stream`, chunkSize: Int = MinChunkSize, chunkingParallelism: Int = 4, s3Headers: S3Headers = S3Headers.empty): Sink[(ByteString, C), Future[MultipartUploadResult]]

    Uploads an S3 Object by making multiple requests. Unlike multipartUploadWithHeaders, this version allows you to pass in a context (typically from a SourceWithContext/FlowWithContext) along with a chunkUploadSink that defines how to act whenever a chunk is uploaded.

    Note that this version of multipart upload ignores buffering

    C

    The Context that is passed along with the ByteString

    bucket

    the s3 bucket name

    key

    the s3 object key

    chunkUploadSink

    A sink that acts as a callback, executed whenever an entire chunk has been uploaded to S3 (successfully or unsuccessfully). Since each chunk can contain more than one emitted element from the original flow/source, you are provided with the list of contexts. The internal implementation uses Flow.alsoTo for chunkUploadSink, which means that backpressure is applied to the upload stream if chunkUploadSink is too slow; likewise, any failure is also propagated to the upload stream. Sink materialization is also shared with the returned Sink.

    contentType

    an optional ContentType

    chunkSize

    the size of the requests sent to S3, minimum MinChunkSize

    chunkingParallelism

    the number of parallel requests used for the upload, defaults to 4

    s3Headers

    any headers you want to add

    returns

    a Sink that accepts (ByteString, C) pairs and materializes to a Future of MultipartUploadResult

  56. def putBucketVersioning(bucketName: String, bucketVersioning: BucketVersioning, s3Headers: S3Headers)(implicit system: ClassicActorSystemProvider, attributes: Attributes): Future[Done]

    Sets the versioning state of an existing bucket.

    bucketName

    Bucket name

    bucketVersioning

    The state that you want to update

    s3Headers

    any headers you want to add

    returns

    Future of type Done as API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketVersioning.html

  57. def putBucketVersioning(bucketName: String, bucketVersioning: BucketVersioning)(implicit system: ClassicActorSystemProvider, attributes: Attributes = Attributes()): Future[Done]

    Sets the versioning state of an existing bucket.

    bucketName

    Bucket name

    bucketVersioning

    The state that you want to update

    returns

    Future of type Done as API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketVersioning.html
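
    Example (a sketch; assumes BucketVersioning's builder-style API with BucketVersioningStatus.Enabled):

      import org.apache.pekko.Done
      import org.apache.pekko.actor.ActorSystem
      import org.apache.pekko.stream.connectors.s3.{ BucketVersioning, BucketVersioningStatus }
      import org.apache.pekko.stream.connectors.s3.scaladsl.S3
      import scala.concurrent.Future

      implicit val system: ActorSystem = ActorSystem()

      val enabled: Future[Done] =
        S3.putBucketVersioning(
          "my-bucket",
          BucketVersioning().withStatus(BucketVersioningStatus.Enabled))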

  58. def putBucketVersioningSource(bucketName: String, bucketVersioning: BucketVersioning, s3Headers: S3Headers): Source[Done, NotUsed]

    Sets the versioning state of an existing bucket.

    bucketName

    Bucket name

    bucketVersioning

    The state that you want to update

    s3Headers

    any headers you want to add

    returns

    Source of type Done as API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketVersioning.html

  59. def putBucketVersioningSource(bucketName: String, bucketVersioning: BucketVersioning): Source[Done, NotUsed]

    Sets the versioning state of an existing bucket.

    bucketName

    Bucket name

    bucketVersioning

    The state that you want to update

    returns

    Source of type Done as API doesn't return any additional information

    See also

    https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketVersioning.html

  60. def putObject(bucket: String, key: String, data: Source[ByteString, _], contentLength: Long, contentType: ContentType = ContentTypes.`application/octet-stream`, s3Headers: S3Headers): Source[ObjectMetadata, NotUsed]

    Uploads an S3 Object; use this for small files and multipartUpload for larger ones

    bucket

    the s3 bucket name

    key

    the s3 object key

    data

    a Stream of ByteString

    contentLength

    the number of bytes that will be uploaded (required!)

    contentType

    an optional ContentType

    s3Headers

    any headers you want to add

    returns

    a Source containing the ObjectMetadata of the uploaded S3 Object
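
    Example (a sketch; contentLength must match the streamed data exactly):

      import org.apache.pekko.actor.ActorSystem
      import org.apache.pekko.stream.connectors.s3.{ ObjectMetadata, S3Headers }
      import org.apache.pekko.stream.connectors.s3.scaladsl.S3
      import org.apache.pekko.stream.scaladsl.{ Sink, Source }
      import org.apache.pekko.util.ByteString
      import scala.concurrent.Future

      implicit val system: ActorSystem = ActorSystem()

      val bytes = ByteString("hello world")

      val metadata: Future[ObjectMetadata] =
        S3.putObject("my-bucket", "my-key", Source.single(bytes), bytes.length,
          s3Headers = S3Headers.empty)
          .runWith(Sink.head)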

  61. def request(bucket: String, key: String, method: HttpMethod = HttpMethods.GET, versionId: Option[String] = None, s3Headers: S3Headers = S3Headers.empty): Source[HttpResponse, NotUsed]

    Use this for low-level access to S3.

    bucket

    the s3 bucket name

    key

    the s3 object key

    method

    the HttpMethod to use when making the request

    versionId

    optional version id of the object

    s3Headers

    any headers you want to add

    returns

    a raw HTTP response from S3
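
    Example (a sketch; issues a raw HEAD request and leaves status/header handling to the caller):

      import org.apache.pekko.actor.ActorSystem
      import org.apache.pekko.http.scaladsl.model.{ HttpMethods, HttpResponse }
      import org.apache.pekko.stream.connectors.s3.scaladsl.S3
      import org.apache.pekko.stream.scaladsl.Sink
      import scala.concurrent.Future

      implicit val system: ActorSystem = ActorSystem()

      val response: Future[HttpResponse] =
        S3.request("my-bucket", "my-key", method = HttpMethods.HEAD).runWith(Sink.head)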

  62. def resumeMultipartUpload(bucket: String, key: String, uploadId: String, previousParts: Iterable[Part], contentType: ContentType = ContentTypes.`application/octet-stream`, metaHeaders: MetaHeaders = MetaHeaders(Map()), cannedAcl: CannedAcl = CannedAcl.Private, chunkSize: Int = MinChunkSize, chunkingParallelism: Int = 4, sse: Option[ServerSideEncryption] = None): Sink[ByteString, Future[MultipartUploadResult]]

    Resumes from a previously aborted multipart upload by providing the uploadId and previous upload part identifiers

    bucket

    the s3 bucket name

    key

    the s3 object key

    uploadId

    the upload that you want to resume

    previousParts

    The previously uploaded parts ending just before when this upload will commence

    contentType

    an optional ContentType

    metaHeaders

    any meta-headers you want to add

    cannedAcl

    a CannedAcl, defaults to CannedAcl.Private

    chunkSize

    the size of the requests sent to S3, minimum MinChunkSize

    chunkingParallelism

    the number of parallel requests used for the upload, defaults to 4

    returns

    a Sink that accepts ByteStrings and materializes to a Future of MultipartUploadResult
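
    Example (a sketch; the previously uploaded parts and the upload id are placeholders):

      import org.apache.pekko.actor.ActorSystem
      import org.apache.pekko.stream.connectors.s3.{ MultipartUploadResult, Part }
      import org.apache.pekko.stream.connectors.s3.scaladsl.S3
      import org.apache.pekko.stream.scaladsl.Source
      import org.apache.pekko.util.ByteString
      import scala.concurrent.Future

      implicit val system: ActorSystem = ActorSystem()

      // Parts 1 and 2 were uploaded before the original upload was interrupted
      val previousParts = List(Part("etag-of-part-1", 1), Part("etag-of-part-2", 2))

      val resumed: Future[MultipartUploadResult] =
        Source.single(ByteString("remaining data"))
          .runWith(S3.resumeMultipartUpload("my-bucket", "my-key", "upload-id", previousParts))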

  63. def resumeMultipartUploadWithContext[C](bucket: String, key: String, uploadId: String, previousParts: Iterable[Part], chunkUploadSink: Sink[(UploadPartResponse, Iterable[C]), _], contentType: ContentType = ContentTypes.`application/octet-stream`, metaHeaders: MetaHeaders = MetaHeaders(Map()), cannedAcl: CannedAcl = CannedAcl.Private, chunkSize: Int = MinChunkSize, chunkingParallelism: Int = 4, sse: Option[ServerSideEncryption] = None): Sink[(ByteString, C), Future[MultipartUploadResult]]

    Resumes from a previously aborted multipart upload by providing the uploadId and previous upload part identifiers. Unlike resumeMultipartUpload, this version allows you to pass in a context (typically from a SourceWithContext/FlowWithContext) along with a chunkUploadSink that defines how to act whenever a chunk is uploaded.

    Note that this version of resuming multipart upload ignores buffering

    C

    The Context that is passed along with the ByteString

    bucket

    the s3 bucket name

    key

    the s3 object key

    uploadId

    the upload that you want to resume

    previousParts

    The previously uploaded parts ending just before when this upload will commence

    chunkUploadSink

    A sink that acts as a callback, executed whenever an entire chunk has been uploaded to S3 (successfully or unsuccessfully). Since each chunk can contain more than one emitted element from the original flow/source, you are provided with the list of contexts. The internal implementation uses Flow.alsoTo for chunkUploadSink, which means that backpressure is applied to the upload stream if chunkUploadSink is too slow; likewise, any failure is also propagated to the upload stream. Sink materialization is also shared with the returned Sink.

    contentType

    an optional ContentType

    metaHeaders

    any meta-headers you want to add

    cannedAcl

    a CannedAcl, defaults to CannedAcl.Private

    chunkSize

    the size of the requests sent to S3, minimum MinChunkSize

    chunkingParallelism

    the number of parallel requests used for the upload, defaults to 4

    returns

    a Sink that accepts (ByteString, C) pairs and materializes to a Future of MultipartUploadResult

  64. def resumeMultipartUploadWithHeaders(bucket: String, key: String, uploadId: String, previousParts: Iterable[Part], contentType: ContentType = ContentTypes.`application/octet-stream`, chunkSize: Int = MinChunkSize, chunkingParallelism: Int = 4, s3Headers: S3Headers = S3Headers.empty): Sink[ByteString, Future[MultipartUploadResult]]

    Resumes from a previously aborted multipart upload by providing the uploadId and previous upload part identifiers. Unlike resumeMultipartUpload, this version allows you to pass in custom S3Headers.

    bucket

    the s3 bucket name

    key

    the s3 object key

    uploadId

    the upload that you want to resume

    previousParts

    The previously uploaded parts ending just before when this upload will commence

    contentType

    an optional ContentType

    chunkSize

    the size of the requests sent to S3, minimum MinChunkSize

    chunkingParallelism

    the number of parallel requests used for the upload, defaults to 4

    s3Headers

    any headers you want to add

    returns

    a Sink that accepts ByteStrings and materializes to a Future of MultipartUploadResult

  65. def resumeMultipartUploadWithHeadersAndContext[C](bucket: String, key: String, uploadId: String, previousParts: Iterable[Part], chunkUploadSink: Sink[(UploadPartResponse, Iterable[C]), _], contentType: ContentType = ContentTypes.`application/octet-stream`, chunkSize: Int = MinChunkSize, chunkingParallelism: Int = 4, s3Headers: S3Headers = S3Headers.empty): Sink[(ByteString, C), Future[MultipartUploadResult]]

    Resumes from a previously aborted multipart upload by providing the uploadId and previous upload part identifiers. Unlike resumeMultipartUploadWithHeaders, this version allows you to pass in a context (typically from a SourceWithContext/FlowWithContext) along with a chunkUploadSink that defines how to act whenever a chunk is uploaded.

    Note that this version of resuming multipart upload ignores buffering

    C

    The Context that is passed along with the ByteString

    bucket

    the s3 bucket name

    key

    the s3 object key

    uploadId

    the upload that you want to resume

    previousParts

    The previously uploaded parts ending just before when this upload will commence

    chunkUploadSink

    A sink that acts as a callback, executed whenever an entire chunk has been uploaded to S3 (successfully or unsuccessfully). Since each chunk can contain more than one emitted element from the original flow/source, you are provided with the list of contexts. The internal implementation uses Flow.alsoTo for chunkUploadSink, which means that backpressure is applied to the upload stream if chunkUploadSink is too slow; likewise, any failure is also propagated to the upload stream. Sink materialization is also shared with the returned Sink.

    contentType

    an optional ContentType

    chunkSize

    the size of the requests sent to S3, minimum MinChunkSize

    chunkingParallelism

    the number of parallel requests used for the upload, defaults to 4

    s3Headers

    any headers you want to add

    returns

    a Sink that accepts (ByteString, C) pairs and materializes to a Future of MultipartUploadResult

Deprecated Value Members

  1. def download(bucket: String, key: String, range: Option[ByteRange], versionId: Option[String], s3Headers: S3Headers): Source[Option[(Source[ByteString, NotUsed], ObjectMetadata)], NotUsed]

    Downloads an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    range

    [optional] the ByteRange you want to download

    s3Headers

    any headers you want to add

    returns

    The source will emit an empty Option if the object cannot be found. Otherwise, the Option will contain a tuple of the object's data and metadata.

    Annotations
    @deprecated
    Deprecated

    (Since version 4.0.0) Use S3.getObject instead

  2. def download(bucket: String, key: String, range: Option[ByteRange] = None, versionId: Option[String] = None, sse: Option[ServerSideEncryption] = None): Source[Option[(Source[ByteString, NotUsed], ObjectMetadata)], NotUsed]

    Downloads an S3 Object

    bucket

    the s3 bucket name

    key

    the s3 object key

    range

    [optional] the ByteRange you want to download

    sse

    [optional] the server side encryption used on upload

    returns

    The source will emit an empty Option if the object cannot be found. Otherwise, the Option will contain a tuple of the object's data and metadata.

    Annotations
    @deprecated
    Deprecated

    (Since version 4.0.0) Use S3.getObject instead