object S3
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- val MinChunkSize: Int
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def checkIfBucketExists(bucketName: String, s3Headers: S3Headers)(implicit system: ClassicActorSystemProvider, attributes: Attributes): Future[BucketAccess]
Checks whether the bucket exists and the user has rights to perform the ListBucket operation
- bucketName
bucket name
- s3Headers
any headers you want to add
- returns
Future of type BucketAccess
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html
- def checkIfBucketExists(bucketName: String)(implicit system: ClassicActorSystemProvider, attributes: Attributes = Attributes()): Future[BucketAccess]
Checks whether the bucket exists and the user has rights to perform the ListBucket operation (usage sketch below)
- bucketName
bucket name
- returns
Future of type BucketAccess
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html
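A minimal usage sketch for the single-argument overload, assuming an implicit classic ActorSystem is in scope; the bucket name and actor-system name are illustrative placeholders, and the pattern match assumes the usual BucketAccess outcomes.

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.s3.BucketAccess
import org.apache.pekko.stream.connectors.s3.scaladsl.S3
import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem("s3-docs") // hypothetical system name
import system.dispatcher                                  // ExecutionContext for the Future callback

// The implicit system satisfies the ClassicActorSystemProvider parameter;
// attributes fall back to the default Attributes().
val access: Future[BucketAccess] = S3.checkIfBucketExists("my-bucket") // hypothetical bucket

access.foreach {
  case BucketAccess.AccessGranted => println("bucket exists and ListBucket is allowed")
  case BucketAccess.AccessDenied  => println("bucket exists but ListBucket is not allowed")
  case BucketAccess.NotExists     => println("bucket does not exist")
}
```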
- def checkIfBucketExistsSource(bucketName: String, s3Headers: S3Headers): Source[BucketAccess, NotUsed]
Checks whether the bucket exists and the user has rights to perform the ListBucket operation
- bucketName
bucket name
- s3Headers
any headers you want to add
- returns
Source of type BucketAccess
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html
- def checkIfBucketExistsSource(bucketName: String): Source[BucketAccess, NotUsed]
Checks whether the bucket exists and the user has rights to perform the ListBucket operation
- bucketName
bucket name
- returns
Source of type BucketAccess
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @native()
- def completeMultipartUpload(bucket: String, key: String, uploadId: String, parts: Iterable[Part], s3Headers: S3Headers)(implicit system: ClassicActorSystemProvider, attributes: Attributes): Future[MultipartUploadResult]
Complete a multipart upload with an already given list of parts
- bucket
the s3 bucket name
- key
the s3 object key
- uploadId
the upload that you want to complete
- parts
A list of all of the parts for the multipart upload
- s3Headers
any headers you want to add
- returns
Future of type MultipartUploadResult
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html
- def completeMultipartUpload(bucket: String, key: String, uploadId: String, parts: Iterable[Part])(implicit system: ClassicActorSystemProvider, attributes: Attributes = Attributes()): Future[MultipartUploadResult]
Complete a multipart upload with an already given list of parts (usage sketch below)
- bucket
the s3 bucket name
- key
the s3 object key
- uploadId
the upload that you want to complete
- parts
A list of all of the parts for the multipart upload
- returns
Future of type MultipartUploadResult
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html
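A sketch of completing an interrupted upload; it assumes the previously uploaded parts are collected via listParts and converted with the ListPartsResultParts.toPart accessor from the connector's model classes, and the bucket, key and uploadId values are placeholders.

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.s3.MultipartUploadResult
import org.apache.pekko.stream.connectors.s3.scaladsl.S3
import org.apache.pekko.stream.scaladsl.Sink
import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem("s3-docs") // hypothetical system name
import system.dispatcher

val bucket = "my-bucket"           // hypothetical
val key = "huge/upload.bin"        // hypothetical
val uploadId = "example-upload-id" // returned when the multipart upload was started

// Collect the already-uploaded parts, then complete the upload with them.
val completed: Future[MultipartUploadResult] =
  S3.listParts(bucket, key, uploadId)
    .runWith(Sink.seq)
    .flatMap(parts => S3.completeMultipartUpload(bucket, key, uploadId, parts.map(_.toPart)))
```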
- def deleteBucket(bucketName: String, s3Headers: S3Headers)(implicit system: ClassicActorSystemProvider, attributes: Attributes): Future[Done]
Delete bucket with a given name
- def deleteBucket(bucketName: String)(implicit system: ClassicActorSystemProvider, attributes: Attributes = Attributes()): Future[Done]
Delete bucket with a given name
- def deleteBucketContents(bucket: String, deleteAllVersions: Boolean): Source[Done, NotUsed]
Deletes all S3 Objects within the given bucket
- bucket
the s3 bucket name
- deleteAllVersions
Whether to delete all object versions as well (applies to versioned buckets)
- returns
A Source that will emit pekko.Done when operation is completed
- def deleteBucketContents(bucket: String): Source[Done, NotUsed]
Deletes all S3 Objects within the given bucket (usage sketch below)
- bucket
the s3 bucket name
- returns
A Source that will emit pekko.Done when operation is completed
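A brief sketch of emptying a bucket, versions included; the bucket name and actor-system name are placeholders.

```scala
import org.apache.pekko.Done
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.s3.scaladsl.S3
import org.apache.pekko.stream.scaladsl.Sink
import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem("s3-docs") // hypothetical system name

// Empty the bucket, including old object versions if the bucket is versioned.
val emptied: Future[Done] =
  S3.deleteBucketContents("my-bucket", deleteAllVersions = true) // hypothetical bucket
    .runWith(Sink.ignore)
```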
- def deleteBucketSource(bucketName: String, s3Headers: S3Headers): Source[Done, NotUsed]
Delete bucket with a given name
- def deleteBucketSource(bucketName: String): Source[Done, NotUsed]
Delete bucket with a given name
- def deleteObject(bucket: String, key: String, versionId: Option[String], s3Headers: S3Headers): Source[Done, NotUsed]
Deletes an S3 Object
- bucket
the s3 bucket name
- key
the s3 object key
- versionId
optional version id of the object
- s3Headers
any headers you want to add
- returns
A Source that will emit pekko.Done when operation is completed
- def deleteObject(bucket: String, key: String, versionId: Option[String] = None): Source[Done, NotUsed]
Deletes an S3 Object (usage sketch below)
- bucket
the s3 bucket name
- key
the s3 object key
- versionId
optional version id of the object
- returns
A Source that will emit pekko.Done when operation is completed
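A minimal sketch of deleting a single object; bucket and key are placeholders.

```scala
import org.apache.pekko.Done
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.s3.scaladsl.S3
import org.apache.pekko.stream.scaladsl.Sink
import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem("s3-docs") // hypothetical system name

// Deletes the latest version; pass Some(versionId) to target a specific version.
val deleted: Future[Done] =
  S3.deleteObject("my-bucket", "logs/2024-01-01.txt") // hypothetical bucket and key
    .runWith(Sink.ignore)
```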
- def deleteObjectsByPrefix(bucket: String, prefix: Option[String], deleteAllVersions: Boolean, s3Headers: S3Headers): Source[Done, NotUsed]
Deletes S3 Objects which contain the given prefix
- bucket
the s3 bucket name
- prefix
optional s3 objects prefix
- deleteAllVersions
Whether to delete all object versions as well (applies to versioned buckets)
- s3Headers
any headers you want to add
- returns
A Source that will emit pekko.Done when operation is completed
- def deleteObjectsByPrefix(bucket: String, prefix: Option[String], s3Headers: S3Headers): Source[Done, NotUsed]
Deletes S3 Objects which contain the given prefix
- bucket
the s3 bucket name
- prefix
optional s3 objects prefix
- s3Headers
any headers you want to add
- returns
A Source that will emit pekko.Done when operation is completed
- def deleteObjectsByPrefix(bucket: String, prefix: Option[String], deleteAllVersions: Boolean): Source[Done, NotUsed]
Deletes S3 Objects which contain the given prefix
- bucket
the s3 bucket name
- prefix
optional s3 objects prefix
- deleteAllVersions
Whether to delete all object versions as well (applies to versioned buckets)
- returns
A Source that will emit pekko.Done when operation is completed
- def deleteObjectsByPrefix(bucket: String, prefix: Option[String]): Source[Done, NotUsed]
Deletes S3 Objects which contain the given prefix (usage sketch below)
- bucket
the s3 bucket name
- prefix
optional s3 objects prefix
- returns
A Source that will emit pekko.Done when operation is completed
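A sketch of a prefix-based cleanup; the bucket and prefix are placeholders.

```scala
import org.apache.pekko.Done
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.s3.scaladsl.S3
import org.apache.pekko.stream.scaladsl.Sink
import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem("s3-docs") // hypothetical system name

// Remove every object whose key starts with "tmp/" in the given bucket.
val cleaned: Future[Done] =
  S3.deleteObjectsByPrefix("my-bucket", Some("tmp/")) // hypothetical bucket and prefix
    .runWith(Sink.ignore)
```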
- def deleteUpload(bucketName: String, key: String, uploadId: String, s3Headers: S3Headers)(implicit system: ClassicActorSystemProvider, attributes: Attributes): Future[Done]
Delete all existing parts for a specific upload
- bucketName
Which bucket the upload is inside
- key
The key for the upload
- uploadId
Unique identifier of the upload
- s3Headers
any headers you want to add
- returns
Future of type Done as API doesn't return any additional information
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html
- def deleteUpload(bucketName: String, key: String, uploadId: String)(implicit system: ClassicActorSystemProvider, attributes: Attributes = Attributes()): Future[Done]
Delete all existing parts for a specific upload id (usage sketch below)
- bucketName
Which bucket the upload is inside
- key
The key for the upload
- uploadId
Unique identifier of the upload
- returns
Future of type Done as API doesn't return any additional information
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html
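A sketch of aborting an in-progress multipart upload; bucket, key and upload id are placeholders.

```scala
import org.apache.pekko.Done
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.s3.scaladsl.S3
import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem("s3-docs") // hypothetical system name

// Abort the multipart upload and discard any parts uploaded so far.
val aborted: Future[Done] =
  S3.deleteUpload("my-bucket", "huge/upload.bin", "example-upload-id") // hypothetical values
```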
- def deleteUploadSource(bucketName: String, key: String, uploadId: String, s3Headers: S3Headers): Source[Done, NotUsed]
Delete all existing parts for a specific upload
- bucketName
Which bucket the upload is inside
- key
The key for the upload
- uploadId
Unique identifier of the upload
- s3Headers
any headers you want to add
- returns
Source of type Done as API doesn't return any additional information
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html
- def deleteUploadSource(bucketName: String, key: String, uploadId: String): Source[Done, NotUsed]
Delete all existing parts for a specific upload
- bucketName
Which bucket the upload is inside
- key
The key for the upload
- uploadId
Unique identifier of the upload
- returns
Source of type Done as API doesn't return any additional information
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
- def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable])
- def getBucketVersioning(bucketName: String, s3Headers: S3Headers)(implicit system: ClassicActorSystemProvider, attributes: Attributes): Future[BucketVersioningResult]
Gets the versioning of an existing bucket
- bucketName
Bucket name
- s3Headers
any headers you want to add
- returns
Future of type BucketVersioningResult
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketVersioning.html
- def getBucketVersioning(bucketName: String)(implicit system: ClassicActorSystemProvider, attributes: Attributes = Attributes()): Future[BucketVersioningResult]
Gets the versioning of an existing bucket (usage sketch below)
- bucketName
Bucket name
- returns
Future of type BucketVersioningResult
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketVersioning.html
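A minimal sketch of querying the versioning configuration; the bucket name is a placeholder and the result is only printed.

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.s3.BucketVersioningResult
import org.apache.pekko.stream.connectors.s3.scaladsl.S3
import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem("s3-docs") // hypothetical system name
import system.dispatcher

val versioning: Future[BucketVersioningResult] = S3.getBucketVersioning("my-bucket") // hypothetical bucket
versioning.foreach(result => println(s"versioning state: $result"))
```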
- def getBucketVersioningSource(bucketName: String, s3Headers: S3Headers): Source[BucketVersioningResult, NotUsed]
Gets the versioning of an existing bucket
- bucketName
Bucket name
- s3Headers
any headers you want to add
- returns
Source of type BucketVersioningResult
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketVersioning.html
- def getBucketVersioningSource(bucketName: String): Source[BucketVersioningResult, NotUsed]
Gets the versioning of an existing bucket
- bucketName
Bucket name
- returns
Source of type BucketVersioningResult
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketVersioning.html
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
- def getObject(bucket: String, key: String, range: Option[ByteRange], versionId: Option[String], s3Headers: S3Headers): Source[ByteString, Future[ObjectMetadata]]
Gets an S3 Object
- bucket
the s3 bucket name
- key
the s3 object key
- range
[optional] the ByteRange you want to download
- s3Headers
any headers you want to add
- returns
A pekko.stream.scaladsl.Source containing the object's data as a pekko.util.ByteString along with a materialized value containing the pekko.stream.connectors.s3.ObjectMetadata
- def getObject(bucket: String, key: String, range: Option[ByteRange] = None, versionId: Option[String] = None, sse: Option[ServerSideEncryption] = None): Source[ByteString, Future[ObjectMetadata]]
Gets an S3 Object (usage sketch below)
- bucket
the s3 bucket name
- key
the s3 object key
- range
[optional] the ByteRange you want to download
- sse
[optional] the server side encryption used on upload
- returns
A pekko.stream.scaladsl.Source containing the object's data as a pekko.util.ByteString along with a materialized value containing the pekko.stream.connectors.s3.ObjectMetadata
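A sketch of downloading an object to disk while keeping both materialized values; the bucket, key and local path are placeholders.

```scala
import java.nio.file.Paths
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.IOResult
import org.apache.pekko.stream.connectors.s3.ObjectMetadata
import org.apache.pekko.stream.connectors.s3.scaladsl.S3
import org.apache.pekko.stream.scaladsl.{ FileIO, Keep }
import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem("s3-docs") // hypothetical system name

// Stream the object to disk, keeping the metadata (materialized by the source)
// and the file IO result (materialized by the sink).
val (metadata: Future[ObjectMetadata], written: Future[IOResult]) =
  S3.getObject("my-bucket", "reports/latest.csv")              // hypothetical bucket and key
    .toMat(FileIO.toPath(Paths.get("/tmp/latest.csv")))(Keep.both)
    .run()
```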
- def getObjectMetadata(bucket: String, key: String, versionId: Option[String], s3Headers: S3Headers): Source[Option[ObjectMetadata], NotUsed]
Gets the metadata for an S3 Object
- bucket
the s3 bucket name
- key
the s3 object key
- versionId
optional version id of the object
- s3Headers
any headers you want to add
- returns
A Source containing a scala.Option that will be scala.None in case the object does not exist
- def getObjectMetadata(bucket: String, key: String, versionId: Option[String] = None, sse: Option[ServerSideEncryption] = None): Source[Option[ObjectMetadata], NotUsed]
Gets the metadata for an S3 Object (usage sketch below)
- bucket
the s3 bucket name
- key
the s3 object key
- versionId
optional version id of the object
- sse
the server side encryption to use
- returns
A Source containing a scala.Option that will be scala.None in case the object does not exist
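A sketch of a HEAD-style metadata lookup; bucket and key are placeholders, and only the content length is inspected.

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.s3.ObjectMetadata
import org.apache.pekko.stream.connectors.s3.scaladsl.S3
import org.apache.pekko.stream.scaladsl.Sink
import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem("s3-docs") // hypothetical system name
import system.dispatcher

val maybeMeta: Future[Option[ObjectMetadata]] =
  S3.getObjectMetadata("my-bucket", "reports/latest.csv") // hypothetical bucket and key
    .runWith(Sink.head)

maybeMeta.foreach {
  case Some(meta) => println(s"content length: ${meta.contentLength}")
  case None       => println("object does not exist")
}
```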
- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- def listBucket(bucket: String, delimiter: String, prefix: Option[String] = None, s3Headers: S3Headers = S3Headers.empty): Source[ListBucketResultContents, NotUsed]
Will return a source of object metadata for a given bucket and delimiter with optional prefix using version 2 of the List Bucket API. This will automatically page through all keys with the given parameters.
The pekko.connectors.s3.list-bucket-api-version can be set to 1 to use the older API version 1.
- bucket
Which bucket that you list object metadata for
- delimiter
Delimiter to use for listing only one level of hierarchy
- prefix
Prefix of the keys you want to list under passed bucket
- s3Headers
any headers you want to add
- returns
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html (version 2 API)
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html (version 1 API)
- def listBucket(bucket: String, prefix: Option[String], s3Headers: S3Headers): Source[ListBucketResultContents, NotUsed]
Will return a source of object metadata for a given bucket with optional prefix using version 2 of the List Bucket API. This will automatically page through all keys with the given parameters.
The pekko.connectors.s3.list-bucket-api-version can be set to 1 to use the older API version 1.
- bucket
Which bucket that you list object metadata for
- prefix
Prefix of the keys you want to list under passed bucket
- s3Headers
any headers you want to add
- returns
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html (version 2 API)
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html (version 1 API)
- def listBucket(bucket: String, prefix: Option[String]): Source[ListBucketResultContents, NotUsed]
Will return a source of object metadata for a given bucket with optional prefix using version 2 of the List Bucket API. This will automatically page through all keys with the given parameters (usage sketch below).
The pekko.connectors.s3.list-bucket-api-version can be set to 1 to use the older API version 1.
- bucket
Which bucket that you list object metadata for
- prefix
Prefix of the keys you want to list under passed bucket
- returns
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html (version 2 API)
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html (version 1 API)
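A sketch of listing all keys under a prefix and collecting the entries; bucket and prefix are placeholders.

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.s3.ListBucketResultContents
import org.apache.pekko.stream.connectors.s3.scaladsl.S3
import org.apache.pekko.stream.scaladsl.Sink
import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem("s3-docs") // hypothetical system name

// Pages through every key under the "invoices/" prefix and collects the listing entries.
val entries: Future[Seq[ListBucketResultContents]] =
  S3.listBucket("my-bucket", Some("invoices/")) // hypothetical bucket and prefix
    .runWith(Sink.seq)
```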
- def listBucketAndCommonPrefixes(bucket: String, delimiter: String, prefix: Option[String] = None, s3Headers: S3Headers = S3Headers.empty): Source[(Seq[ListBucketResultContents], Seq[ListBucketResultCommonPrefixes]), NotUsed]
Will return a source of object metadata and common prefixes for a given bucket and delimiter with optional prefix using version 2 of the List Bucket API. This will automatically page through all keys with the given parameters.
The pekko.connectors.s3.list-bucket-api-version can be set to 1 to use the older API version 1.
- bucket
Which bucket that you list object metadata for
- delimiter
Delimiter to use for listing only one level of hierarchy
- prefix
Prefix of the keys you want to list under passed bucket
- s3Headers
any headers you want to add
- returns
Source of (Seq of ListBucketResultContents, Seq of ListBucketResultCommonPrefixes)
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html (version 2 API)
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html (version 1 API)
https://docs.aws.amazon.com/AmazonS3/latest/dev/ListingKeysHierarchy.html (prefix and delimiter documentation)
- def listBuckets(s3Headers: S3Headers): Source[ListBucketsResultContents, NotUsed]
Will return a list containing all of the buckets for the current AWS account
- s3Headers
any headers you want to add
- returns
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBuckets.html
- def listBuckets(): Source[ListBucketsResultContents, NotUsed]
Will return a list containing all of the buckets for the current AWS account
- returns
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBuckets.html
- def listMultipartUpload(bucket: String, prefix: Option[String], s3Headers: S3Headers): Source[ListMultipartUploadResultUploads, NotUsed]
Will return in progress or aborted multipart uploads. This will automatically page through all keys with the given parameters.
- bucket
Which bucket that you list in-progress multipart uploads for
- prefix
Prefix of the keys you want to list under passed bucket
- s3Headers
any headers you want to add
- returns
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html
- def listMultipartUpload(bucket: String, prefix: Option[String]): Source[ListMultipartUploadResultUploads, NotUsed]
Will return in progress or aborted multipart uploads. This will automatically page through all keys with the given parameters (usage sketch below).
- bucket
Which bucket that you list in-progress multipart uploads for
- prefix
Prefix of the keys you want to list under passed bucket
- returns
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html
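A sketch of listing in-progress uploads, for example to find stale uploads to abort; the bucket name is a placeholder and the key and uploadId accessors of ListMultipartUploadResultUploads are assumed.

```scala
import org.apache.pekko.Done
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.s3.scaladsl.S3
import org.apache.pekko.stream.scaladsl.Sink
import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem("s3-docs") // hypothetical system name

// Print each in-progress multipart upload so it can be inspected or aborted later.
val listed: Future[Done] =
  S3.listMultipartUpload("my-bucket", None) // hypothetical bucket, no prefix
    .runWith(Sink.foreach(upload => println(s"${upload.key} -> ${upload.uploadId}")))
```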
- def listMultipartUploadAndCommonPrefixes(bucket: String, delimiter: String, prefix: Option[String] = None, s3Headers: S3Headers = S3Headers.empty): Source[(Seq[ListMultipartUploadResultUploads], Seq[CommonPrefixes]), NotUsed]
Will return in progress or aborted multipart uploads with optional prefix and delimiter. This will automatically page through all keys with the given parameters.
- bucket
Which bucket that you list in-progress multipart uploads for
- delimiter
Delimiter to use for listing only one level of hierarchy
- prefix
Prefix of the keys you want to list under passed bucket
- s3Headers
any headers you want to add
- returns
Source of (Seq of ListMultipartUploadResultUploads, Seq of CommonPrefixes)
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html
- def listObjectVersions(bucket: String, delimiter: String, prefix: Option[String], s3Headers: S3Headers): Source[(Seq[ListObjectVersionsResultVersions], Seq[DeleteMarkers]), NotUsed]
List all versioned objects for a bucket with optional prefix and delimiter. This will automatically page through all keys with the given parameters.
- bucket
Which bucket that you list object versions for
- delimiter
Delimiter to use for listing only one level of hierarchy
- prefix
Prefix of the keys you want to list under passed bucket
- s3Headers
any headers you want to add
- returns
Source of (Seq of ListObjectVersionsResultVersions, Seq of DeleteMarkers)
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html
- def listObjectVersions(bucket: String, prefix: Option[String], s3Headers: S3Headers): Source[(Seq[ListObjectVersionsResultVersions], Seq[DeleteMarkers]), NotUsed]
List all versioned objects for a bucket with optional prefix. This will automatically page through all keys with the given parameters.
- bucket
Which bucket that you list object versions for
- prefix
Prefix of the keys you want to list under passed bucket
- s3Headers
any headers you want to add
- returns
Source of (Seq of ListObjectVersionsResultVersions, Seq of DeleteMarkers)
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html
- def listObjectVersions(bucket: String, prefix: Option[String]): Source[(Seq[ListObjectVersionsResultVersions], Seq[DeleteMarkers]), NotUsed]
List all versioned objects for a bucket with optional prefix. This will automatically page through all keys with the given parameters.
- bucket
Which bucket that you list object versions for
- prefix
Prefix of the keys you want to list under passed bucket
- returns
Source of (Seq of ListObjectVersionsResultVersions, Seq of DeleteMarkers)
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html
- def listObjectVersionsAndCommonPrefixes(bucket: String, delimiter: String, prefix: Option[String], s3Headers: S3Headers): Source[(Seq[ListObjectVersionsResultVersions], Seq[DeleteMarkers], Seq[CommonPrefixes]), NotUsed]
List all versioned objects for a bucket with optional prefix and delimiter. This will automatically page through all keys with the given parameters.
- bucket
Which bucket that you list object versions for
- delimiter
Delimiter to use for listing only one level of hierarchy
- prefix
Prefix of the keys you want to list under passed bucket
- s3Headers
any headers you want to add
- returns
Source of (Seq of ListObjectVersionsResultVersions, Seq of DeleteMarkers, Seq of CommonPrefixes)
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html
- def listParts(bucket: String, key: String, uploadId: String, s3Headers: S3Headers): Source[ListPartsResultParts, NotUsed]
List uploaded parts for a specific upload. This will automatically page through all keys with the given parameters.
- bucket
Under which bucket the upload parts are contained
- key
The key where the parts were uploaded to
- uploadId
Unique identifier of the upload for which you want to list the uploaded parts
- s3Headers
any headers you want to add
- returns
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html
- def listParts(bucket: String, key: String, uploadId: String): Source[ListPartsResultParts, NotUsed]
List uploaded parts for a specific upload. This will automatically page through all keys with the given parameters.
- bucket
Under which bucket the upload parts are contained
- key
The key where the parts were uploaded to
- uploadId
Unique identifier of the upload for which you want to list the uploaded parts
- returns
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html
- def makeBucket(bucketName: String, s3Headers: S3Headers)(implicit system: ClassicActorSystemProvider, attr: Attributes): Future[Done]
Create new bucket with a given name
- def makeBucket(bucketName: String)(implicit system: ClassicActorSystemProvider, attr: Attributes = Attributes()): Future[Done]
Create new bucket with a given name
- def makeBucketSource(bucketName: String, s3Headers: S3Headers): Source[Done, NotUsed]
Create new bucket with a given name
- def makeBucketSource(bucketName: String): Source[Done, NotUsed]
Create new bucket with a given name (usage sketch below)
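A sketch showing both the Future-based and the Source-based bucket creation; the bucket names and system name are placeholders.

```scala
import org.apache.pekko.Done
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.s3.scaladsl.S3
import org.apache.pekko.stream.scaladsl.Sink
import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem("s3-docs") // hypothetical system name

// Future-based variant: the implicit system provides the materialization context.
val created: Future[Done] = S3.makeBucket("my-new-bucket") // hypothetical bucket name

// Source-based variant: nothing happens until the stream is run.
val createdViaSource: Future[Done] =
  S3.makeBucketSource("my-other-bucket").runWith(Sink.ignore) // hypothetical bucket name
```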
- def multipartCopy(sourceBucket: String, sourceKey: String, targetBucket: String, targetKey: String, sourceVersionId: Option[String] = None, contentType: ContentType = ContentTypes.`application/octet-stream`, s3Headers: S3Headers = S3Headers.empty, chunkSize: Int = MinChunkSize, chunkingParallelism: Int = 4): RunnableGraph[Future[MultipartUploadResult]]
Copy an S3 object from source bucket to target bucket using multipart copy upload (usage sketch below)
- sourceBucket
source s3 bucket name
- sourceKey
source s3 key
- targetBucket
target s3 bucket name
- targetKey
target s3 key
- sourceVersionId
optional version id of source object, if the versioning is enabled in source bucket
- contentType
an optional ContentType
- s3Headers
any headers you want to add
- chunkSize
the size of the requests sent to S3, minimum MinChunkSize
- chunkingParallelism
the number of parallel requests used for the upload, defaults to 4
- returns
a runnable graph which upon materialization will return a Future containing the results of the copy operation.
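A sketch of a server-side multipart copy; all bucket and key values are placeholders.

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.s3.MultipartUploadResult
import org.apache.pekko.stream.connectors.s3.scaladsl.S3
import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem("s3-docs") // hypothetical system name

// The copy is described as a RunnableGraph; nothing happens until run() is called.
val copied: Future[MultipartUploadResult] =
  S3.multipartCopy(
    sourceBucket = "my-bucket",           // hypothetical
    sourceKey = "reports/latest.csv",     // hypothetical
    targetBucket = "my-archive-bucket",   // hypothetical
    targetKey = "reports/2024/latest.csv" // hypothetical
  ).run()
```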
- def multipartUpload(bucket: String, key: String, contentType: ContentType = ContentTypes.`application/octet-stream`, metaHeaders: MetaHeaders = MetaHeaders(Map()), cannedAcl: CannedAcl = CannedAcl.Private, chunkSize: Int = MinChunkSize, chunkingParallelism: Int = 4, sse: Option[ServerSideEncryption] = None): Sink[ByteString, Future[MultipartUploadResult]]
Uploads an S3 Object by making multiple requests (usage sketch below)
- bucket
the s3 bucket name
- key
the s3 object key
- contentType
an optional ContentType
- metaHeaders
any meta-headers you want to add
- cannedAcl
a CannedAcl, defaults to CannedAcl.Private
- chunkSize
the size of the requests sent to S3, minimum MinChunkSize
- chunkingParallelism
the number of parallel requests used for the upload, defaults to 4
- returns
a Sink that accepts ByteString's and materializes to a Future of MultipartUploadResult
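A sketch of the most common use: streaming a local file into the multipart upload sink; the file path, bucket and key are placeholders.

```scala
import java.nio.file.Paths
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.s3.MultipartUploadResult
import org.apache.pekko.stream.connectors.s3.scaladsl.S3
import org.apache.pekko.stream.scaladsl.FileIO
import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem("s3-docs") // hypothetical system name

// Stream a local file into S3; the connector splits it into parts of at least MinChunkSize.
val uploaded: Future[MultipartUploadResult] =
  FileIO.fromPath(Paths.get("/tmp/backup.tar.gz"))                      // hypothetical file
    .runWith(S3.multipartUpload("my-bucket", "backups/backup.tar.gz"))  // hypothetical bucket and key
```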
- def multipartUploadWithContext[C](bucket: String, key: String, chunkUploadSink: Sink[(UploadPartResponse, Iterable[C]), _], contentType: ContentType = ContentTypes.`application/octet-stream`, metaHeaders: MetaHeaders = MetaHeaders(Map()), cannedAcl: CannedAcl = CannedAcl.Private, chunkSize: Int = MinChunkSize, chunkingParallelism: Int = 4, sse: Option[ServerSideEncryption] = None): Sink[(ByteString, C), Future[MultipartUploadResult]]
Uploads an S3 Object by making multiple requests. Unlike multipartUpload, this version allows you to pass in a context (typically from a SourceWithContext/FlowWithContext) along with a chunkUploadSink that defines how to act whenever a chunk is uploaded.
Note that this version of resuming multipart upload ignores buffering.
- C
The Context that is passed along with the
ByteString
- bucket
the s3 bucket name
- key
the s3 object key
- chunkUploadSink
A sink that acts as a callback which gets executed whenever an entire chunk is uploaded to S3 (successfully or unsuccessfully). Since each chunk can contain more than one emitted element from the original flow/source, you get provided with the list of contexts. The internal implementation uses Flow.alsoTo for chunkUploadSink, which means that backpressure is applied to the upload stream if chunkUploadSink is too slow; likewise, any failure will also be propagated to the upload stream. Sink materialization is also shared with the returned Sink.
- contentType
an optional ContentType
- metaHeaders
any meta-headers you want to add
- cannedAcl
a CannedAcl, defaults to CannedAcl.Private
- chunkSize
the size of the requests sent to S3, minimum MinChunkSize
- chunkingParallelism
the number of parallel requests used for the upload, defaults to 4
- returns
a Sink that accepts (ByteString, C)'s and materializes to a Future of MultipartUploadResult
- def multipartUploadWithHeaders(bucket: String, key: String, contentType: ContentType = ContentTypes.`application/octet-stream`, chunkSize: Int = MinChunkSize, chunkingParallelism: Int = 4, s3Headers: S3Headers = S3Headers.empty): Sink[ByteString, Future[MultipartUploadResult]]
Uploads an S3 Object by making multiple requests
- bucket
the s3 bucket name
- key
the s3 object key
- contentType
an optional ContentType
- chunkSize
the size of the requests sent to S3, minimum MinChunkSize
- chunkingParallelism
the number of parallel requests used for the upload, defaults to 4
- s3Headers
any headers you want to add
- returns
a Sink that accepts ByteString's and materializes to a Future of MultipartUploadResult
- def multipartUploadWithHeadersAndContext[C](bucket: String, key: String, chunkUploadSink: Sink[(UploadPartResponse, Iterable[C]), _], contentType: ContentType = ContentTypes.`application/octet-stream`, chunkSize: Int = MinChunkSize, chunkingParallelism: Int = 4, s3Headers: S3Headers = S3Headers.empty): Sink[(ByteString, C), Future[MultipartUploadResult]]
Uploads an S3 Object by making multiple requests. Unlike multipartUploadWithHeaders, this version allows you to pass in a context (typically from a SourceWithContext/FlowWithContext) along with a chunkUploadSink that defines how to act whenever a chunk is uploaded.
Note that this version of resuming multipart upload ignores buffering.
- C
The Context that is passed along with the
ByteString
- bucket
the s3 bucket name
- key
the s3 object key
- chunkUploadSink
A sink that acts as a callback which gets executed whenever an entire chunk is uploaded to S3 (successfully or unsuccessfully). Since each chunk can contain more than one emitted element from the original flow/source, you get provided with the list of contexts. The internal implementation uses Flow.alsoTo for chunkUploadSink, which means that backpressure is applied to the upload stream if chunkUploadSink is too slow; likewise, any failure will also be propagated to the upload stream. Sink materialization is also shared with the returned Sink.
- contentType
an optional ContentType
- chunkSize
the size of the requests sent to S3, minimum MinChunkSize
- chunkingParallelism
the number of parallel requests used for the upload, defaults to 4
- s3Headers
any headers you want to add
- returns
a Sink that accepts (ByteString, C)'s and materializes to a Future of MultipartUploadResult
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- def putBucketVersioning(bucketName: String, bucketVersioning: BucketVersioning, s3Headers: S3Headers)(implicit system: ClassicActorSystemProvider, attributes: Attributes): Future[Done]
Sets the versioning state of an existing bucket.
- bucketName
Bucket name
- bucketVersioning
The state that you want to update
- s3Headers
any headers you want to add
- returns
Future of type Done as API doesn't return any additional information
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketVersioning.html
- def putBucketVersioning(bucketName: String, bucketVersioning: BucketVersioning)(implicit system: ClassicActorSystemProvider, attributes: Attributes = Attributes()): Future[Done]
Sets the versioning state of an existing bucket.
- def putBucketVersioningSource(bucketName: String, bucketVersioning: BucketVersioning, s3Headers: S3Headers): Source[Done, NotUsed]
Sets the versioning state of an existing bucket.
- bucketName
Bucket name
- bucketVersioning
The state that you want to update
- s3Headers
any headers you want to add
- returns
Source of type Done as API doesn't return any additional information
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketVersioning.html
- def putBucketVersioningSource(bucketName: String, bucketVersioning: BucketVersioning): Source[Done, NotUsed]
Sets the versioning state of an existing bucket.
- def putObject(bucket: String, key: String, data: Source[ByteString, _], contentLength: Long, contentType: ContentType = ContentTypes.`application/octet-stream`, s3Headers: S3Headers): Source[ObjectMetadata, NotUsed]
Uploads an S3 Object; use this for small files and multipartUpload for bigger ones (usage sketch below)
- bucket
the s3 bucket name
- key
the s3 object key
- data
a Stream of ByteString
- contentLength
the number of bytes that will be uploaded (required!)
- contentType
an optional ContentType
- s3Headers
any headers you want to add
- returns
a Source containing the ObjectMetadata of the uploaded S3 Object
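A sketch of a single-request upload of a small in-memory payload; bucket, key and content are placeholders, and S3Headers.empty is passed explicitly since this overload has no default for it.

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.http.scaladsl.model.ContentTypes
import org.apache.pekko.stream.connectors.s3.{ ObjectMetadata, S3Headers }
import org.apache.pekko.stream.connectors.s3.scaladsl.S3
import org.apache.pekko.stream.scaladsl.{ Sink, Source }
import org.apache.pekko.util.ByteString
import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem("s3-docs") // hypothetical system name

val body = ByteString("hello, S3") // small payload, so putObject is appropriate

val stored: Future[ObjectMetadata] =
  S3.putObject(
    "my-bucket",                        // hypothetical bucket
    "greetings/hello.txt",              // hypothetical key
    Source.single(body),                // the data stream
    body.length,                        // contentLength must match the streamed bytes
    ContentTypes.`text/plain(UTF-8)`,
    S3Headers.empty
  ).runWith(Sink.head)
```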
- def request(bucket: String, key: String, method: HttpMethod = HttpMethods.GET, versionId: Option[String] = None, s3Headers: S3Headers = S3Headers.empty): Source[HttpResponse, NotUsed]
Use this for low-level access to S3 (usage sketch below).
- bucket
the s3 bucket name
- key
the s3 object key
- method
the HttpMethod to use when making the request
- versionId
optional version id of the object
- s3Headers
any headers you want to add
- returns
a raw HTTP response from S3
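A sketch of issuing a raw HEAD request and inspecting the plain HTTP response; bucket and key are placeholders, and the response entity is discarded after use.

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.http.scaladsl.model.{ HttpMethods, HttpResponse }
import org.apache.pekko.stream.connectors.s3.scaladsl.S3
import org.apache.pekko.stream.scaladsl.Sink
import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem("s3-docs") // hypothetical system name
import system.dispatcher

val response: Future[HttpResponse] =
  S3.request("my-bucket", "reports/latest.csv", method = HttpMethods.HEAD) // hypothetical bucket and key
    .runWith(Sink.head)

response.foreach { resp =>
  println(s"status: ${resp.status}")
  resp.entity.discardBytes() // always consume or discard the entity
}
```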
- def resumeMultipartUpload(bucket: String, key: String, uploadId: String, previousParts: Iterable[Part], contentType: ContentType = ContentTypes.`application/octet-stream`, metaHeaders: MetaHeaders = MetaHeaders(Map()), cannedAcl: CannedAcl = CannedAcl.Private, chunkSize: Int = MinChunkSize, chunkingParallelism: Int = 4, sse: Option[ServerSideEncryption] = None): Sink[ByteString, Future[MultipartUploadResult]]
Resumes from a previously aborted multipart upload by providing the uploadId and previous upload part identifiers
- bucket
the s3 bucket name
- key
the s3 object key
- uploadId
the upload that you want to resume
- previousParts
The previously uploaded parts ending just before when this upload will commence
- contentType
an optional ContentType
- metaHeaders
any meta-headers you want to add
- cannedAcl
a CannedAcl, defaults to CannedAcl.Private
- chunkSize
the size of the requests sent to S3, minimum MinChunkSize
- chunkingParallelism
the number of parallel requests used for the upload, defaults to 4
- returns
a Sink that accepts ByteString's and materializes to a Future of MultipartUploadResult
- def resumeMultipartUploadWithContext[C](bucket: String, key: String, uploadId: String, previousParts: Iterable[Part], chunkUploadSink: Sink[(UploadPartResponse, Iterable[C]), _], contentType: ContentType = ContentTypes.`application/octet-stream`, metaHeaders: MetaHeaders = MetaHeaders(Map()), cannedAcl: CannedAcl = CannedAcl.Private, chunkSize: Int = MinChunkSize, chunkingParallelism: Int = 4, sse: Option[ServerSideEncryption] = None): Sink[(ByteString, C), Future[MultipartUploadResult]]
Resumes from a previously aborted multipart upload by providing the uploadId and previous upload part identifiers. Unlike resumeMultipartUpload, this version allows you to pass in a context (typically from a SourceWithContext/FlowWithContext) along with a chunkUploadSink that defines how to act whenever a chunk is uploaded.
Note that this version of resuming multipart upload ignores buffering.
- C
The Context that is passed along with the
ByteString
- bucket
the s3 bucket name
- key
the s3 object key
- uploadId
the upload that you want to resume
- previousParts
The previously uploaded parts ending just before when this upload will commence
- chunkUploadSink
A sink that acts as a callback which gets executed whenever an entire chunk is uploaded to S3 (successfully or unsuccessfully). Since each chunk can contain more than one emitted element from the original flow/source, you get provided with the list of contexts. The internal implementation uses Flow.alsoTo for chunkUploadSink, which means that backpressure is applied to the upload stream if chunkUploadSink is too slow; likewise, any failure will also be propagated to the upload stream. Sink materialization is also shared with the returned Sink.
- contentType
an optional ContentType
- metaHeaders
any meta-headers you want to add
- cannedAcl
a CannedAcl, defaults to CannedAcl.Private
- chunkSize
the size of the requests sent to S3, minimum MinChunkSize
- chunkingParallelism
the number of parallel requests used for the upload, defaults to 4
- returns
a Sink that accepts (ByteString, C)'s and materializes to a Future of MultipartUploadResult
- def resumeMultipartUploadWithHeaders(bucket: String, key: String, uploadId: String, previousParts: Iterable[Part], contentType: ContentType = ContentTypes.`application/octet-stream`, chunkSize: Int = MinChunkSize, chunkingParallelism: Int = 4, s3Headers: S3Headers = S3Headers.empty): Sink[ByteString, Future[MultipartUploadResult]]
Resumes from a previously aborted multipart upload by providing the uploadId and previous upload part identifiers. Unlike resumeMultipartUpload, this version allows you to pass in a context (typically from a SourceWithContext/FlowWithContext) along with a chunkUploadSink that defines how to act whenever a chunk is uploaded.
- bucket
the s3 bucket name
- key
the s3 object key
- uploadId
the upload that you want to resume
- previousParts
The previously uploaded parts ending just before when this upload will commence
- contentType
an optional ContentType
- chunkSize
the size of the requests sent to S3, minimum MinChunkSize
- chunkingParallelism
the number of parallel requests used for the upload, defaults to 4
- s3Headers
any headers you want to add
- returns
a Sink that accepts ByteString's and materializes to a Future of MultipartUploadResult
- def resumeMultipartUploadWithHeadersAndContext[C](bucket: String, key: String, uploadId: String, previousParts: Iterable[Part], chunkUploadSink: Sink[(UploadPartResponse, Iterable[C]), _], contentType: ContentType = ContentTypes.`application/octet-stream`, chunkSize: Int = MinChunkSize, chunkingParallelism: Int = 4, s3Headers: S3Headers = S3Headers.empty): Sink[(ByteString, C), Future[MultipartUploadResult]]
Resumes from a previously aborted multipart upload by providing the uploadId and previous upload part identifiers. Unlike resumeMultipartUploadWithHeaders, this version allows you to pass in a context (typically from a SourceWithContext/FlowWithContext) along with a chunkUploadSink that defines how to act whenever a chunk is uploaded.
Note that this version of resuming multipart upload ignores buffering.
- C
The Context that is passed along with the
ByteString
- bucket
the s3 bucket name
- key
the s3 object key
- uploadId
the upload that you want to resume
- previousParts
The previously uploaded parts ending just before when this upload will commence
- chunkUploadSink
A sink that acts as a callback which gets executed whenever an entire chunk is uploaded to S3 (successfully or unsuccessfully). Since each chunk can contain more than one emitted element from the original flow/source, you get provided with the list of contexts. The internal implementation uses Flow.alsoTo for chunkUploadSink, which means that backpressure is applied to the upload stream if chunkUploadSink is too slow; likewise, any failure will also be propagated to the upload stream. Sink materialization is also shared with the returned Sink.
- contentType
an optional ContentType
- chunkSize
the size of the requests sent to S3, minimum MinChunkSize
- chunkingParallelism
the number of parallel requests used for the upload, defaults to 4
- s3Headers
any headers you want to add
- returns
a Sink that accepts (ByteString, C)'s and materializes to a Future of MultipartUploadResult
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- def toString(): String
- Definition Classes
- AnyRef → Any
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
Deprecated Value Members
- def download(bucket: String, key: String, range: Option[ByteRange], versionId: Option[String], s3Headers: S3Headers): Source[Option[(Source[ByteString, NotUsed], ObjectMetadata)], NotUsed]
Downloads an S3 Object
- bucket
the s3 bucket name
- key
the s3 object key
- range
[optional] the ByteRange you want to download
- s3Headers
any headers you want to add
- returns
The source will emit an empty Option if the object cannot be found. Otherwise the Option will contain a tuple of the object's data and metadata.
- Annotations
- @deprecated
- Deprecated
(Since version 4.0.0) Use S3.getObject instead
- def download(bucket: String, key: String, range: Option[ByteRange] = None, versionId: Option[String] = None, sse: Option[ServerSideEncryption] = None): Source[Option[(Source[ByteString, NotUsed], ObjectMetadata)], NotUsed]
Downloads an S3 Object
- bucket
the s3 bucket name
- key
the s3 object key
- range
[optional] the ByteRange you want to download
- sse
[optional] the server side encryption used on upload
- returns
The source will emit an empty Option if the object cannot be found. Otherwise the Option will contain a tuple of the object's data and metadata.
- Annotations
- @deprecated
- Deprecated
(Since version 4.0.0) Use S3.getObject instead