object S3
Value Members
- def checkIfBucketExists(bucketName: String, system: ClassicActorSystemProvider): CompletionStage[BucketAccess]
Checks whether the bucket exists and the user has rights to perform the ListBucket operation
- bucketName
bucket name
- system
the actor system which provides the materializer to run with
- returns
CompletionStage of type BucketAccess
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html
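For illustration, a minimal sketch in Scala calling this javadsl method; the actor-system name, bucket name, and result handling are assumptions for the example, not part of the API:

```scala
import java.util.concurrent.CompletionStage

import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.s3.BucketAccess
import org.apache.pekko.stream.connectors.s3.javadsl.S3

object CheckBucketExample extends App {
  val system: ActorSystem = ActorSystem("s3-example") // illustrative name

  // Issues a HeadBucket request; the stage completes with a BucketAccess
  // state (access granted, access denied, or bucket not existing).
  val access: CompletionStage[BucketAccess] =
    S3.checkIfBucketExists("my-bucket", system) // "my-bucket" is a placeholder

  access.thenAccept(state => println(s"bucket state: $state"))
}
```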
- def checkIfBucketExists(bucketName: String, system: ClassicActorSystemProvider, attributes: Attributes, s3Headers: S3Headers): CompletionStage[BucketAccess]
Checks whether the bucket exists and the user has rights to perform the ListBucket operation
- bucketName
bucket name
- system
the actor system which provides the materializer to run with
- attributes
attributes to run request with
- s3Headers
any headers you want to add
- returns
CompletionStage of type BucketAccess
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html
- def checkIfBucketExists(bucketName: String, system: ClassicActorSystemProvider, attributes: Attributes): CompletionStage[BucketAccess]
Checks whether the bucket exists and the user has rights to perform the ListBucket operation
- bucketName
bucket name
- system
the actor system which provides the materializer to run with
- attributes
attributes to run request with
- returns
CompletionStage of type BucketAccess
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html
- def checkIfBucketExistsSource(bucketName: String, s3Headers: S3Headers): Source[BucketAccess, NotUsed]
Checks whether the bucket exists and the user has rights to perform the ListBucket operation
- bucketName
bucket name
- s3Headers
any headers you want to add
- returns
Source of type BucketAccess
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html
- def checkIfBucketExistsSource(bucketName: String): Source[BucketAccess, NotUsed]
Checks whether the bucket exists and the user has rights to perform the ListBucket operation
- bucketName
bucket name
- returns
Source of type BucketAccess
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html
- def completeMultipartUpload(bucket: String, key: String, uploadId: String, parts: Iterable[Part], s3Headers: S3Headers)(implicit system: ClassicActorSystemProvider, attributes: Attributes): CompletionStage[MultipartUploadResult]
Complete a multipart upload with an already given list of parts
- bucket
the s3 bucket name
- key
the s3 object key
- uploadId
the upload that you want to complete
- parts
A list of all of the parts for the multipart upload
- s3Headers
any headers you want to add
- returns
CompletionStage of type MultipartUploadResult
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html
- def completeMultipartUpload(bucket: String, key: String, uploadId: String, parts: Iterable[Part])(implicit system: ClassicActorSystemProvider, attributes: Attributes = Attributes()): CompletionStage[MultipartUploadResult]
Complete a multipart upload with an already given list of parts
- bucket
the s3 bucket name
- key
the s3 object key
- uploadId
the upload that you want to complete
- parts
A list of all of the parts for the multipart upload
- returns
CompletionStage of type MultipartUploadResult
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html
- def deleteBucket(bucketName: String, system: ClassicActorSystemProvider): CompletionStage[Done]
Delete bucket with a given name
- bucketName
bucket name
- system
the actor system which provides the materializer to run with
- returns
CompletionStage of type Done as API doesn't return any additional information
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html
- def deleteBucket(bucketName: String, system: ClassicActorSystemProvider, attributes: Attributes, s3Headers: S3Headers): CompletionStage[Done]
Delete bucket with a given name
- bucketName
bucket name
- system
the actor system which provides the materializer to run with
- attributes
attributes to run request with
- s3Headers
any headers you want to add
- returns
CompletionStage of type Done as API doesn't return any additional information
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html
- def deleteBucket(bucketName: String, system: ClassicActorSystemProvider, attributes: Attributes): CompletionStage[Done]
Delete bucket with a given name
- bucketName
bucket name
- system
the actor system which provides the materializer to run with
- attributes
attributes to run request with
- returns
CompletionStage of type Done as API doesn't return any additional information
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html
- def deleteBucketContents(bucket: String, deleteAllVersions: Boolean): Source[Done, NotUsed]
Deletes all S3 Objects within the given bucket
- bucket
the s3 bucket name
- deleteAllVersions
Whether to delete all object versions as well (applies to versioned buckets)
- returns
A Source that will emit pekko.Done when operation is completed
- def deleteBucketContents(bucket: String): Source[Done, NotUsed]
Deletes all S3 Objects within the given bucket
- bucket
the s3 bucket name
- returns
A Source that will emit pekko.Done when operation is completed
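A hedged sketch of running this Source (Scala against the javadsl; the bucket and actor-system names are placeholders):

```scala
import java.util.concurrent.CompletionStage

import org.apache.pekko.Done
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.javadsl.Sink
import org.apache.pekko.stream.connectors.s3.javadsl.S3

object EmptyBucketExample extends App {
  val system: ActorSystem = ActorSystem("s3-example") // illustrative

  // Nothing is deleted until the Source is materialized; Done signals completion.
  val emptied: CompletionStage[Done] =
    S3.deleteBucketContents("my-bucket", true) // deleteAllVersions = true
      .runWith(Sink.head[Done](), system)
}
```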
- def deleteBucketSource(bucketName: String, s3Headers: S3Headers): Source[Done, NotUsed]
Delete bucket with a given name
- def deleteBucketSource(bucketName: String): Source[Done, NotUsed]
Delete bucket with a given name
- def deleteObject(bucket: String, key: String, versionId: Optional[String], s3Headers: S3Headers): Source[Done, NotUsed]
Deletes an S3 Object
- bucket
the s3 bucket name
- key
the s3 object key
- versionId
optional version id of the object
- s3Headers
any headers you want to add
- returns
A Source that will emit pekko.Done when operation is completed
- def deleteObject(bucket: String, key: String, versionId: Optional[String]): Source[Done, NotUsed]
Deletes an S3 Object
- bucket
the s3 bucket name
- key
the s3 object key
- versionId
optional version id of the object
- returns
A Source that will emit pekko.Done when operation is completed
- def deleteObject(bucket: String, key: String): Source[Done, NotUsed]
Deletes an S3 Object
- bucket
the s3 bucket name
- key
the s3 object key
- returns
A Source that will emit pekko.Done when operation is completed
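For illustration, a minimal sketch in Scala; the bucket, key, and actor-system names are assumptions:

```scala
import java.util.concurrent.CompletionStage

import org.apache.pekko.Done
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.javadsl.Sink
import org.apache.pekko.stream.connectors.s3.javadsl.S3

object DeleteObjectExample extends App {
  val system: ActorSystem = ActorSystem("s3-example") // illustrative

  // The DELETE request is only sent when the Source is run.
  val deleted: CompletionStage[Done] =
    S3.deleteObject("my-bucket", "old/key.txt") // placeholder names
      .runWith(Sink.head[Done](), system)
}
```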
- def deleteObjectsByPrefix(bucket: String, prefix: Optional[String], deleteAllVersions: Boolean, s3Headers: S3Headers): Source[Done, NotUsed]
Deletes all keys which have the given prefix under the specified bucket
- bucket
the s3 bucket name
- prefix
optional s3 objects prefix
- deleteAllVersions
Whether to delete all object versions as well (applies to versioned buckets)
- s3Headers
any headers you want to add
- returns
A Source that will emit pekko.Done when operation is completed
- def deleteObjectsByPrefix(bucket: String, prefix: Optional[String], s3Headers: S3Headers): Source[Done, NotUsed]
Deletes all keys which have the given prefix under the specified bucket
- bucket
the s3 bucket name
- prefix
optional s3 objects prefix
- s3Headers
any headers you want to add
- returns
A Source that will emit pekko.Done when operation is completed
- def deleteObjectsByPrefix(bucket: String, prefix: Optional[String], deleteAllVersions: Boolean): Source[Done, NotUsed]
Deletes all keys which have the given prefix under the specified bucket
- bucket
the s3 bucket name
- prefix
optional s3 objects prefix
- deleteAllVersions
Whether to delete all object versions as well (applies to versioned buckets)
- returns
A Source that will emit pekko.Done when operation is completed
- def deleteObjectsByPrefix(bucket: String, prefix: Optional[String]): Source[Done, NotUsed]
Deletes all keys which have the given prefix under the specified bucket
- bucket
the s3 bucket name
- prefix
optional s3 objects prefix
- returns
A Source that will emit pekko.Done when operation is completed
- def deleteObjectsByPrefix(bucket: String, deleteAllVersions: Boolean): Source[Done, NotUsed]
Deletes all keys under the specified bucket
- bucket
the s3 bucket name
- deleteAllVersions
Whether to delete all object versions as well (applies to versioned buckets)
- returns
A Source that will emit pekko.Done when operation is completed
- def deleteObjectsByPrefix(bucket: String): Source[Done, NotUsed]
Deletes all keys under the specified bucket
- bucket
the s3 bucket name
- returns
A Source that will emit pekko.Done when operation is completed
- def deleteUpload(bucketName: String, key: String, uploadId: String, s3Headers: S3Headers)(implicit system: ClassicActorSystemProvider, attributes: Attributes): CompletionStage[Done]
Delete all existing parts for a specific upload
- bucketName
Which bucket the upload is inside
- key
The key for the upload
- uploadId
Unique identifier of the upload
- s3Headers
any headers you want to add
- returns
CompletionStage of type Done as API doesn't return any additional information
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html
- def deleteUpload(bucketName: String, key: String, uploadId: String)(implicit system: ClassicActorSystemProvider, attributes: Attributes = Attributes()): CompletionStage[Done]
Delete all existing parts for a specific upload id
- bucketName
Which bucket the upload is inside
- key
The key for the upload
- uploadId
Unique identifier of the upload
- returns
CompletionStage of type Done as API doesn't return any additional information
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html
- def deleteUploadSource(bucketName: String, key: String, uploadId: String, s3Headers: S3Headers): Source[Done, NotUsed]
Delete all existing parts for a specific upload
- bucketName
Which bucket the upload is inside
- key
The key for the upload
- uploadId
Unique identifier of the upload
- s3Headers
any headers you want to add
- returns
Source of type Done as API doesn't return any additional information
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html
- def deleteUploadSource(bucketName: String, key: String, uploadId: String): Source[Done, NotUsed]
Delete all existing parts for a specific upload
- bucketName
Which bucket the upload is inside
- key
The key for the upload
- uploadId
Unique identifier of the upload
- returns
Source of type Done as API doesn't return any additional information
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html
- def getBucketVersioning(bucketName: String, system: ClassicActorSystemProvider): CompletionStage[BucketVersioningResult]
Gets the versioning of an existing bucket
- bucketName
Bucket name
- system
the actor system which provides the materializer to run with
- returns
CompletionStage of type BucketVersioningResult
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketVersioning.html
- def getBucketVersioning(bucketName: String, system: ClassicActorSystemProvider, attributes: Attributes, s3Headers: S3Headers): CompletionStage[BucketVersioningResult]
Gets the versioning of an existing bucket
- bucketName
Bucket name
- system
the actor system which provides the materializer to run with
- attributes
attributes to run request with
- s3Headers
any headers you want to add
- returns
CompletionStage of type BucketVersioningResult
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketVersioning.html
- def getBucketVersioning(bucketName: String, system: ClassicActorSystemProvider, attributes: Attributes): CompletionStage[BucketVersioningResult]
Gets the versioning of an existing bucket
- bucketName
Bucket name
- system
the actor system which provides the materializer to run with
- attributes
attributes to run request with
- returns
CompletionStage of type BucketVersioningResult
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketVersioning.html
- def getBucketVersioningSource(bucketName: String, s3Headers: S3Headers): Source[BucketVersioningResult, NotUsed]
Gets the versioning of an existing bucket
- bucketName
Bucket name
- s3Headers
any headers you want to add
- returns
Source of type BucketVersioningResult
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketVersioning.html
- def getBucketVersioningSource(bucketName: String): Source[BucketVersioningResult, NotUsed]
Gets the versioning of an existing bucket
- bucketName
Bucket name
- returns
Source of type BucketVersioningResult
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketVersioning.html
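A minimal sketch (Scala, javadsl); the bucket and actor-system names are illustrative:

```scala
import java.util.concurrent.CompletionStage

import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.s3.BucketVersioningResult
import org.apache.pekko.stream.connectors.s3.javadsl.S3

object BucketVersioningExample extends App {
  val system: ActorSystem = ActorSystem("s3-example") // illustrative

  val versioning: CompletionStage[BucketVersioningResult] =
    S3.getBucketVersioning("my-bucket", system) // placeholder bucket

  versioning.thenAccept(result => println(s"versioning: $result"))
}
```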
- def getObject(bucket: String, key: String, range: ByteRange, versionId: Optional[String], s3Headers: S3Headers): Source[ByteString, CompletionStage[ObjectMetadata]]
Gets a specific byte range of an S3 Object
- bucket
the s3 bucket name
- key
the s3 object key
- range
the ByteRange you want to download
- versionId
optional version id of the object
- s3Headers
any headers you want to add
- returns
A pekko.stream.javadsl.Source containing the object's data as pekko.util.ByteString, along with a materialized value containing the pekko.stream.connectors.s3.ObjectMetadata
- def getObject(bucket: String, key: String, range: ByteRange, s3Headers: S3Headers): Source[ByteString, CompletionStage[ObjectMetadata]]
Gets a specific byte range of an S3 Object
- bucket
the s3 bucket name
- key
the s3 object key
- range
the ByteRange you want to download
- s3Headers
any headers you want to add
- returns
A pekko.stream.javadsl.Source containing the object's data as pekko.util.ByteString, along with a materialized value containing the pekko.stream.connectors.s3.ObjectMetadata
- def getObject(bucket: String, key: String, s3Headers: S3Headers): Source[ByteString, CompletionStage[ObjectMetadata]]
Gets an S3 Object
- bucket
the s3 bucket name
- key
the s3 object key
- s3Headers
any headers you want to add
- returns
A pekko.stream.javadsl.Source containing the object's data as pekko.util.ByteString, along with a materialized value containing the pekko.stream.connectors.s3.ObjectMetadata
- def getObject(bucket: String, key: String, range: ByteRange, versionId: Optional[String], sse: ServerSideEncryption): Source[ByteString, CompletionStage[ObjectMetadata]]
Gets a specific byte range of an S3 Object
- bucket
the s3 bucket name
- key
the s3 object key
- range
the ByteRange you want to download
- versionId
optional version id of the object
- sse
the server side encryption to use
- returns
A pekko.stream.javadsl.Source containing the object's data as pekko.util.ByteString, along with a materialized value containing the pekko.stream.connectors.s3.ObjectMetadata
- def getObject(bucket: String, key: String, range: ByteRange, sse: ServerSideEncryption): Source[ByteString, CompletionStage[ObjectMetadata]]
Gets a specific byte range of an S3 Object
- bucket
the s3 bucket name
- key
the s3 object key
- range
the ByteRange you want to download
- sse
the server side encryption to use
- returns
A pekko.stream.javadsl.Source containing the object's data as pekko.util.ByteString, along with a materialized value containing the pekko.stream.connectors.s3.ObjectMetadata
- def getObject(bucket: String, key: String, range: ByteRange): Source[ByteString, CompletionStage[ObjectMetadata]]
Gets a specific byte range of an S3 Object
- bucket
the s3 bucket name
- key
the s3 object key
- range
the ByteRange you want to download
- returns
A pekko.stream.javadsl.Source containing the object's data as pekko.util.ByteString, along with a materialized value containing the pekko.stream.connectors.s3.ObjectMetadata
- def getObject(bucket: String, key: String, sse: ServerSideEncryption): Source[ByteString, CompletionStage[ObjectMetadata]]
Gets an S3 Object
- bucket
the s3 bucket name
- key
the s3 object key
- sse
the server side encryption to use
- returns
A pekko.stream.javadsl.Source containing the object's data as pekko.util.ByteString, along with a materialized value containing the pekko.stream.connectors.s3.ObjectMetadata
- def getObject(bucket: String, key: String): Source[ByteString, CompletionStage[ObjectMetadata]]
Gets an S3 Object
- bucket
the s3 bucket name
- key
the s3 object key
- returns
A pekko.stream.javadsl.Source containing the object's data as pekko.util.ByteString, along with a materialized value containing the pekko.stream.connectors.s3.ObjectMetadata
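A hedged sketch of consuming the returned Source (Scala, javadsl); bucket, key, and actor-system names are assumptions:

```scala
import java.util.concurrent.CompletionStage

import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.javadsl.Sink
import org.apache.pekko.stream.connectors.s3.javadsl.S3
import org.apache.pekko.util.ByteString

object GetObjectExample extends App {
  val system: ActorSystem = ActorSystem("s3-example") // illustrative

  // Folds the streamed chunks into one ByteString; for large objects,
  // stream to a file or process chunks incrementally instead.
  val bytes: CompletionStage[ByteString] =
    S3.getObject("my-bucket", "path/to/key") // placeholders
      .runWith(Sink.fold(ByteString.empty, (acc: ByteString, chunk: ByteString) => acc ++ chunk), system)
}
```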
- def getObjectMetadata(bucket: String, key: String, s3Headers: S3Headers): Source[Optional[ObjectMetadata], NotUsed]
Gets the metadata for an S3 Object
- def getObjectMetadata(bucket: String, key: String, versionId: Optional[String], sse: ServerSideEncryption): Source[Optional[ObjectMetadata], NotUsed]
Gets the metadata for an S3 Object
- def getObjectMetadata(bucket: String, key: String, sse: ServerSideEncryption): Source[Optional[ObjectMetadata], NotUsed]
Gets the metadata for an S3 Object
- def getObjectMetadata(bucket: String, key: String): Source[Optional[ObjectMetadata], NotUsed]
Gets the metadata for an S3 Object
- def getObjectMetadataWithHeaders(bucket: String, key: String, versionId: Optional[String], s3Headers: S3Headers): Source[Optional[ObjectMetadata], NotUsed]
Gets the metadata for an S3 Object
- def listBucket(bucket: String, delimiter: String, prefix: Optional[String], s3Headers: S3Headers): Source[ListBucketResultContents, NotUsed]
Will return a source of object metadata for a given bucket with delimiter and optional prefix using version 2 of the List Bucket API. This will automatically page through all keys with the given parameters.
The pekko.connectors.s3.list-bucket-api-version setting can be set to 1 to use the older version 1 API.
- bucket
Which bucket that you list object metadata for
- delimiter
Delimiter to use for listing only one level of hierarchy
- prefix
Prefix of the keys you want to list under passed bucket
- s3Headers
any headers you want to add
- returns
Source of object metadata
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html (version 2 API)
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html (version 1 API)
- def listBucket(bucket: String, delimiter: String, prefix: Optional[String]): Source[ListBucketResultContents, NotUsed]
Will return a source of object metadata for a given bucket with delimiter and optional prefix using version 2 of the List Bucket API. This will automatically page through all keys with the given parameters.
The pekko.connectors.s3.list-bucket-api-version setting can be set to 1 to use the older version 1 API.
- bucket
Which bucket that you list object metadata for
- delimiter
Delimiter to use for listing only one level of hierarchy
- prefix
Prefix of the keys you want to list under passed bucket
- returns
Source of object metadata
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html (version 2 API)
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html (version 1 API)
- def listBucket(bucket: String, prefix: Optional[String], s3Headers: S3Headers): Source[ListBucketResultContents, NotUsed]
Will return a source of object metadata for a given bucket with optional prefix using version 2 of the List Bucket API. This will automatically page through all keys with the given parameters.
The pekko.connectors.s3.list-bucket-api-version setting can be set to 1 to use the older version 1 API.
- bucket
Which bucket that you list object metadata for
- prefix
Prefix of the keys you want to list under passed bucket
- s3Headers
any headers you want to add
- returns
Source of object metadata
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html (version 2 API)
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html (version 1 API)
- def listBucket(bucket: String, prefix: Optional[String]): Source[ListBucketResultContents, NotUsed]
Will return a source of object metadata for a given bucket with optional prefix using version 2 of the List Bucket API. This will automatically page through all keys with the given parameters.
The pekko.connectors.s3.list-bucket-api-version setting can be set to 1 to use the older version 1 API.
- bucket
Which bucket that you list object metadata for
- prefix
Prefix of the keys you want to list under passed bucket
- returns
Source of object metadata
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html (version 2 API)
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html (version 1 API)
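For illustration, a minimal listing sketch (Scala, javadsl); bucket, prefix, and actor-system names are assumptions:

```scala
import java.util.Optional

import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.javadsl.Sink
import org.apache.pekko.stream.connectors.s3.ListBucketResultContents
import org.apache.pekko.stream.connectors.s3.javadsl.S3

object ListBucketExample extends App {
  val system: ActorSystem = ActorSystem("s3-example") // illustrative

  // Pages through every key under the prefix and prints it.
  S3.listBucket("my-bucket", Optional.of("logs/2024/")) // placeholders
    .runWith(Sink.foreach((entry: ListBucketResultContents) => println(entry.key)), system)
}
```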
- def listBucketAndCommonPrefixes(bucket: String, delimiter: String, prefix: Optional[String], s3Headers: S3Headers): Source[Pair[List[ListBucketResultContents], List[ListBucketResultCommonPrefixes]], NotUsed]
Will return a source of object metadata and common prefixes for a given bucket and delimiter with optional prefix using version 2 of the List Bucket API. This will automatically page through all keys with the given parameters.
The pekko.connectors.s3.list-bucket-api-version setting can be set to 1 to use the older version 1 API.
- bucket
Which bucket that you list object metadata for
- delimiter
Delimiter to use for listing only one level of hierarchy
- prefix
Prefix of the keys you want to list under passed bucket
- s3Headers
any headers you want to add
- returns
Source of Pair of (List of ListBucketResultContents, List of ListBucketResultCommonPrefixes)
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html (version 2 API)
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html (version 1 API)
https://docs.aws.amazon.com/AmazonS3/latest/dev/ListingKeysHierarchy.html (prefix and delimiter documentation)
- def listBuckets(s3Headers: S3Headers): Source[ListBucketsResultContents, NotUsed]
Will return a list containing all of the buckets for the current AWS account
- s3Headers
any headers you want to add
- returns
Source of ListBucketsResultContents
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBuckets.html
- def listBuckets(): Source[ListBucketsResultContents, NotUsed]
Will return a list containing all of the buckets for the current AWS account
- returns
Source of ListBucketsResultContents
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBuckets.html
- def listMultipartUpload(bucket: String, prefix: Optional[String], s3Headers: S3Headers): Source[ListMultipartUploadResultUploads, NotUsed]
Will return in progress or aborted multipart uploads. This will automatically page through all keys with the given parameters.
- bucket
Which bucket that you list in-progress multipart uploads for
- prefix
Prefix of the keys you want to list under passed bucket
- s3Headers
any headers you want to add
- returns
Source of ListMultipartUploadResultUploads
- def listMultipartUpload(bucket: String, prefix: Optional[String]): Source[ListMultipartUploadResultUploads, NotUsed]
Will return in progress or aborted multipart uploads. This will automatically page through all keys with the given parameters.
- bucket
Which bucket that you list in-progress multipart uploads for
- prefix
Prefix of the keys you want to list under passed bucket
- returns
Source of ListMultipartUploadResultUploads
- def listMultipartUploadAndCommonPrefixes(bucket: String, delimiter: String, prefix: Optional[String], s3Headers: S3Headers = S3Headers.empty): Source[Pair[List[ListMultipartUploadResultUploads], List[CommonPrefixes]], NotUsed]
Will return in progress or aborted multipart uploads with optional prefix and delimiter. This will automatically page through all keys with the given parameters.
- bucket
Which bucket that you list in-progress multipart uploads for
- delimiter
Delimiter to use for listing only one level of hierarchy
- prefix
Prefix of the keys you want to list under passed bucket
- s3Headers
any headers you want to add
- returns
Source of Pair of (List of ListMultipartUploadResultUploads, List of CommonPrefixes)
- def listObjectVersions(bucket: String, delimiter: String, prefix: Optional[String], s3Headers: S3Headers): Source[Pair[List[ListObjectVersionsResultVersions], List[DeleteMarkers]], NotUsed]
List all versioned objects for a bucket with optional prefix and delimiter. This will automatically page through all keys with the given parameters.
- bucket
Which bucket that you list object versions for
- delimiter
Delimiter to use for listing only one level of hierarchy
- prefix
Prefix of the keys you want to list under passed bucket
- s3Headers
any headers you want to add
- returns
Source of Pair of (List of ListObjectVersionsResultVersions, List of DeleteMarkers)
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html
- def listObjectVersions(bucket: String, prefix: Optional[String], s3Headers: S3Headers): Source[Pair[List[ListObjectVersionsResultVersions], List[DeleteMarkers]], NotUsed]
List all versioned objects for a bucket with optional prefix. This will automatically page through all keys with the given parameters.
- bucket
Which bucket that you list object versions for
- prefix
Prefix of the keys you want to list under passed bucket
- s3Headers
any headers you want to add
- returns
Source of Pair of (List of ListObjectVersionsResultVersions, List of DeleteMarkers)
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html
- def listObjectVersions(bucket: String, prefix: Optional[String]): Source[Pair[List[ListObjectVersionsResultVersions], List[DeleteMarkers]], NotUsed]
List all versioned objects for a bucket with optional prefix. This will automatically page through all keys with the given parameters.
- bucket
Which bucket that you list object versions for
- prefix
Prefix of the keys you want to list under passed bucket
- returns
Source of Pair of (List of ListObjectVersionsResultVersions, List of DeleteMarkers)
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html
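A hedged sketch of consuming this paged Source (Scala, javadsl); the bucket, prefix, and actor-system names are assumptions:

```scala
import java.util.{ List => JList, Optional }

import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.japi.Pair
import org.apache.pekko.stream.javadsl.Sink
import org.apache.pekko.stream.connectors.s3.{ DeleteMarkers, ListObjectVersionsResultVersions }
import org.apache.pekko.stream.connectors.s3.javadsl.S3

object ListVersionsExample extends App {
  val system: ActorSystem = ActorSystem("s3-example") // illustrative

  // Each element pairs a page of object versions with its delete markers.
  S3.listObjectVersions("my-bucket", Optional.of("logs/")) // placeholders
    .runWith(
      Sink.foreach[Pair[JList[ListObjectVersionsResultVersions], JList[DeleteMarkers]]](page => println(page)),
      system)
}
```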
- def listObjectVersionsAndCommonPrefixes(bucket: String, delimiter: String, prefix: Option[String], s3Headers: S3Headers): Source[Tuple3[List[ListObjectVersionsResultVersions], List[DeleteMarkers], List[CommonPrefixes]], NotUsed]
List all versioned objects for a bucket with optional prefix and delimiter. This will automatically page through all keys with the given parameters.
- bucket
Which bucket that you list object versions for
- delimiter
Delimiter to use for listing only one level of hierarchy
- prefix
Prefix of the keys you want to list under passed bucket
- s3Headers
any headers you want to add
- returns
Source of Tuple3 of (List of ListObjectVersionsResultVersions, List of DeleteMarkers, List of CommonPrefixes)
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html
- def listParts(bucket: String, key: String, uploadId: String, s3Headers: S3Headers): Source[ListPartsResultParts, NotUsed]
List uploaded parts for a specific upload. This will automatically page through all keys with the given parameters.
- bucket
Under which bucket the upload parts are contained
- key
The key where the parts were uploaded to
- uploadId
Unique identifier of the upload for which you want to list the uploaded parts
- s3Headers
any headers you want to add
- returns
Source of ListPartsResultParts
- def listParts(bucket: String, key: String, uploadId: String): Source[ListPartsResultParts, NotUsed]
List uploaded parts for a specific upload. This will automatically page through all keys with the given parameters.
- bucket
Under which bucket the upload parts are contained
- key
The key where the parts were uploaded to
- uploadId
Unique identifier of the upload for which you want to list the uploaded parts
- returns
Source of ListPartsResultParts
- def makeBucket(bucketName: String, system: ClassicActorSystemProvider, attributes: Attributes, s3Headers: S3Headers): CompletionStage[Done]
Create new bucket with a given name
- bucketName
bucket name
- system
the actor system which provides the materializer to run with
- attributes
attributes to run request with
- s3Headers
any headers you want to add
- returns
CompletionStage of type Done as API doesn't return any additional information
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html
- def makeBucket(bucketName: String, system: ClassicActorSystemProvider): CompletionStage[Done]
Create new bucket with a given name
- bucketName
bucket name
- system
actor system to run with
- returns
CompletionStage of type Done as API doesn't return any additional information
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html
- def makeBucket(bucketName: String, system: ClassicActorSystemProvider, attributes: Attributes): CompletionStage[Done]
Create new bucket with a given name
- bucketName
bucket name
- system
actor system to run with
- attributes
attributes to run request with
- returns
CompletionStage of type Done as API doesn't return any additional information
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html
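For illustration, a minimal sketch (Scala, javadsl); the bucket and actor-system names are assumptions:

```scala
import java.util.concurrent.CompletionStage

import org.apache.pekko.Done
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.connectors.s3.javadsl.S3

object MakeBucketExample extends App {
  val system: ActorSystem = ActorSystem("s3-example") // illustrative

  // Runs CreateBucket; the stage completes with Done on success.
  val created: CompletionStage[Done] =
    S3.makeBucket("my-new-bucket", system) // placeholder bucket name
}
```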
- def makeBucketSource(bucketName: String, s3Headers: S3Headers): Source[Done, NotUsed]
Create new bucket with a given name
- def makeBucketSource(bucketName: String): Source[Done, NotUsed]
Create new bucket with a given name
- def multipartCopy(sourceBucket: String, sourceKey: String, targetBucket: String, targetKey: String): RunnableGraph[CompletionStage[MultipartUploadResult]]
Copy an S3 Object by making multiple requests.
- sourceBucket
the source s3 bucket name
- sourceKey
the source s3 key
- targetBucket
the target s3 bucket name
- targetKey
the target s3 key
- returns
the MultipartUploadResult of the uploaded S3 Object
- def multipartCopy(sourceBucket: String, sourceKey: String, targetBucket: String, targetKey: String, s3Headers: S3Headers): RunnableGraph[CompletionStage[MultipartUploadResult]]
Copy an S3 Object by making multiple requests.
- sourceBucket
the source s3 bucket name
- sourceKey
the source s3 key
- targetBucket
the target s3 bucket name
- targetKey
the target s3 key
- s3Headers
any headers you want to add
- returns
the MultipartUploadResult of the uploaded S3 Object
- def multipartCopy(sourceBucket: String, sourceKey: String, targetBucket: String, targetKey: String, contentType: ContentType, s3Headers: S3Headers): RunnableGraph[CompletionStage[MultipartUploadResult]]
Copy an S3 Object by making multiple requests.
- sourceBucket
the source s3 bucket name
- sourceKey
the source s3 key
- targetBucket
the target s3 bucket name
- targetKey
the target s3 key
- contentType
an optional ContentType
- s3Headers
any headers you want to add
- returns
the MultipartUploadResult of the uploaded S3 Object
- def multipartCopy(sourceBucket: String, sourceKey: String, targetBucket: String, targetKey: String, sourceVersionId: Optional[String], s3Headers: S3Headers): RunnableGraph[CompletionStage[MultipartUploadResult]]
Copy an S3 Object by making multiple requests.
- sourceBucket
the source s3 bucket name
- sourceKey
the source s3 key
- targetBucket
the target s3 bucket name
- targetKey
the target s3 key
- sourceVersionId
version id of source object, if the versioning is enabled in source bucket
- s3Headers
any headers you want to add
- returns
the MultipartUploadResult of the uploaded S3 Object
- def multipartCopy(sourceBucket: String, sourceKey: String, targetBucket: String, targetKey: String, sourceVersionId: Optional[String], contentType: ContentType, s3Headers: S3Headers): RunnableGraph[CompletionStage[MultipartUploadResult]]
Copy an S3 Object by making multiple requests.
- sourceBucket
the source s3 bucket name
- sourceKey
the source s3 key
- targetBucket
the target s3 bucket name
- targetKey
the target s3 key
- sourceVersionId
version id of source object, if the versioning is enabled in source bucket
- contentType
an optional ContentType
- s3Headers
any headers you want to add
- returns
the MultipartUploadResult of the uploaded S3 Object
- def multipartUpload(bucket: String, key: String): Sink[ByteString, CompletionStage[MultipartUploadResult]]
Uploads an S3 Object by making multiple requests
- bucket
the s3 bucket name
- key
the s3 object key
- returns
a Sink that accepts ByteStrings and materializes to a CompletionStage of MultipartUploadResult
- def multipartUpload(bucket: String, key: String, contentType: ContentType): Sink[ByteString, CompletionStage[MultipartUploadResult]]
Uploads an S3 Object by making multiple requests
- bucket
the s3 bucket name
- key
the s3 object key
- contentType
an optional ContentType
- returns
a Sink that accepts ByteStrings and materializes to a CompletionStage of MultipartUploadResult
- def multipartUpload(bucket: String, key: String, contentType: ContentType, s3Headers: S3Headers): Sink[ByteString, CompletionStage[MultipartUploadResult]]
Uploads an S3 Object by making multiple requests
- bucket
the s3 bucket name
- key
the s3 object key
- contentType
an optional ContentType
- s3Headers
any headers you want to add
- returns
a Sink that accepts ByteStrings and materializes to a CompletionStage of MultipartUploadResult
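A hedged sketch of feeding a file into this Sink (Scala, javadsl); the file path, bucket, key, and actor-system names are assumptions:

```scala
import java.nio.file.Paths
import java.util.concurrent.CompletionStage

import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.javadsl.FileIO
import org.apache.pekko.stream.connectors.s3.MultipartUploadResult
import org.apache.pekko.stream.connectors.s3.javadsl.S3

object MultipartUploadExample extends App {
  val system: ActorSystem = ActorSystem("s3-example") // illustrative

  // Streams the file and uploads it in parts as chunks arrive.
  val uploaded: CompletionStage[MultipartUploadResult] =
    FileIO.fromPath(Paths.get("/tmp/report.csv")) // placeholder path
      .runWith(S3.multipartUpload("my-bucket", "reports/report.csv"), system)
}
```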
- def multipartUploadWithContext[C](bucket: String, key: String, chunkUploadSink: Sink[Pair[UploadPartResponse, Iterable[C]], _]): Sink[Pair[ByteString, C], CompletionStage[MultipartUploadResult]]
Uploads an S3 Object by making multiple requests. Unlike multipartUpload, this version allows you to pass in a context (typically from a SourceWithContext/FlowWithContext) along with a chunkUploadSink that defines how to act whenever a chunk is uploaded. Note that this version of multipart upload ignores buffering.
- C
The context type that is passed along with the ByteString
- bucket
the s3 bucket name
- key
the s3 object key
- chunkUploadSink
A sink that acts as a callback, executed whenever an entire chunk is uploaded to S3 (successfully or unsuccessfully). Since each chunk can contain more than one emitted element from the original flow/source, you are provided with the list of contexts. The internal implementation uses Flow.alsoTo for chunkUploadSink, which means that backpressure is applied to the upload stream if chunkUploadSink is too slow; likewise, any failure is propagated to the upload stream. Sink materialization is also shared with the returned Sink.
- returns
a Sink that accepts Pair of (ByteString, C) elements and materializes to a CompletionStage of MultipartUploadResult
- def multipartUploadWithContext[C](bucket: String, key: String, chunkUploadSink: Sink[Pair[UploadPartResponse, Iterable[C]], _], contentType: ContentType): Sink[Pair[ByteString, C], CompletionStage[MultipartUploadResult]]
Uploads an S3 Object by making multiple requests. Unlike multipartUpload, this version allows you to pass in a context (typically from a SourceWithContext/FlowWithContext) along with a chunkUploadSink that defines how to act whenever a chunk is uploaded. Note that this version of multipart upload ignores buffering.
- C
The context type that is passed along with the ByteString
- bucket
the s3 bucket name
- key
the s3 object key
- chunkUploadSink
A sink that acts as a callback, executed whenever an entire chunk is uploaded to S3 (successfully or unsuccessfully). Since each chunk can contain more than one emitted element from the original flow/source, you are provided with the list of contexts. The internal implementation uses Flow.alsoTo for chunkUploadSink, which means that backpressure is applied to the upload stream if chunkUploadSink is too slow; likewise, any failure is propagated to the upload stream. Sink materialization is also shared with the returned Sink.
- contentType
an optional ContentType
- returns
a Sink that accepts Pair of (ByteString, C) elements and materializes to a CompletionStage of MultipartUploadResult
- def multipartUploadWithContext[C](bucket: String, key: String, chunkUploadSink: Sink[Pair[UploadPartResponse, Iterable[C]], _], contentType: ContentType, s3Headers: S3Headers): Sink[Pair[ByteString, C], CompletionStage[MultipartUploadResult]]
Uploads an S3 Object by making multiple requests. Unlike multipartUpload, this version allows you to pass in a context (typically from a SourceWithContext/FlowWithContext) along with a chunkUploadSink that defines how to act whenever a chunk is uploaded. Note that this version of multipart upload ignores buffering.
- C
The context type that is passed along with the ByteString
- bucket
the s3 bucket name
- key
the s3 object key
- chunkUploadSink
A sink that acts as a callback, executed whenever an entire chunk is uploaded to S3 (successfully or unsuccessfully). Since each chunk can contain more than one emitted element from the original flow/source, you are provided with the list of contexts. The internal implementation uses Flow.alsoTo for chunkUploadSink, which means that backpressure is applied to the upload stream if chunkUploadSink is too slow; likewise, any failure is propagated to the upload stream. Sink materialization is also shared with the returned Sink.
- contentType
an optional ContentType
- s3Headers
any headers you want to add
- returns
a Sink that accepts Pair of (ByteString, C) elements and materializes to a CompletionStage of MultipartUploadResult
- def putBucketVersioning(bucketName: String, bucketVersioning: BucketVersioning, s3Headers: S3Headers)(implicit system: ClassicActorSystemProvider, attributes: Attributes): CompletionStage[Done]
Sets the versioning state of an existing bucket.
- bucketName
Bucket name
- bucketVersioning
The state that you want to update
- s3Headers
any headers you want to add
- returns
CompletionStage of type Done as API doesn't return any additional information
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketVersioning.html
- def putBucketVersioning(bucketName: String, bucketVersioning: BucketVersioning)(implicit system: ClassicActorSystemProvider, attributes: Attributes = Attributes()): CompletionStage[Done]
Sets the versioning state of an existing bucket.
- bucketName
Bucket name
- bucketVersioning
The state that you want to update
- returns
CompletionStage of type Done as API doesn't return any additional information
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketVersioning.html
- def putBucketVersioningSource(bucketName: String, bucketVersioning: BucketVersioning, s3Headers: S3Headers): Source[Done, NotUsed]
Sets the versioning state of an existing bucket.
- bucketName
Bucket name
- bucketVersioning
The state that you want to update
- s3Headers
any headers you want to add
- returns
Source of type Done as API doesn't return any additional information
- See also
https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketVersioning.html
- def putBucketVersioningSource(bucketName: String, bucketVersioning: BucketVersioning): Source[Done, NotUsed]
Sets the versioning state of an existing bucket.
- def putObject(bucket: String, key: String, data: Source[ByteString, _], contentLength: Long): Source[ObjectMetadata, NotUsed]
Uploads an S3 Object; use this for small files and multipartUpload for bigger ones
- bucket
the s3 bucket name
- key
the s3 object key
- data
a Source of ByteString
- contentLength
the number of bytes that will be uploaded (required!)
- returns
a Source containing the ObjectMetadata of the uploaded S3 Object
- def putObject(bucket: String, key: String, data: Source[ByteString, _], contentLength: Long, contentType: ContentType): Source[ObjectMetadata, NotUsed]
Uploads an S3 Object; use this for small files and multipartUpload for bigger ones
- bucket
the s3 bucket name
- key
the s3 object key
- data
a Source of ByteString
- contentLength
the number of bytes that will be uploaded (required!)
- contentType
an optional ContentType
- returns
a Source containing the ObjectMetadata of the uploaded S3 Object
- def putObject(bucket: String, key: String, data: Source[ByteString, _], contentLength: Long, contentType: ContentType, s3Headers: S3Headers): Source[ObjectMetadata, NotUsed]
Uploads an S3 Object; use this for small files and multipartUpload for bigger ones
- bucket
the s3 bucket name
- key
the s3 object key
- data
a Source of ByteString
- contentLength
the number of bytes that will be uploaded (required!)
- contentType
an optional ContentType
- s3Headers
any additional headers for the request
- returns
a Source containing the ObjectMetadata of the uploaded S3 Object
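For illustration, a minimal sketch (Scala, javadsl); bucket, key, payload, and actor-system names are assumptions:

```scala
import java.util.concurrent.CompletionStage

import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.http.javadsl.model.ContentTypes
import org.apache.pekko.stream.javadsl.{ Sink, Source }
import org.apache.pekko.stream.connectors.s3.ObjectMetadata
import org.apache.pekko.stream.connectors.s3.javadsl.S3
import org.apache.pekko.util.ByteString

object PutObjectExample extends App {
  val system: ActorSystem = ActorSystem("s3-example") // illustrative

  val payload = ByteString.fromString("hello s3")

  // contentLength must match the number of bytes actually emitted.
  val stored: CompletionStage[ObjectMetadata] =
    S3.putObject("my-bucket", "greetings/hello.txt", // placeholders
        Source.single(payload), payload.length, ContentTypes.TEXT_PLAIN_UTF8)
      .runWith(Sink.head[ObjectMetadata](), system)
}
```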
- def request(bucket: String, key: String, versionId: Optional[String], method: HttpMethod = HttpMethods.GET, s3Headers: S3Headers = S3Headers.empty): Source[HttpResponse, NotUsed]
Use this for low-level access to S3.
- bucket
the s3 bucket name
- key
the s3 object key
- versionId
optional versionId of source object
- method
the HttpMethod to use when making the request
- s3Headers
any headers you want to add
- returns
a raw HTTP response from S3
- def request(bucket: String, key: String, method: HttpMethod, s3Headers: S3Headers): Source[HttpResponse, NotUsed]
Use this for low-level access to S3.
- bucket
the s3 bucket name
- key
the s3 object key
- method
the HttpMethod to use when making the request
- s3Headers
any headers you want to add
- returns
a raw HTTP response from S3
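A hedged sketch of a low-level call (Scala, javadsl); the bucket, key, method choice, and actor-system names are assumptions:

```scala
import java.util.concurrent.CompletionStage

import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.http.javadsl.model.{ HttpMethods, HttpResponse }
import org.apache.pekko.stream.javadsl.Sink
import org.apache.pekko.stream.connectors.s3.S3Headers
import org.apache.pekko.stream.connectors.s3.javadsl.S3

object RawRequestExample extends App {
  val system: ActorSystem = ActorSystem("s3-example") // illustrative

  // A signed HEAD request; the caller must consume or discard the entity.
  val response: CompletionStage[HttpResponse] =
    S3.request("my-bucket", "path/to/key", HttpMethods.HEAD, S3Headers.empty) // placeholders
      .runWith(Sink.head[HttpResponse](), system)
}
```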
- def resumeMultipartUpload(bucket: String, key: String, uploadId: String, previousParts: Iterable[Part]): Sink[ByteString, CompletionStage[MultipartUploadResult]]
Resumes from a previously aborted multipart upload by providing the uploadId and previous upload part identifiers
- bucket
the s3 bucket name
- key
the s3 object key
- uploadId
the upload that you want to resume
- previousParts
The previously uploaded parts ending just before when this upload will commence
- returns
a Sink that accepts ByteStrings and materializes to a CompletionStage of MultipartUploadResult
- def resumeMultipartUpload(bucket: String, key: String, uploadId: String, previousParts: Iterable[Part], contentType: ContentType): Sink[ByteString, CompletionStage[MultipartUploadResult]]
Resumes from a previously aborted multipart upload by providing the uploadId and previous upload part identifiers
- bucket
the s3 bucket name
- key
the s3 object key
- uploadId
the upload that you want to resume
- previousParts
The previously uploaded parts ending just before when this upload will commence
- contentType
an optional ContentType
- returns
a Sink that accepts ByteStrings and materializes to a CompletionStage of MultipartUploadResult
- def resumeMultipartUpload(bucket: String, key: String, uploadId: String, previousParts: Iterable[Part], contentType: ContentType, s3Headers: S3Headers): Sink[ByteString, CompletionStage[MultipartUploadResult]]
Resumes from a previously aborted multipart upload by providing the uploadId and previous upload part identifiers
- bucket
the s3 bucket name
- key
the s3 object key
- uploadId
the upload that you want to resume
- previousParts
The previously uploaded parts ending just before when this upload will commence
- contentType
an optional ContentType
- s3Headers
any headers you want to add
- returns
a Sink that accepts ByteStrings and materializes to a CompletionStage of MultipartUploadResult
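A hedged sketch of resuming an upload (Scala, javadsl). The uploadId and previousParts would be obtained from the earlier, aborted upload (for example via listMultipartUpload and listParts); the file path, bucket, and key names are assumptions:

```scala
import java.nio.file.Paths
import java.util.concurrent.CompletionStage

import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.javadsl.FileIO
import org.apache.pekko.stream.connectors.s3.{ MultipartUploadResult, Part }
import org.apache.pekko.stream.connectors.s3.javadsl.S3

object ResumeUploadExample extends App {
  val system: ActorSystem = ActorSystem("s3-example") // illustrative

  // uploadId and previousParts are placeholders supplied by the caller.
  def resume(uploadId: String, previousParts: java.lang.Iterable[Part]): CompletionStage[MultipartUploadResult] =
    FileIO.fromPath(Paths.get("/tmp/remaining.bin")) // placeholder path
      .runWith(S3.resumeMultipartUpload("my-bucket", "big/object.bin", uploadId, previousParts), system)
}
```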
- def resumeMultipartUploadWithContext[C](bucket: String, key: String, uploadId: String, previousParts: Iterable[Part], chunkUploadSink: Sink[Pair[UploadPartResponse, Iterable[C]], _]): Sink[Pair[ByteString, C], CompletionStage[MultipartUploadResult]]
Resumes from a previously aborted multipart upload by providing the uploadId and previous upload part identifiers. Unlike resumeMultipartUpload, this version allows you to pass in a context (typically from a SourceWithContext/FlowWithContext) along with a chunkUploadSink that defines how to act whenever a chunk is uploaded. Note that this version of resuming multipart upload ignores buffering.
- C
The context type that is passed along with the ByteString
- bucket
the s3 bucket name
- key
the s3 object key
- uploadId
the upload that you want to resume
- previousParts
The previously uploaded parts ending just before when this upload will commence
- chunkUploadSink
A sink that acts as a callback, executed whenever an entire chunk is uploaded to S3 (successfully or unsuccessfully). Since each chunk can contain more than one emitted element from the original flow/source, you are provided with the list of contexts. The internal implementation uses Flow.alsoTo for chunkUploadSink, which means that backpressure is applied to the upload stream if chunkUploadSink is too slow; likewise, any failure is propagated to the upload stream. Sink materialization is also shared with the returned Sink.
- returns
a Sink that accepts Pair of (ByteString, C) elements and materializes to a CompletionStage of MultipartUploadResult
- def resumeMultipartUploadWithContext[C](bucket: String, key: String, uploadId: String, previousParts: Iterable[Part], chunkUploadSink: Sink[Pair[UploadPartResponse, Iterable[C]], _], contentType: ContentType): Sink[Pair[ByteString, C], CompletionStage[MultipartUploadResult]]
Resumes from a previously aborted multipart upload by providing the uploadId and previous upload part identifiers. Unlike resumeMultipartUpload, this version allows you to pass in a context (typically from a SourceWithContext/FlowWithContext) along with a chunkUploadSink that defines how to act whenever a chunk is uploaded. Note that this version of resuming multipart upload ignores buffering.
- C
The context type that is passed along with the ByteString
- bucket
the s3 bucket name
- key
the s3 object key
- uploadId
the upload that you want to resume
- previousParts
The previously uploaded parts ending just before when this upload will commence
- chunkUploadSink
A sink that acts as a callback, executed whenever an entire chunk is uploaded to S3 (successfully or unsuccessfully). Since each chunk can contain more than one emitted element from the original flow/source, you are provided with the list of contexts. The internal implementation uses Flow.alsoTo for chunkUploadSink, which means that backpressure is applied to the upload stream if chunkUploadSink is too slow; likewise, any failure is propagated to the upload stream. Sink materialization is also shared with the returned Sink.
- contentType
an optional ContentType
- returns
a Sink that accepts Pair of (ByteString, C) elements and materializes to a CompletionStage of MultipartUploadResult
- def resumeMultipartUploadWithContext[C](bucket: String, key: String, uploadId: String, previousParts: Iterable[Part], chunkUploadSink: Sink[Pair[UploadPartResponse, Iterable[C]], _], contentType: ContentType, s3Headers: S3Headers): Sink[Pair[ByteString, C], CompletionStage[MultipartUploadResult]]
Resumes from a previously aborted multipart upload by providing the uploadId and previous upload part identifiers. Unlike resumeMultipartUpload, this version allows you to pass in a context (typically from a SourceWithContext/FlowWithContext) along with a chunkUploadSink that defines how to act whenever a chunk is uploaded. Note that this version of resuming multipart upload ignores buffering.
- C
The context type that is passed along with the ByteString
- bucket
the s3 bucket name
- key
the s3 object key
- uploadId
the upload that you want to resume
- previousParts
The previously uploaded parts ending just before when this upload will commence
- chunkUploadSink
A sink that acts as a callback, executed whenever an entire chunk is uploaded to S3 (successfully or unsuccessfully). Since each chunk can contain more than one emitted element from the original flow/source, you are provided with the list of contexts. The internal implementation uses Flow.alsoTo for chunkUploadSink, which means that backpressure is applied to the upload stream if chunkUploadSink is too slow; likewise, any failure is propagated to the upload stream. Sink materialization is also shared with the returned Sink.
- contentType
an optional ContentType
- s3Headers
any headers you want to add
- returns
a Sink that accepts Pair of (ByteString, C) elements and materializes to a CompletionStage of MultipartUploadResult
Deprecated Value Members
- def download(bucket: String, key: String, range: ByteRange, versionId: Optional[String], s3Headers: S3Headers): Source[Optional[Pair[Source[ByteString, NotUsed], ObjectMetadata]], NotUsed]
Downloads a specific byte range of an S3 Object
- bucket
the s3 bucket name
- key
the s3 object key
- range
the ByteRange you want to download
- versionId
optional version id of the object
- s3Headers
any headers you want to add
- returns
A pekko.japi.Pair with a Source of ByteString and the ObjectMetadata, wrapped in an Optional
- Annotations
- @deprecated
- Deprecated
(Since version 4.0.0) Use S3.getObject instead
- def download(bucket: String, key: String, range: ByteRange, s3Headers: S3Headers): Source[Optional[Pair[Source[ByteString, NotUsed], ObjectMetadata]], NotUsed]
Downloads a specific byte range of an S3 Object
- bucket
the s3 bucket name
- key
the s3 object key
- range
the ByteRange you want to download
- s3Headers
any headers you want to add
- returns
A pekko.japi.Pair with a Source of ByteString and the ObjectMetadata, wrapped in an Optional
- Annotations
- @deprecated
- Deprecated
(Since version 4.0.0) Use S3.getObject instead
- def download(bucket: String, key: String, s3Headers: S3Headers): Source[Optional[Pair[Source[ByteString, NotUsed], ObjectMetadata]], NotUsed]
Downloads an S3 Object
- bucket
the s3 bucket name
- key
the s3 object key
- s3Headers
any headers you want to add
- returns
A pekko.japi.Pair with a Source of ByteString and the ObjectMetadata, wrapped in an Optional
- Annotations
- @deprecated
- Deprecated
(Since version 4.0.0) Use S3.getObject instead
- def download(bucket: String, key: String, range: ByteRange, versionId: Optional[String], sse: ServerSideEncryption): Source[Optional[Pair[Source[ByteString, NotUsed], ObjectMetadata]], NotUsed]
Downloads a specific byte range of an S3 Object
- bucket
the s3 bucket name
- key
the s3 object key
- range
the ByteRange you want to download
- versionId
optional version id of the object
- sse
the server side encryption to use
- returns
A pekko.japi.Pair with a Source of ByteString and the ObjectMetadata, wrapped in an Optional
- Annotations
- @deprecated
- Deprecated
(Since version 4.0.0) Use S3.getObject instead
- def download(bucket: String, key: String, range: ByteRange, sse: ServerSideEncryption): Source[Optional[Pair[Source[ByteString, NotUsed], ObjectMetadata]], NotUsed]
Downloads a specific byte range of an S3 Object
- bucket
the s3 bucket name
- key
the s3 object key
- range
the ByteRange you want to download
- sse
the server side encryption to use
- returns
A pekko.japi.Pair with a Source of ByteString and the ObjectMetadata, wrapped in an Optional
- Annotations
- @deprecated
- Deprecated
(Since version 4.0.0) Use S3.getObject instead
- def download(bucket: String, key: String, range: ByteRange): Source[Optional[Pair[Source[ByteString, NotUsed], ObjectMetadata]], NotUsed]
Downloads a specific byte range of an S3 Object
- bucket
the s3 bucket name
- key
the s3 object key
- range
the ByteRange you want to download
- returns
A pekko.japi.Pair with a Source of ByteString and the ObjectMetadata, wrapped in an Optional
- Annotations
- @deprecated
- Deprecated
(Since version 4.0.0) Use S3.getObject instead
- def download(bucket: String, key: String, sse: ServerSideEncryption): Source[Optional[Pair[Source[ByteString, NotUsed], ObjectMetadata]], NotUsed]
Downloads an S3 Object
- bucket
the s3 bucket name
- key
the s3 object key
- sse
the server side encryption to use
- returns
A pekko.japi.Pair with a Source of ByteString and the ObjectMetadata, wrapped in an Optional
- Annotations
- @deprecated
- Deprecated
(Since version 4.0.0) Use S3.getObject instead
- def download(bucket: String, key: String): Source[Optional[Pair[Source[ByteString, NotUsed], ObjectMetadata]], NotUsed]
Downloads an S3 Object
- bucket
the s3 bucket name
- key
the s3 object key
- returns
A pekko.japi.Pair with a Source of ByteString and the ObjectMetadata, wrapped in an Optional
- Annotations
- @deprecated
- Deprecated
(Since version 4.0.0) Use S3.getObject instead