object HdfsFlow
- Source
 - HdfsFlow.scala
 
- Linear Supertypes
 - AnyRef, Any

Value Members
-   final  def !=(arg0: Any): Boolean
- Definition Classes
 - AnyRef → Any
 
 -   final  def ##: Int
- Definition Classes
 - AnyRef → Any
 
 -   final  def ==(arg0: Any): Boolean
- Definition Classes
 - AnyRef → Any
 
 -   final  def asInstanceOf[T0]: T0
- Definition Classes
 - Any
 
 -    def clone(): AnyRef
- Attributes
 - protected[lang]
 - Definition Classes
 - AnyRef
 - Annotations
 - @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
 
 -    def compressed(fs: FileSystem, syncStrategy: SyncStrategy, rotationStrategy: RotationStrategy, compressionCodec: CompressionCodec, settings: HdfsWritingSettings): Flow[HdfsWriteMessage[ByteString, NotUsed], RotationMessage, NotUsed]
Scala API: creates a Flow for org.apache.hadoop.io.compress.CompressionOutputStream
- fs
 Hadoop file system
- syncStrategy
 sync strategy
- rotationStrategy
 rotation strategy
- compressionCodec
 a streaming compression/decompression pair
- settings
 hdfs writing settings
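
A minimal sketch of building such a compressed flow. The HDFS URI, codec choice (DefaultCodec) and strategy values are illustrative assumptions, not part of this API:

```scala
import akka.stream.alpakka.hdfs.scaladsl.HdfsFlow
import akka.stream.alpakka.hdfs.{FileUnit, HdfsWritingSettings, RotationStrategy, SyncStrategy}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.io.compress.DefaultCodec

// Hypothetical Hadoop setup; the URI is an assumption for illustration only.
val conf = new Configuration()
conf.set("fs.defaultFS", "hdfs://localhost:54310")
val fs: FileSystem = FileSystem.get(conf)

// The codec needs the Hadoop configuration before it is handed to the flow.
val codec = new DefaultCodec()
codec.setConf(fs.getConf)

val compressedFlow =
  HdfsFlow.compressed(
    fs,
    SyncStrategy.count(50),                  // sync the output after every 50 messages
    RotationStrategy.size(0.5, FileUnit.MB), // rotate to a new file at roughly 0.5 MB
    codec,
    HdfsWritingSettings()                    // default writing settings
  )
```
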
 -    def compressedWithPassThrough[P](fs: FileSystem, syncStrategy: SyncStrategy, rotationStrategy: RotationStrategy, compressionCodec: CompressionCodec, settings: HdfsWritingSettings): Flow[HdfsWriteMessage[ByteString, P], OutgoingMessage[P], NotUsed]
Scala API: creates a Flow for org.apache.hadoop.io.compress.CompressionOutputStream with passThrough of type P
- fs
 Hadoop file system
- syncStrategy
 sync strategy
- rotationStrategy
 rotation strategy
- compressionCodec
 a streaming compression/decompression pair
- settings
 hdfs writing settings
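
For the pass-through variants, a sketch of carrying an offset alongside each payload and recovering it from the emitted messages. The offset type (Long), the sample records and the Hadoop setup are assumptions, and the WrittenMessage shape is taken from the Alpakka HDFS model classes:

```scala
import akka.actor.ActorSystem
import akka.stream.alpakka.hdfs.scaladsl.HdfsFlow
import akka.stream.alpakka.hdfs.{FileUnit, HdfsWriteMessage, HdfsWritingSettings, RotationStrategy, SyncStrategy, WrittenMessage}
import akka.stream.scaladsl.{Sink, Source}
import akka.util.ByteString
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.io.compress.DefaultCodec

implicit val system: ActorSystem = ActorSystem("hdfs-docs")

val fs: FileSystem = FileSystem.get(new Configuration()) // relies on core-site.xml / defaults
val codec = new DefaultCodec()
codec.setConf(fs.getConf)

// Hypothetical input: payloads paired with an offset we want back after the write.
val records = List(1L -> "a", 2L -> "b", 3L -> "c")

val flow = HdfsFlow.compressedWithPassThrough[Long](
  fs,
  SyncStrategy.count(10),
  RotationStrategy.size(1, FileUnit.MB),
  codec,
  HdfsWritingSettings()
)

val writtenOffsets =
  Source(records)
    .map { case (offset, line) => HdfsWriteMessage(ByteString(line), offset) }
    .via(flow)
    // WrittenMessage carries the passThrough back; RotationMessages are filtered out here.
    .collect { case WrittenMessage(offset, _) => offset }
    .runWith(Sink.seq)
```
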
 -    def data(fs: FileSystem, syncStrategy: SyncStrategy, rotationStrategy: RotationStrategy, settings: HdfsWritingSettings): Flow[HdfsWriteMessage[ByteString, NotUsed], RotationMessage, NotUsed]
Scala API: creates a Flow for org.apache.hadoop.fs.FSDataOutputStream
- fs
 Hadoop file system
- syncStrategy
 sync strategy
- rotationStrategy
 rotation strategy
- settings
 hdfs writing settings
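
A minimal end-to-end sketch for the uncompressed data flow, close to the pattern in the Alpakka documentation. The strategy values and sample lines are made up, and the file system is resolved from the default Hadoop configuration:

```scala
import akka.actor.ActorSystem
import akka.stream.alpakka.hdfs.scaladsl.HdfsFlow
import akka.stream.alpakka.hdfs.{FileUnit, HdfsWriteMessage, HdfsWritingSettings, RotationStrategy, SyncStrategy}
import akka.stream.scaladsl.{Sink, Source}
import akka.util.ByteString
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem

implicit val system: ActorSystem = ActorSystem("hdfs-docs")

val fs: FileSystem = FileSystem.get(new Configuration()) // relies on core-site.xml / defaults

val dataFlow = HdfsFlow.data(
  fs,
  SyncStrategy.count(500),
  RotationStrategy.size(1, FileUnit.GB),
  HdfsWritingSettings().withNewLine(true) // append a newline after each element
)

// Each element is wrapped in an HdfsWriteMessage; the flow emits one RotationMessage per rotated file.
val rotations =
  Source(List("first line", "second line"))
    .map(line => HdfsWriteMessage(ByteString(line)))
    .via(dataFlow)
    .runWith(Sink.seq)
```
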
 -    def dataWithPassThrough[P](fs: FileSystem, syncStrategy: SyncStrategy, rotationStrategy: RotationStrategy, settings: HdfsWritingSettings): Flow[HdfsWriteMessage[ByteString, P], OutgoingMessage[P], NotUsed]
Scala API: creates a Flow for org.apache.hadoop.fs.FSDataOutputStream with passThrough of type P
- fs
 Hadoop file system
- syncStrategy
 sync strategy
- rotationStrategy
 rotation strategy
- settings
 hdfs writing settings
 -   final  def eq(arg0: AnyRef): Boolean
- Definition Classes
 - AnyRef
 
 -    def equals(arg0: AnyRef): Boolean
- Definition Classes
 - AnyRef → Any
 
 -   final  def getClass(): Class[_ <: AnyRef]
- Definition Classes
 - AnyRef → Any
 - Annotations
 - @IntrinsicCandidate() @native()
 
 -    def hashCode(): Int
- Definition Classes
 - AnyRef → Any
 - Annotations
 - @IntrinsicCandidate() @native()
 
 -   final  def isInstanceOf[T0]: Boolean
- Definition Classes
 - Any
 
 -   final  def ne(arg0: AnyRef): Boolean
- Definition Classes
 - AnyRef
 
 -   final  def notify(): Unit
- Definition Classes
 - AnyRef
 - Annotations
 - @IntrinsicCandidate() @native()
 
 -   final  def notifyAll(): Unit
- Definition Classes
 - AnyRef
 - Annotations
 - @IntrinsicCandidate() @native()
 
 -    def sequence[K <: Writable, V <: Writable](fs: FileSystem, syncStrategy: SyncStrategy, rotationStrategy: RotationStrategy, compressionType: CompressionType, compressionCodec: CompressionCodec, settings: HdfsWritingSettings, classK: Class[K], classV: Class[V]): Flow[HdfsWriteMessage[(K, V), NotUsed], RotationMessage, NotUsed]
Scala API: creates a Flow for org.apache.hadoop.io.SequenceFile.Writer with a compression
- fs
 Hadoop file system
- syncStrategy
 sync strategy
- rotationStrategy
 rotation strategy
- compressionType
 a compression type used to compress key/value pairs in the SequenceFile
- compressionCodec
 a streaming compression/decompression pair
- settings
 hdfs writing settings
- classK
 a key class
- classV
 a value class
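
A sketch of a compressed SequenceFile flow keyed and valued by org.apache.hadoop.io.Text. The compression type, codec, strategies and Hadoop setup are illustrative choices:

```scala
import akka.stream.alpakka.hdfs.scaladsl.HdfsFlow
import akka.stream.alpakka.hdfs.{FileUnit, HdfsWritingSettings, RotationStrategy, SyncStrategy}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.io.SequenceFile.CompressionType
import org.apache.hadoop.io.Text
import org.apache.hadoop.io.compress.DefaultCodec

val fs: FileSystem = FileSystem.get(new Configuration())
val codec = new DefaultCodec()
codec.setConf(fs.getConf)

// Key/value pairs are written as Text -> Text, block-compressed with the default codec.
val sequenceFlow = HdfsFlow.sequence(
  fs,
  SyncStrategy.none,
  RotationStrategy.size(1, FileUnit.MB),
  CompressionType.BLOCK,
  codec,
  HdfsWritingSettings(),
  classOf[Text],
  classOf[Text]
)
```
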
 -    def sequence[K <: Writable, V <: Writable](fs: FileSystem, syncStrategy: SyncStrategy, rotationStrategy: RotationStrategy, settings: HdfsWritingSettings, classK: Class[K], classV: Class[V]): Flow[HdfsWriteMessage[(K, V), NotUsed], RotationMessage, NotUsed]
Scala API: creates a Flow for org.apache.hadoop.io.SequenceFile.Writer without a compression
- fs
 Hadoop file system
- syncStrategy
 sync strategy
- rotationStrategy
 rotation strategy
- settings
 hdfs writing settings
- classK
 a key class
- classV
 a value class
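
The uncompressed variant differs only in the missing compression arguments. A sketch of feeding it, where the key/value pairs and strategy values are made up and the file system comes from the default configuration:

```scala
import akka.actor.ActorSystem
import akka.stream.alpakka.hdfs.scaladsl.HdfsFlow
import akka.stream.alpakka.hdfs.{HdfsWriteMessage, HdfsWritingSettings, RotationStrategy, SyncStrategy}
import akka.stream.scaladsl.{Sink, Source}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.io.Text

implicit val system: ActorSystem = ActorSystem("hdfs-docs")
val fs: FileSystem = FileSystem.get(new Configuration())

val flow = HdfsFlow.sequence(
  fs,
  SyncStrategy.none,
  RotationStrategy.count(1000), // start a new SequenceFile every 1000 records
  HdfsWritingSettings(),
  classOf[Text],
  classOf[Text]
)

// Elements of a sequence flow are key/value tuples wrapped in HdfsWriteMessage.
val done =
  Source(List("k1" -> "v1", "k2" -> "v2"))
    .map { case (k, v) => HdfsWriteMessage(new Text(k) -> new Text(v)) }
    .via(flow)
    .runWith(Sink.ignore)
```
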
 -    def sequenceWithPassThrough[K <: Writable, V <: Writable, P](fs: FileSystem, syncStrategy: SyncStrategy, rotationStrategy: RotationStrategy, compressionType: CompressionType, compressionCodec: CompressionCodec, settings: HdfsWritingSettings, classK: Class[K], classV: Class[V]): Flow[HdfsWriteMessage[(K, V), P], OutgoingMessage[P], NotUsed]
Scala API: creates a Flow for org.apache.hadoop.io.SequenceFile.Writer with passThrough of type P and a compression
- fs
 Hadoop file system
- syncStrategy
 sync strategy
- rotationStrategy
 rotation strategy
- compressionType
 a compression type used to compress key/value pairs in the SequenceFile
- compressionCodec
 a streaming compression/decompression pair
- settings
 hdfs writing settings
- classK
 a key class
- classV
 a value class
 -    def sequenceWithPassThrough[K <: Writable, V <: Writable, P](fs: FileSystem, syncStrategy: SyncStrategy, rotationStrategy: RotationStrategy, settings: HdfsWritingSettings, classK: Class[K], classV: Class[V]): Flow[HdfsWriteMessage[(K, V), P], OutgoingMessage[P], NotUsed]
Scala API: creates a Flow for org.apache.hadoop.io.SequenceFile.Writer with passThrough of type P and without a compression
- fs
 Hadoop file system
- syncStrategy
 sync strategy
- rotationStrategy
 rotation strategy
- settings
 hdfs writing settings
- classK
 a key class
- classV
 a value class
 -   final  def synchronized[T0](arg0: => T0): T0
- Definition Classes
 - AnyRef
 
 -    def toString(): String
- Definition Classes
 - AnyRef → Any
 
 -   final  def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
 - AnyRef
 - Annotations
 - @throws(classOf[java.lang.InterruptedException])
 
 -   final  def wait(arg0: Long): Unit
- Definition Classes
 - AnyRef
 - Annotations
 - @throws(classOf[java.lang.InterruptedException]) @native()
 
 -   final  def wait(): Unit
- Definition Classes
 - AnyRef
 - Annotations
 - @throws(classOf[java.lang.InterruptedException])