org.apache.hadoop.io.BloomMapFile.Reader(FileSystem, String, Configuration) |
org.apache.hadoop.io.BloomMapFile.Reader(FileSystem, String, WritableComparator, Configuration) |
org.apache.hadoop.io.BloomMapFile.Reader(FileSystem, String, WritableComparator, Configuration, boolean) |
org.apache.hadoop.io.BloomMapFile.Writer(Configuration, FileSystem, String, Class<? extends WritableComparable>, Class) |
org.apache.hadoop.io.BloomMapFile.Writer(Configuration, FileSystem, String, Class<? extends WritableComparable>, Class<? extends Writable>, SequenceFile.CompressionType, CompressionCodec, Progressable) |
org.apache.hadoop.io.BloomMapFile.Writer(Configuration, FileSystem, String, Class<? extends WritableComparable>, Class, SequenceFile.CompressionType) |
org.apache.hadoop.io.BloomMapFile.Writer(Configuration, FileSystem, String, Class<? extends WritableComparable>, Class, SequenceFile.CompressionType, Progressable) |
org.apache.hadoop.io.BloomMapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class) |
org.apache.hadoop.io.BloomMapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class, SequenceFile.CompressionType) |
org.apache.hadoop.io.BloomMapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class, SequenceFile.CompressionType, CompressionCodec, Progressable) |
org.apache.hadoop.io.BloomMapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class, SequenceFile.CompressionType, Progressable) |
org.apache.hadoop.fs.shell.CommandFormat(String, int, int, String...)
Use the replacement constructor; the name parameter is unused.
|
org.apache.hadoop.fs.shell.Count(String[], int, Configuration)
|
org.apache.hadoop.hdfs.DFSClient(Configuration)
Deprecated at 0.21
|
org.apache.hadoop.hdfs.DistributedFileSystem(InetSocketAddress, Configuration) |
org.apache.hadoop.mapred.FileSplit(Path, long, long, JobConf) |
org.apache.hadoop.fs.FSDataOutputStream(OutputStream) |
org.apache.hadoop.mapreduce.Job() |
org.apache.hadoop.mapreduce.Job(Configuration) |
org.apache.hadoop.mapreduce.Job(Configuration, String) |
org.apache.hadoop.mapred.JobProfile(String, String, String, String, String)
use JobProfile(String, JobID, String, String, String) instead
|
org.apache.hadoop.mapred.JobStatus(JobID, float, float, float, float, int, JobPriority) |
org.apache.hadoop.mapred.JobStatus(JobID, float, float, float, int) |
org.apache.hadoop.mapred.JobStatus(JobID, float, float, float, int, JobPriority) |
org.apache.hadoop.mapred.JobStatus(JobID, float, float, int) |
org.apache.hadoop.mapred.LocalJobRunner(JobConf) |
org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinishedEvent(TaskAttemptID, TaskType, String, long, long, String, String, Counters)
Please use the constructor with an additional argument, an array of splits arrays, instead.
See ProgressSplitsBlock for an explanation of the meaning of that parameter.
Creates an event for the successful completion of a map attempt.
|
org.apache.hadoop.io.MapFile.Reader(FileSystem, String, Configuration) |
org.apache.hadoop.io.MapFile.Reader(FileSystem, String, WritableComparator, Configuration) |
org.apache.hadoop.io.MapFile.Writer(Configuration, FileSystem, String, Class<? extends WritableComparable>, Class)
Use Writer(Configuration, Path, Option...) instead.
|
org.apache.hadoop.io.MapFile.Writer(Configuration, FileSystem, String, Class<? extends WritableComparable>, Class, SequenceFile.CompressionType)
Use Writer(Configuration, Path, Option...) instead.
|
org.apache.hadoop.io.MapFile.Writer(Configuration, FileSystem, String, Class<? extends WritableComparable>, Class, SequenceFile.CompressionType, CompressionCodec, Progressable)
Use Writer(Configuration, Path, Option...) instead.
|
org.apache.hadoop.io.MapFile.Writer(Configuration, FileSystem, String, Class<? extends WritableComparable>, Class, SequenceFile.CompressionType, Progressable)
Use Writer(Configuration, Path, Option...) instead.
|
org.apache.hadoop.io.MapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class)
Use Writer(Configuration, Path, Option...) instead.
|
org.apache.hadoop.io.MapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class, SequenceFile.CompressionType)
Use Writer(Configuration, Path, Option...) instead.
|
org.apache.hadoop.io.MapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class, SequenceFile.CompressionType, CompressionCodec, Progressable)
Use Writer(Configuration, Path, Option...) instead.
|
org.apache.hadoop.io.MapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class, SequenceFile.CompressionType, Progressable)
Use Writer(Configuration, Path, Option...) instead.
|
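The recommended replacement, Writer(Configuration, Path, Option...), drops the FileSystem argument and passes the key and value classes as options instead of positional parameters. A minimal sketch; the path and key/value classes here are illustrative, not taken from the source:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.io.Text;

public class MapFileWriterSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path dir = new Path("/tmp/example.map"); // hypothetical output directory

        // Replaces the deprecated (Configuration, FileSystem, String, ...) forms:
        // key and value classes become Option values.
        MapFile.Writer writer = new MapFile.Writer(conf, dir,
                MapFile.Writer.keyClass(Text.class),
                MapFile.Writer.valueClass(IntWritable.class));
        try {
            writer.append(new Text("a"), new IntWritable(1)); // keys must be appended in sorted order
        } finally {
            writer.close();
        }
    }
}
```

Compression and progress reporting, which the deprecated constructors took positionally, are likewise expressed as additional options.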
org.apache.hadoop.tools.rumen.MapTaskAttemptInfo(TaskStatus.State, TaskInfo, long)
Please use the constructor with (state, taskInfo, runtime, List<List<Integer>> allSplits) instead.
See LoggedTaskAttempt for an explanation of allSplits.
If there are no known splits, use null.
|
org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinishedEvent(TaskAttemptID, TaskType, String, long, long, long, String, String, Counters)
Please use the constructor with an additional argument, an array of splits arrays, instead.
See ProgressSplitsBlock for an explanation of the meaning of that parameter.
Creates an event to record the completion of a reduce attempt.
|
org.apache.hadoop.tools.rumen.ReduceTaskAttemptInfo(TaskStatus.State, TaskInfo, long, long, long)
Please use the constructor with (state, taskInfo, shuffleTime, mergeTime, reduceTime, List<List<Integer>> allSplits) instead.
See LoggedTaskAttempt for an explanation of allSplits.
If there are no known splits, use null.
|
org.apache.hadoop.io.SequenceFile.Reader(FileSystem, Path, Configuration)
Use Reader(Configuration, Option...) instead.
|
org.apache.hadoop.io.SequenceFile.Reader(FSDataInputStream, int, long, long, Configuration)
Use Reader(Configuration, Reader.Option...) instead.
|
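The replacement Reader(Configuration, Reader.Option...) follows the same pattern: the input is supplied via the Reader.file option rather than a FileSystem/Path pair. A sketch, assuming a sequence file already exists at the (illustrative) path:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SequenceFileReaderSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path path = new Path("/tmp/example.seq"); // hypothetical input file

        // Replaces Reader(FileSystem, Path, Configuration): the source is an option.
        SequenceFile.Reader reader =
                new SequenceFile.Reader(conf, SequenceFile.Reader.file(path));
        try {
            Text key = new Text();           // key type must match the file's key class
            IntWritable value = new IntWritable();
            while (reader.next(key, value)) {
                System.out.println(key + "\t" + value);
            }
        } finally {
            reader.close();
        }
    }
}
```

The stream-based deprecated form maps onto the same API via options such as Reader.stream, Reader.start, and Reader.length.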
org.apache.hadoop.io.SequenceFile.Writer(FileSystem, Configuration, Path, Class, Class)
|
org.apache.hadoop.io.SequenceFile.Writer(FileSystem, Configuration, Path, Class, Class, int, short, long, Progressable, SequenceFile.Metadata)
|
org.apache.hadoop.io.SequenceFile.Writer(FileSystem, Configuration, Path, Class, Class, Progressable, SequenceFile.Metadata)
|
org.apache.hadoop.io.SetFile.Writer(FileSystem, String, Class<? extends WritableComparable>)
Pass a Configuration too.
|
org.apache.hadoop.streaming.StreamJob(String[], boolean)
|
org.apache.hadoop.mapreduce.TaskAttemptID(String, int, boolean, int, int) |
org.apache.hadoop.mapred.TaskAttemptID(String, int, boolean, int, int)
|
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletionEvent(TaskAttemptID, TaskType, String, long, String, String)
Please use the constructor with an additional argument, an array of splits arrays, instead.
See ProgressSplitsBlock for an explanation of the meaning of that parameter.
Creates an event to record the unsuccessful completion of an attempt.
|
org.apache.hadoop.mapreduce.TaskID(JobID, boolean, int) |
org.apache.hadoop.mapred.TaskID(JobID, boolean, int)
|
org.apache.hadoop.mapreduce.TaskID(String, int, boolean, int) |
org.apache.hadoop.mapred.TaskID(String, int, boolean, int)
|
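The deprecated TaskID and TaskAttemptID constructors above take a boolean isMap flag; the non-deprecated replacements take an explicit TaskType instead. A sketch with illustrative identifier strings and numbers:

```java
import org.apache.hadoop.mapreduce.JobID;
import org.apache.hadoop.mapreduce.TaskAttemptID;
import org.apache.hadoop.mapreduce.TaskID;
import org.apache.hadoop.mapreduce.TaskType;

public class TaskIdSketch {
    public static void main(String[] args) {
        // Instead of TaskID(JobID, boolean, int): the isMap flag
        // becomes a TaskType value.
        JobID jobId = new JobID("20210101", 1); // hypothetical jobtracker id and job number
        TaskID taskId = new TaskID(jobId, TaskType.MAP, 3);

        // Instead of TaskAttemptID(String, int, boolean, int, int):
        TaskAttemptID attemptId =
                new TaskAttemptID("20210101", 1, TaskType.REDUCE, 3, 0);

        System.out.println(taskId);
        System.out.println(attemptId);
    }
}
```

The org.apache.hadoop.mapred variants have matching TaskType-based replacements.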
org.apache.hadoop.ipc.WritableRpcEngine.Server(Object, Configuration, String, int)
Use Server(Class, Object, Configuration, String, int) instead.
|
org.apache.hadoop.ipc.WritableRpcEngine.Server(Object, Configuration, String, int, int, int, int, boolean, SecretManager<? extends TokenIdentifier>)
Use Server(Class, Object, Configuration, String, int, int, int, int, boolean, SecretManager) instead.
|