public interface HadoopShims
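Callers normally obtain a version-specific implementation of this interface rather than implementing it themselves. A minimal sketch, assuming Hive's usual ShimLoader.getHadoopShims() entry point (which is not documented on this page); only methods listed below are called on the shim:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.shims.HadoopShims;
import org.apache.hadoop.hive.shims.ShimLoader;

public class ShimExample {
  public static void main(String[] args) {
    // Assumption: ShimLoader picks the HadoopShims implementation that
    // matches the Hadoop version found on the classpath.
    HadoopShims shims = ShimLoader.getHadoopShims();

    Configuration conf = new Configuration();
    // Version-independent questions about the running cluster go through the shim.
    System.out.println("local mode: " + shims.isLocalMode(conf));
    System.out.println("security enabled: " + shims.isSecurityEnabled());
    System.out.println("job launcher RPC: " + shims.getJobLauncherRpcAddress(conf));
  }
}
```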
Modifier and Type | Interface and Description |
---|---|
static interface | HadoopShims.CombineFileInputFormatShim<K,V> - CombineFileInputFormatShim. |
static interface | HadoopShims.HCatHadoopShims |
static interface | HadoopShims.InputSplitShim - InputSplitShim. |
static class | HadoopShims.JobTrackerState |
static interface | HadoopShims.MiniDFSShim - Shim around the functions in MiniDFSCluster that Hive uses. |
static interface | HadoopShims.MiniMrShim - Shim for MiniMrCluster. |
static interface | HadoopShims.WebHCatJTShim |
Modifier and Type | Field and Description |
---|---|
static org.apache.commons.logging.Log | LOG |
Modifier and Type | Method and Description |
---|---|
void | closeAllForUGI(UserGroupInformation ugi) |
int | compareText(Text a, Text b) - We define this function here to make the code compatible between hadoop 0.17 and hadoop 0.20. |
Path | createDelegationTokenFile(Configuration conf) - Get a delegation token from the filesystem and write it, along with metastore tokens, into a file. |
int | createHadoopArchive(Configuration conf, Path parentDir, Path destDir, java.lang.String archiveName) |
UserGroupInformation | createProxyUser(java.lang.String userName) - Create the proxy UGI for the given userid. |
UserGroupInformation | createRemoteUser(java.lang.String userName, java.util.List<java.lang.String> groupNames) - Used by the metastore server to create a UGI object for a remote user. |
<T> T | doAs(UserGroupInformation ugi, java.security.PrivilegedExceptionAction<T> pvea) - Used by the metastore server to perform the requested RPC in the client's context. |
boolean | fileSystemDeleteOnExit(FileSystem fs, Path path) - Calls fs.deleteOnExit(path) if such a function exists. |
long | getAccessTime(FileStatus file) - Return the last access time of the given file. |
HadoopShims.CombineFileInputFormatShim | getCombineFileInputFormat() |
long | getDefaultBlockSize(FileSystem fs, Path path) - Get the default block size for the path. |
short | getDefaultReplication(FileSystem fs, Path path) - Get the default replication for a path. |
java.net.URI | getHarUri(java.net.URI original, java.net.URI base, java.net.URI originalBase) |
HadoopShims.HCatHadoopShims | getHCatShim() |
java.lang.String | getInputFormatClassName() |
java.lang.String | getJobLauncherHttpAddress(Configuration conf) - All references to the jobtracker/resource manager HTTP address in the configuration should go through this shim. |
java.lang.String | getJobLauncherRpcAddress(Configuration conf) - All retrieval of the jobtracker/resource manager RPC address from the configuration should go through this shim. |
HadoopShims.JobTrackerState | getJobTrackerState(ClusterStatus clusterStatus) - Convert the ClusterStatus to its Thrift equivalent, JobTrackerState. |
HadoopShims.MiniDFSShim | getMiniDfs(Configuration conf, int numDataNodes, boolean format, java.lang.String[] racks) - Returns a shim to wrap MiniDFSCluster. |
HadoopShims.MiniMrShim | getMiniMrCluster(Configuration conf, int numberOfTaskTrackers, java.lang.String nameNode, int numDir) - Returns a shim to wrap MiniMrCluster. |
java.lang.String | getShortUserName(UserGroupInformation ugi) - Get the short name corresponding to the subject in the passed UGI. In secure versions of Hadoop, this returns the short name (after undergoing the translation in the Kerberos name rule mapping). |
java.lang.String | getTaskAttemptLogUrl(JobConf conf, java.lang.String taskTrackerHttpAddress, java.lang.String taskAttemptId) - Constructs and returns the task attempt log URL, or null if the TaskLogServlet is not available. |
java.lang.String[] | getTaskJobIDs(TaskCompletionEvent t) - getTaskJobIDs returns an array of String with two elements. |
java.lang.String | getTokenFileLocEnvName() - Once a delegation token is stored in a file, its location is passed to a child process that runs hadoop operations through an environment variable. |
java.lang.String | getTokenStrForm(java.lang.String tokenSignature) - Get the string form of the token given a token signature. |
UserGroupInformation | getUGIForConf(Configuration conf) - Get the UGI that the given job configuration will run as. |
HadoopShims.WebHCatJTShim | getWebHCatShim(Configuration conf, UserGroupInformation ugi) - Provides a Hadoop JobTracker shim. |
void | inputFormatValidateInput(InputFormat fmt, JobConf conf) - Calls fmt.validateInput(conf) if such a function exists. |
boolean | isJobPreparing(RunningJob job) - Return true if the job has not switched to RUNNING state yet and is still in PREP state. |
boolean | isLocalMode(Configuration conf) - Check whether MR is configured to run in local mode. |
boolean | isSecureShimImpl() - Return true if the shim is based on Hadoop Security APIs. |
boolean | isSecurityEnabled() - Return true if the hadoop configuration has security enabled. |
void | loginUserFromKeytab(java.lang.String principal, java.lang.String keytabFile) - Perform Kerberos login using the given principal and keytab. |
boolean | moveToAppropriateTrash(FileSystem fs, Path path, Configuration conf) - Move the directory/file to trash. |
JobContext | newJobContext(Job job) |
TaskAttemptContext | newTaskAttemptContext(Configuration conf, Progressable progressable) |
void | prepareJobOutput(JobConf conf) - Hive uses side-effect files exclusively for its output. |
void | reLoginUserFromKeytab() - Perform Kerberos re-login using the previously supplied principal and keytab, to renew the credentials. |
void | setFloatConf(Configuration conf, java.lang.String varName, float val) - Wrapper for Configuration.setFloat, which was not introduced until 0.20. |
void | setJobLauncherRpcAddress(Configuration conf, java.lang.String val) - All updates to the jobtracker/resource manager RPC address in the configuration should go through this shim. |
void | setTmpFiles(java.lang.String prop, java.lang.String files) - If JobClient.getCommandLineConfig exists, sets the given property/value pair in that Configuration object. |
void | setTokenStr(UserGroupInformation ugi, java.lang.String tokenStr, java.lang.String tokenService) - Add a delegation token to the given UGI. |
void | setTotalOrderPartitionFile(JobConf jobConf, Path partition) - The method used to set the partition file has a different signature between hadoop versions. |
java.lang.String | unquoteHtmlChars(java.lang.String item) - Used by TaskLogProcessor to remove HTML quoting from a string. |
boolean | usesJobShell() - Return true if the current version of Hadoop uses the JobShell for command line interpretation. |
boolean usesJobShell()
java.lang.String getTaskAttemptLogUrl(JobConf conf, java.lang.String taskTrackerHttpAddress, java.lang.String taskAttemptId) throws java.net.MalformedURLException
Throws: java.net.MalformedURLException

boolean isJobPreparing(RunningJob job) throws java.io.IOException
Throws: java.io.IOException

boolean fileSystemDeleteOnExit(FileSystem fs, Path path) throws java.io.IOException
Throws: java.io.IOException

void inputFormatValidateInput(InputFormat fmt, JobConf conf) throws java.io.IOException
Throws: java.io.IOException

void setTmpFiles(java.lang.String prop, java.lang.String files)

long getAccessTime(FileStatus file)
Parameters: file

HadoopShims.MiniMrShim getMiniMrCluster(Configuration conf, int numberOfTaskTrackers, java.lang.String nameNode, int numDir) throws java.io.IOException
Throws: java.io.IOException

HadoopShims.MiniDFSShim getMiniDfs(Configuration conf, int numDataNodes, boolean format, java.lang.String[] racks) throws java.io.IOException
Throws: java.io.IOException
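A minimal test-setup sketch for the two mini-cluster factory methods above. The ShimLoader entry point and the getFileSystem()/shutdown() calls on the returned shims are assumptions, since those members are documented on the nested interfaces rather than on this page:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hive.shims.HadoopShims;
import org.apache.hadoop.hive.shims.ShimLoader;

public class MiniClusterSketch {
  public static void main(String[] args) throws Exception {
    HadoopShims shims = ShimLoader.getHadoopShims();
    Configuration conf = new Configuration();

    // Four datanodes, freshly formatted, default rack assignment.
    HadoopShims.MiniDFSShim dfs = shims.getMiniDfs(conf, 4, true, null);
    FileSystem fs = dfs.getFileSystem();   // assumed MiniDFSShim method

    // Two task trackers pointed at the mini DFS namenode, one local dir.
    HadoopShims.MiniMrShim mr =
        shims.getMiniMrCluster(conf, 2, fs.getUri().toString(), 1);

    // ... run tests against the wrapped clusters ...

    mr.shutdown();   // assumed MiniMrShim method
    dfs.shutdown();  // assumed MiniDFSShim method
  }
}
```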
int compareText(Text a, Text b)
HadoopShims.CombineFileInputFormatShim getCombineFileInputFormat()
java.lang.String getInputFormatClassName()
void setFloatConf(Configuration conf, java.lang.String varName, float val)
java.lang.String[] getTaskJobIDs(TaskCompletionEvent t)
int createHadoopArchive(Configuration conf, Path parentDir, Path destDir, java.lang.String archiveName) throws java.lang.Exception
Throws: java.lang.Exception

java.net.URI getHarUri(java.net.URI original, java.net.URI base, java.net.URI originalBase) throws java.net.URISyntaxException
Throws: java.net.URISyntaxException
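A hedged sketch of archiving a directory and then mapping a pre-archive file URI into the resulting archive. All paths are illustrative, and the interpretation of base/originalBase is our reading of the signature, not something this page states:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.shims.HadoopShims;
import org.apache.hadoop.hive.shims.ShimLoader;

public class ArchiveSketch {
  public static void main(String[] args) throws Exception {
    HadoopShims shims = ShimLoader.getHadoopShims();
    Configuration conf = new Configuration();

    // Hypothetical partition directory to archive in place.
    Path parentDir = new Path("/warehouse/tbl/ds=2012-01-01");
    Path destDir = new Path("/warehouse/tbl/ds=2012-01-01");
    int rc = shims.createHadoopArchive(conf, parentDir, destDir, "data.har");
    if (rc != 0) {
      throw new RuntimeException("hadoop archive failed with exit code " + rc);
    }

    // Rewrite a pre-archive file URI so it resolves inside the new archive.
    URI original = new URI("hdfs:/warehouse/tbl/ds=2012-01-01/part-00000");
    URI base = new URI("har:/warehouse/tbl/ds=2012-01-01/data.har");
    URI originalBase = new URI("hdfs:/warehouse/tbl/ds=2012-01-01");
    System.out.println(shims.getHarUri(original, base, originalBase));
  }
}
```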
void prepareJobOutput(JobConf conf)
java.lang.String unquoteHtmlChars(java.lang.String item)
Parameters: item - the string to unquote

void closeAllForUGI(UserGroupInformation ugi)

UserGroupInformation getUGIForConf(Configuration conf) throws javax.security.auth.login.LoginException, java.io.IOException
Throws: javax.security.auth.login.LoginException, java.io.IOException

<T> T doAs(UserGroupInformation ugi, java.security.PrivilegedExceptionAction<T> pvea) throws java.io.IOException, java.lang.InterruptedException
Type Parameters: T
Parameters: ugi, pvea
Throws: java.io.IOException, java.lang.InterruptedException
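A sketch of the impersonation pattern that createProxyUser, doAs, and closeAllForUGI support. The user name and path are illustrative, proxy-user privileges are assumed to be configured on the Hadoop side, and ShimLoader.getHadoopShims() is assumed as the entry point:

```java
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.shims.HadoopShims;
import org.apache.hadoop.hive.shims.ShimLoader;
import org.apache.hadoop.security.UserGroupInformation;

public class DoAsSketch {
  public static void main(String[] args) throws Exception {
    HadoopShims shims = ShimLoader.getHadoopShims();
    final Configuration conf = new Configuration();

    // "alice" is an illustrative end-user name.
    UserGroupInformation ugi = shims.createProxyUser("alice");
    try {
      Boolean exists = shims.doAs(ugi, new PrivilegedExceptionAction<Boolean>() {
        public Boolean run() throws Exception {
          // Everything in here runs with alice's identity.
          FileSystem fs = FileSystem.get(conf);
          return fs.exists(new Path("/user/alice"));
        }
      });
      System.out.println("home dir exists: " + exists);
    } finally {
      // Drop any FileSystem handles cached for the impersonated user.
      shims.closeAllForUGI(ugi);
    }
  }
}
```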
java.lang.String getTokenFileLocEnvName()
Path createDelegationTokenFile(Configuration conf) throws java.io.IOException
Parameters: conf
Throws: java.io.IOException

UserGroupInformation createRemoteUser(java.lang.String userName, java.util.List<java.lang.String> groupNames)
Parameters: userName - remote user name, groupNames - group names associated with the remote user name

java.lang.String getShortUserName(UserGroupInformation ugi)
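A short sketch of createRemoteUser and getShortUserName as a metastore-style server might use them; the names and groups are illustrative placeholders:

```java
import java.util.Arrays;
import org.apache.hadoop.hive.shims.HadoopShims;
import org.apache.hadoop.hive.shims.ShimLoader;
import org.apache.hadoop.security.UserGroupInformation;

public class RemoteUserSketch {
  public static void main(String[] args) {
    HadoopShims shims = ShimLoader.getHadoopShims();

    // Build a UGI for a user name reported by a remote client.
    UserGroupInformation remote =
        shims.createRemoteUser("bob@EXAMPLE.COM", Arrays.asList("analysts"));

    // On a secure cluster the Kerberos name rules apply, so a principal like
    // the one above is expected to map to the short name "bob".
    System.out.println(shims.getShortUserName(remote));
  }
}
```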
boolean isSecureShimImpl()
boolean isSecurityEnabled()
java.lang.String getTokenStrForm(java.lang.String tokenSignature) throws java.io.IOException
Parameters: tokenSignature
Throws: java.io.IOException

void setTokenStr(UserGroupInformation ugi, java.lang.String tokenStr, java.lang.String tokenService) throws java.io.IOException
Parameters: ugi, tokenStr, tokenService
Throws: java.io.IOException
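A sketch tying the delegation-token helpers together (getTokenStrForm, setTokenStr, createDelegationTokenFile, getTokenFileLocEnvName); the token signature and service name are illustrative placeholders, and ShimLoader.getHadoopShims() is assumed:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.shims.HadoopShims;
import org.apache.hadoop.hive.shims.ShimLoader;
import org.apache.hadoop.security.UserGroupInformation;

public class TokenSketch {
  public static void main(String[] args) throws Exception {
    HadoopShims shims = ShimLoader.getHadoopShims();
    Configuration conf = new Configuration();

    // Look up the string form of a token stored under a signature.
    String tokenStr = shims.getTokenStrForm("hiveMetastoreToken");

    if (tokenStr != null) {
      // Attach the token to the current UGI under a service name of our
      // choosing so that later RPCs can authenticate with it.
      UserGroupInformation ugi = shims.getUGIForConf(conf);
      shims.setTokenStr(ugi, tokenStr, "hiveMetastoreToken");
    }

    // Or persist filesystem + metastore tokens to a file and hand the location
    // to child hadoop processes through the documented environment variable.
    System.out.println(shims.getTokenFileLocEnvName() + "="
        + shims.createDelegationTokenFile(conf));
  }
}
```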
HadoopShims.JobTrackerState getJobTrackerState(ClusterStatus clusterStatus) throws java.lang.Exception
Parameters: clusterStatus
Throws: java.lang.Exception - if no equivalent JobTrackerState exists

TaskAttemptContext newTaskAttemptContext(Configuration conf, Progressable progressable)

JobContext newJobContext(Job job)

boolean isLocalMode(Configuration conf)
Parameters: conf

java.lang.String getJobLauncherRpcAddress(Configuration conf)
Parameters: conf

void setJobLauncherRpcAddress(Configuration conf, java.lang.String val)
Parameters: conf

java.lang.String getJobLauncherHttpAddress(Configuration conf)
Parameters: conf
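A sketch of reading and rewriting the job-launcher address through the shim instead of touching version-specific configuration keys directly; the address written at the end is illustrative:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.shims.HadoopShims;
import org.apache.hadoop.hive.shims.ShimLoader;

public class JobLauncherSketch {
  public static void main(String[] args) {
    HadoopShims shims = ShimLoader.getHadoopShims();
    Configuration conf = new Configuration();

    // The shim decides whether the underlying key belongs to the JobTracker
    // or the ResourceManager for the Hadoop version in use.
    String rpc = shims.getJobLauncherRpcAddress(conf);
    String http = shims.getJobLauncherHttpAddress(conf);

    if (shims.isLocalMode(conf)) {
      System.out.println("running MR jobs locally");
    } else {
      System.out.println("submitting to " + rpc + " (web UI at " + http + ")");
    }

    // Redirect submission, e.g. toward a test cluster.
    shims.setJobLauncherRpcAddress(conf, "localhost:8032");
  }
}
```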
void loginUserFromKeytab(java.lang.String principal, java.lang.String keytabFile) throws java.io.IOException
Throws: java.io.IOException

void reLoginUserFromKeytab() throws java.io.IOException
Throws: java.io.IOException
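A sketch of keytab-based login for a long-running service; the principal and keytab path are placeholders:

```java
import java.io.IOException;
import org.apache.hadoop.hive.shims.HadoopShims;
import org.apache.hadoop.hive.shims.ShimLoader;

public class KeytabLoginSketch {
  public static void main(String[] args) throws IOException {
    HadoopShims shims = ShimLoader.getHadoopShims();

    if (shims.isSecurityEnabled()) {
      // Principal and keytab path are illustrative.
      shims.loginUserFromKeytab("hive/host.example.com@EXAMPLE.COM",
          "/etc/security/keytabs/hive.service.keytab");

      // A long-running service would call this periodically (e.g. from a
      // timer) to renew the credentials obtained from the keytab.
      shims.reLoginUserFromKeytab();
    }
  }
}
```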
boolean moveToAppropriateTrash(FileSystem fs, Path path, Configuration conf) throws java.io.IOException
Parameters: fs, path, conf
Throws: java.io.IOException
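A sketch of moveToAppropriateTrash for cleaning up a directory; the path is illustrative and the trash destination depends on whether the Hadoop trash feature is enabled:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.shims.HadoopShims;
import org.apache.hadoop.hive.shims.ShimLoader;

public class TrashSketch {
  public static void main(String[] args) throws IOException {
    HadoopShims shims = ShimLoader.getHadoopShims();
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Hypothetical directory to discard.
    Path stale = new Path("/warehouse/tbl/stale_data");
    boolean moved = shims.moveToAppropriateTrash(fs, stale, conf);
    System.out.println(moved ? "moved to trash" : "not moved to trash");
  }
}
```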
long getDefaultBlockSize(FileSystem fs, Path path)
Parameters: fs, path

short getDefaultReplication(FileSystem fs, Path path)
Parameters: fs, path

UserGroupInformation createProxyUser(java.lang.String userName) throws java.io.IOException
Parameters: userName
Throws: java.io.IOException
void setTotalOrderPartitionFile(JobConf jobConf, Path partition)
Parameters: jobConf, partition
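A sketch of setTotalOrderPartitionFile, which hides the per-version signature difference noted in the summary above; the partition file path is illustrative:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.shims.HadoopShims;
import org.apache.hadoop.hive.shims.ShimLoader;
import org.apache.hadoop.mapred.JobConf;

public class TotalOrderSketch {
  public static void main(String[] args) {
    HadoopShims shims = ShimLoader.getHadoopShims();
    JobConf job = new JobConf();

    // Point the total-order partitioner at a pre-computed partition keys file;
    // the shim handles whichever setter the running Hadoop version exposes.
    shims.setTotalOrderPartitionFile(job, new Path("/tmp/_partitions.lst"));
  }
}
```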
HadoopShims.HCatHadoopShims getHCatShim()
HadoopShims.WebHCatJTShim getWebHCatShim(Configuration conf, UserGroupInformation ugi) throws java.io.IOException
Parameters: conf - not null
Throws: java.io.IOException
Copyright © 2012 The Apache Software Foundation