java.lang.Object
  org.apache.pig.LoadFunc
    org.apache.hadoop.zebra.pig.TableLoader
public class TableLoader
extends LoadFunc
implements LoadMetadata, LoadPushDown, IndexableLoadFunc, CollectableLoadFunc, OrderedLoadFunc

Pig IndexableLoadFunc and Slicer for Zebra Table.
| Nested Class Summary |
|---|
| Nested classes/interfaces inherited from interface org.apache.pig.LoadPushDown |
| LoadPushDown.OperatorSet, LoadPushDown.RequiredField, LoadPushDown.RequiredFieldList, LoadPushDown.RequiredFieldResponse |
| Constructor Summary | |
|---|---|
| TableLoader() | Default constructor. |
| TableLoader(String projectionStr) | |
| TableLoader(String projectionStr, String sorted) | |
| Method Summary | |
|---|---|
| void | close() - A method called by the Pig runtime to give an opportunity for implementations to perform cleanup actions like closing the underlying input stream. |
| void | ensureAllKeyInstancesInSameSplit() - When this method is called, Pig is communicating to the Loader that it must load data such that all instances of a key are in the same split. |
| List<LoadPushDown.OperatorSet> | getFeatures() - Determine the operators that can be pushed to the loader. |
| org.apache.hadoop.mapreduce.InputFormat | getInputFormat() - This will be called during planning on the front end. |
| Tuple | getNext() - Retrieves the next tuple to be processed. |
| String[] | getPartitionKeys(String location, org.apache.hadoop.mapreduce.Job job) - Find what columns are partition keys for this input. |
| ResourceSchema | getSchema(String location, org.apache.hadoop.mapreduce.Job job) - Get a schema for the data to be loaded. |
| org.apache.hadoop.io.WritableComparable<?> | getSplitComparable(org.apache.hadoop.mapreduce.InputSplit split) - The WritableComparable object returned will be used to compare the position of different splits in an ordered stream. |
| ResourceStatistics | getStatistics(String location, org.apache.hadoop.mapreduce.Job job) - Get statistics about the data to be loaded. |
| void | initialize(org.apache.hadoop.conf.Configuration conf) - This method is called by the Pig runtime to allow the IndexableLoadFunc to perform any initialization actions. |
| void | prepareToRead(org.apache.hadoop.mapreduce.RecordReader reader, PigSplit split) - Initializes LoadFunc for reading data. |
| LoadPushDown.RequiredFieldResponse | pushProjection(LoadPushDown.RequiredFieldList requiredFieldList) - Indicate to the loader which fields will be needed. |
| void | seekNear(Tuple tuple) - This method is called only once. |
| void | setLocation(String location, org.apache.hadoop.mapreduce.Job job) - This method is called by Pig on both the front end and the back end. |
| void | setPartitionFilter(Expression partitionFilter) - Set the filter for partitioning. |
| void | setUDFContextSignature(String signature) - This method will be called by Pig both in the front end and back end to pass a unique signature to the LoadFunc. |
| Methods inherited from class org.apache.pig.LoadFunc |
|---|
| getAbsolutePath, getLoadCaster, getPathStrings, join, relativeToAbsolutePath |

| Methods inherited from class java.lang.Object |
|---|
| clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait |
| Constructor Detail |
|---|
public TableLoader()

Default constructor.

public TableLoader(String projectionStr)

Parameters:
projectionStr - projection string passed from the Pig query.
public TableLoader(String projectionStr,
                   String sorted)
             throws IOException

Parameters:
projectionStr - projection string passed from the Pig query.
sorted - whether sorted table(s) are needed.

Throws:
IOException
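In normal use these constructor arguments are supplied from a Pig script rather than from Java code. The following is a minimal usage sketch; the table path, column names, and the use of PigServer in local mode are illustrative assumptions, not part of this API.

```java
import org.apache.pig.ExecType;
import org.apache.pig.PigServer;

public class TableLoaderUsage {
    public static void main(String[] args) throws Exception {
        PigServer pig = new PigServer(ExecType.LOCAL);

        // Hypothetical Zebra table path and column names: the first constructor
        // argument is the projection string, the second requests sorted-table semantics.
        pig.registerQuery(
            "A = LOAD '/data/mytable' USING "
          + "org.apache.hadoop.zebra.pig.TableLoader('c1, c2', 'sorted');");
        pig.registerQuery("B = LIMIT A 10;");
        pig.store("B", "/data/out");
    }
}
```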
| Method Detail |
|---|
public void initialize(org.apache.hadoop.conf.Configuration conf)
                throws IOException

Description copied from interface: IndexableLoadFunc

Specified by:
initialize in interface IndexableLoadFunc

Parameters:
conf - The job configuration object

Throws:
IOException
public void seekNear(Tuple tuple)
              throws IOException

Specified by:
seekNear in interface IndexableLoadFunc

Parameters:
tuple - Tuple with join keys (which are a prefix of the sort keys of the input data). For example, if the data is sorted on the columns in positions 2, 4 and 5, any of the following tuples is valid as an argument value (see the sketch below):
(fieldAt(2))
(fieldAt(2), fieldAt(4))
(fieldAt(2), fieldAt(4), fieldAt(5))
The following are some invalid cases:
(fieldAt(4))
(fieldAt(2), fieldAt(5))
(fieldAt(4), fieldAt(5))

Throws:
IOException - when the LoadFunc is unable to position to the required point in its input stream
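seekNear is invoked by the Pig runtime (for example during a merge join), not by user code, but the prefix rule can be illustrated with Pig's TupleFactory. A minimal sketch, assuming data sorted on the hypothetical column positions 2, 4 and 5 used above:

```java
import org.apache.pig.backend.executionengine.ExecException;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;

public class SeekNearSketch {
    // Builds a key-prefix tuple for data sorted on the columns at positions 2, 4, 5.
    // (fieldAt(2), fieldAt(4)) is a valid prefix; (fieldAt(4), fieldAt(5)) is not,
    // because it does not start at the first sort key.
    static Tuple keyPrefix(Object fieldAt2, Object fieldAt4) throws ExecException {
        Tuple key = TupleFactory.getInstance().newTuple(2);
        key.set(0, fieldAt2);
        key.set(1, fieldAt4);
        return key;  // suitable as the argument to seekNear(Tuple)
    }
}
```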
public Tuple getNext()
              throws IOException

Description copied from class: LoadFunc

Specified by:
getNext in class LoadFunc

Throws:
IOException - if there is an exception while retrieving the next tuple
public void close()
           throws IOException

Description copied from interface: IndexableLoadFunc

Specified by:
close in interface IndexableLoadFunc

Throws:
IOException - if the LoadFunc is unable to perform its close actions.
public void prepareToRead(org.apache.hadoop.mapreduce.RecordReader reader,
                          PigSplit split)
                   throws IOException

Description copied from class: LoadFunc

Specified by:
prepareToRead in class LoadFunc

Parameters:
reader - RecordReader to be used by this instance of the LoadFunc
split - The input PigSplit to process

Throws:
IOException - if there is an exception during initialization
public void setLocation(String location,
                        org.apache.hadoop.mapreduce.Job job)
                 throws IOException

Specified by:
setLocation in class LoadFunc

Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, Path)
job - the Job object - store or retrieve earlier stored information from the UDFContext

Throws:
IOException - if the location is not valid.
public org.apache.hadoop.mapreduce.InputFormat getInputFormat()
                                                        throws IOException

Description copied from class: LoadFunc

Specified by:
getInputFormat in class LoadFunc

Throws:
IOException - if there is an exception during InputFormat construction
public String[] getPartitionKeys(String location,
                                 org.apache.hadoop.mapreduce.Job job)
                          throws IOException

Description copied from interface: LoadMetadata

Specified by:
getPartitionKeys in interface LoadMetadata

Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object. This should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.

Throws:
IOException - if an exception occurs while retrieving partition keys
public ResourceSchema getSchema(String location,
                                org.apache.hadoop.mapreduce.Job job)
                         throws IOException

Description copied from interface: LoadMetadata

Specified by:
getSchema in interface LoadMetadata

Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object. This should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.

Throws:
IOException - if an exception occurs while determining the schema
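Pig normally calls getSchema itself during query planning, but the LoadMetadata API can also be exercised directly to inspect a table's columns. A rough sketch, assuming a hypothetical table path and a Hadoop 0.20-era Job constructor; the exact call sequence required by a given Pig/Zebra release may differ.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.zebra.pig.TableLoader;
import org.apache.pig.ResourceSchema;
import org.apache.pig.ResourceSchema.ResourceFieldSchema;

public class SchemaInspection {
    public static void main(String[] args) throws Exception {
        TableLoader loader = new TableLoader();
        Job job = new Job(new Configuration());  // Job(Configuration) in Hadoop 0.20-era APIs

        // Hypothetical table location; prints each column name and its type code.
        ResourceSchema schema = loader.getSchema("/data/mytable", job);
        for (ResourceFieldSchema field : schema.getFields()) {
            System.out.println(field.getName() + " : " + field.getType());
        }
    }
}
```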
public ResourceStatistics getStatistics(String location,
                                        org.apache.hadoop.mapreduce.Job job)
                                 throws IOException

Description copied from interface: LoadMetadata

Specified by:
getStatistics in interface LoadMetadata

Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object. This should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.

Throws:
IOException - if an exception occurs while retrieving statistics
public void setPartitionFilter(Expression partitionFilter)
                        throws IOException

Description copied from interface: LoadMetadata
If the implementation returns null in LoadMetadata.getPartitionKeys(String, Job), then this method is not called by the Pig runtime. This method is also not called by the Pig runtime if there are no partition filter conditions.

Specified by:
setPartitionFilter in interface LoadMetadata

Parameters:
partitionFilter - that describes the filter for partitioning

Throws:
IOException - if the filter is not compatible with the storage mechanism or contains non-partition fields.

public List<LoadPushDown.OperatorSet> getFeatures()

Description copied from interface: LoadPushDown

Specified by:
getFeatures in interface LoadPushDown
public LoadPushDown.RequiredFieldResponse pushProjection(LoadPushDown.RequiredFieldList requiredFieldList)
                                                   throws FrontendException

Description copied from interface: LoadPushDown

Specified by:
pushProjection in interface LoadPushDown

Parameters:
requiredFieldList - RequiredFieldList indicating which columns will be needed. This structure is read only; the user cannot make changes to it inside pushProjection.

Throws:
FrontendException

public void setUDFContextSignature(String signature)

Description copied from class: LoadFunc
This method will be called by Pig both in the front end and back end to pass a unique signature to the LoadFunc. The signature can be used to store into the UDFContext any information which the LoadFunc needs to store between various method invocations in the front end and back end. A use case is to store the LoadPushDown.RequiredFieldList passed to it in LoadPushDown.pushProjection(RequiredFieldList) for use in the back end before returning tuples in LoadFunc.getNext(). This method will be called before other methods in LoadFunc.

Overrides:
setUDFContextSignature in class LoadFunc

Parameters:
signature - a unique signature to identify this LoadFunc
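As a sketch of the use case described above (illustrative only, not the actual TableLoader implementation), a LoadFunc can use the signature as a key into the UDFContext so that state saved on the front end is visible again on the back end:

```java
import java.util.Properties;
import org.apache.pig.impl.util.UDFContext;

public class SignatureSketch {
    private String udfSignature;

    // Pig calls this on both the front end and the back end with the same value.
    public void setUDFContextSignature(String signature) {
        this.udfSignature = signature;
    }

    // Properties scoped to this signature: anything stored here during
    // pushProjection() on the front end can be read back in getNext() on the back end.
    private Properties contextProperties() {
        return UDFContext.getUDFContext()
                         .getUDFProperties(this.getClass(), new String[] { udfSignature });
    }
}
```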
public org.apache.hadoop.io.WritableComparable<?> getSplitComparable(org.apache.hadoop.mapreduce.InputSplit split)
                                                               throws IOException

Description copied from interface: OrderedLoadFunc
The WritableComparable object returned will be used to compare the position of different splits in an ordered stream.

Specified by:
getSplitComparable in interface OrderedLoadFunc

Parameters:
split - An InputSplit from the InputFormat underlying this loader.

Throws:
IOException
public void ensureAllKeyInstancesInSameSplit()
                                      throws IOException

Description copied from interface: CollectableLoadFunc

Specified by:
ensureAllKeyInstancesInSameSplit in interface CollectableLoadFunc

Throws:
IOException
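The index- and ordering-related methods above (initialize, seekNear, close, getSplitComparable, ensureAllKeyInstancesInSameSplit) are invoked by Pig itself when a script requests operations that need them, such as a merge join or a collected group. A sketch, assuming two sorted Zebra tables at hypothetical paths:

```java
import org.apache.pig.ExecType;
import org.apache.pig.PigServer;

public class MergeJoinSketch {
    public static void main(String[] args) throws Exception {
        PigServer pig = new PigServer(ExecType.MAPREDUCE);

        // Both inputs must be sorted on the join key; the 'sorted' constructor
        // argument requests sorted-table semantics from TableLoader.
        pig.registerQuery("A = LOAD '/data/left' USING org.apache.hadoop.zebra.pig.TableLoader('k, v1', 'sorted');");
        pig.registerQuery("B = LOAD '/data/right' USING org.apache.hadoop.zebra.pig.TableLoader('k, v2', 'sorted');");

        // A 'merge' join drives the IndexableLoadFunc methods (initialize, seekNear, close);
        // a 'collected' group relies on CollectableLoadFunc.ensureAllKeyInstancesInSameSplit.
        pig.registerQuery("J = JOIN A BY k, B BY k USING 'merge';");
        pig.registerQuery("G = GROUP A BY k USING 'collected';");

        pig.store("J", "/data/joined");
    }
}
```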