This article collects typical usage examples of the Java class org.apache.hadoop.io.serializer.JavaSerialization. If you have been wondering what JavaSerialization is, how to use it, or where to find examples of it in real code, the curated class examples below may help.
The JavaSerialization class belongs to the org.apache.hadoop.io.serializer package. Five code examples of the class are presented below, sorted by popularity by default.
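For context before the examples: Hadoop's JavaSerialization is a thin adapter over standard java.io object streams (ObjectOutputStream/ObjectInputStream). The following dependency-free sketch illustrates the round trip it performs; the class and helper names here are illustrative, not Hadoop's actual internals.

```java
import java.io.*;

// A minimal sketch of the serialize/deserialize round trip that Hadoop's
// JavaSerialization delegates to: plain java.io object streams.
public class JavaSerializationSketch {

    // Serialize any Serializable object to a byte array.
    static byte[] serialize(Serializable obj) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(obj);
        }
        return buf.toByteArray();
    }

    // Deserialize an object back from a byte array.
    static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] data = serialize("hello");
        System.out.println(deserialize(data)); // prints "hello"
    }
}
```

Registering JavaSerialization in a job configuration (as the examples below do via CommonConfigurationKeys.IO_SERIALIZATIONS_KEY) is what lets Hadoop apply exactly this mechanism to map output keys and values that implement java.io.Serializable.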
Example 1: testTotalOrderWithCustomSerialization
import org.apache.hadoop.io.serializer.JavaSerialization; // import the required package/class
public void testTotalOrderWithCustomSerialization() throws Exception {
  TotalOrderPartitioner<String, NullWritable> partitioner =
      new TotalOrderPartitioner<String, NullWritable>();
  Configuration conf = new Configuration();
  conf.setStrings(CommonConfigurationKeys.IO_SERIALIZATIONS_KEY,
      JavaSerialization.class.getName(),
      WritableSerialization.class.getName());
  conf.setClass(MRJobConfig.KEY_COMPARATOR,
      JavaSerializationComparator.class,
      Comparator.class);
  Path p = TestTotalOrderPartitioner.<String>writePartitionFile(
      "totalordercustomserialization", conf, splitJavaStrings);
  conf.setClass(MRJobConfig.MAP_OUTPUT_KEY_CLASS, String.class, Object.class);
  try {
    partitioner.setConf(conf);
    NullWritable nw = NullWritable.get();
    for (Check<String> chk : testJavaStrings) {
      assertEquals(chk.data.toString(), chk.part,
          partitioner.getPartition(chk.data, nw, splitJavaStrings.length + 1));
    }
  } finally {
    p.getFileSystem(conf).delete(p, true);
  }
}
Author: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 25 | Source: TestTotalOrderPartitioner.java
Example 2: testIntJavaSerialization
import org.apache.hadoop.io.serializer.JavaSerialization; // import the required package/class
/**
 * Tests read/write of Integer via native JavaSerialization.
 * @throws Exception If failed.
 */
public void testIntJavaSerialization() throws Exception {
  HadoopSerialization ser =
      new HadoopSerializationWrapper(new JavaSerialization(), Integer.class);
  ByteArrayOutputStream buf = new ByteArrayOutputStream();
  DataOutput out = new DataOutputStream(buf);
  ser.write(out, 3);
  ser.write(out, -5);
  ser.close();
  DataInput in = new DataInputStream(new ByteArrayInputStream(buf.toByteArray()));
  assertEquals(3, ((Integer)ser.read(in, null)).intValue());
  assertEquals(-5, ((Integer)ser.read(in, null)).intValue());
}
Author: apache | Project: ignite | Lines: 21 | Source: HadoopSerializationWrapperSelfTest.java
Example 3: AppWorkerContainer
import org.apache.hadoop.io.serializer.JavaSerialization; // import the required package/class
public AppWorkerContainer(AppConfig config) {
  this.config = config;
  this.appContainerInfoHolder = new AppContainerInfoHolder(config.getAppWorkerContainerId());
  try {
    Configuration rpcConf = new Configuration();
    rpcConf.set(
        CommonConfigurationKeys.IO_SERIALIZATIONS_KEY,
        JavaSerialization.class.getName() + "," +
        WritableSerialization.class.getName() + "," +
        AvroSerialization.class.getName());
    rpcClient = new RPCClient(config.appHostName, config.appRpcPort);
    ipcService = IPCService.newBlockingStub(rpcClient.getRPCChannel());
    Class<AppWorker> appWorkerClass = (Class<AppWorker>) Class.forName(config.worker);
    worker = appWorkerClass.newInstance();
  } catch (Throwable error) {
    LOGGER.error("Error", error);
    onDestroy();
  }
}
Author: DemandCube | Project: NeverwinterDP-Commons | Lines: 23 | Source: AppWorkerContainer.java
Example 4: sweep
import org.apache.hadoop.io.serializer.JavaSerialization; // import the required package/class
/**
 * Runs a MapReduce job to do the sweeping on the mob files.
 * Runs of the sweep tool on the same column family are mutually exclusive,
 * and an HBase major compaction and a run of the sweep tool on the same
 * column family are mutually exclusive as well. This synchronization is done
 * via ZooKeeper, so at the beginning of the run we must make sure this sweep
 * tool is the only one currently running on this column family and that no
 * major compaction of this column family is in progress.
 * @param tn The current table name.
 * @param family The descriptor of the current column family.
 * @throws IOException
 * @throws ClassNotFoundException
 * @throws InterruptedException
 * @throws KeeperException
 */
public void sweep(TableName tn, HColumnDescriptor family) throws IOException,
    ClassNotFoundException, InterruptedException, KeeperException {
  Configuration conf = new Configuration(this.conf);
  // check whether the current user is the owner of the hbase root
  String currentUserName = UserGroupInformation.getCurrentUser().getShortUserName();
  FileStatus[] hbaseRootFileStat = fs.listStatus(new Path(conf.get(HConstants.HBASE_DIR)));
  if (hbaseRootFileStat.length > 0) {
    String owner = hbaseRootFileStat[0].getOwner();
    if (!owner.equals(currentUserName)) {
      String errorMsg = "The current user[" + currentUserName + "] doesn't have the privilege."
          + " Please make sure the user is the root of the target HBase";
      LOG.error(errorMsg);
      throw new IOException(errorMsg);
    }
  } else {
    LOG.error("The target HBase doesn't exist");
    throw new IOException("The target HBase doesn't exist");
  }
  String familyName = family.getNameAsString();
  Job job = null;
  try {
    Scan scan = new Scan();
    // Do not retrieve the mob data when scanning
    scan.setAttribute(MobConstants.MOB_SCAN_RAW, Bytes.toBytes(Boolean.TRUE));
    scan.setFilter(new ReferenceOnlyFilter());
    scan.setCaching(10000);
    scan.setCacheBlocks(false);
    scan.setMaxVersions(family.getMaxVersions());
    conf.set(CommonConfigurationKeys.IO_SERIALIZATIONS_KEY,
        JavaSerialization.class.getName() + "," + WritableSerialization.class.getName());
    job = prepareJob(tn, familyName, scan, conf);
    job.getConfiguration().set(TableInputFormat.SCAN_COLUMN_FAMILY, familyName);
    // Record the compaction start time.
    // In the sweep tool, only a mob file whose modification time is older than
    // (startTime - delay) is handled by this tool. The delay is one day; it is
    // configurable as well, but that is only used in tests.
    job.getConfiguration().setLong(MobConstants.MOB_COMPACTION_START_DATE,
        compactionStartTime);
    job.setPartitionerClass(MobFilePathHashPartitioner.class);
    submit(job, tn, familyName);
    if (job.waitForCompletion(true)) {
      // Archive the unused mob files.
      removeUnusedFiles(job, tn, family);
    }
  } finally {
    cleanup(job, tn, familyName);
  }
}
Author: intel-hadoop | Project: HBase-LOB | Lines: 67 | Source: SweepJob.java
Example 5: testRun
import org.apache.hadoop.io.serializer.JavaSerialization; // import the required package/class
@Test
public void testRun() throws Exception {
  byte[] mobValueBytes = new byte[100];
  // get the path where the mob files live
  Path mobFamilyPath = MobUtils.getMobFamilyPath(TEST_UTIL.getConfiguration(),
      TableName.valueOf(tableName), family);
  Put put = new Put(Bytes.toBytes(row));
  put.add(Bytes.toBytes(family), Bytes.toBytes(qf), 1, mobValueBytes);
  Put put2 = new Put(Bytes.toBytes(row + "ignore"));
  put2.add(Bytes.toBytes(family), Bytes.toBytes(qf), 1, mobValueBytes);
  table.put(put);
  table.put(put2);
  table.flushCommits();
  admin.flush(tableName);
  FileStatus[] fileStatuses = TEST_UTIL.getTestFileSystem().listStatus(mobFamilyPath);
  // check that a mob file was generated
  assertEquals(1, fileStatuses.length);
  String mobFile1 = fileStatuses[0].getPath().getName();
  Configuration configuration = new Configuration(TEST_UTIL.getConfiguration());
  configuration.setFloat(MobConstants.MOB_COMPACTION_INVALID_FILE_RATIO, 0.1f);
  configuration.setStrings(TableInputFormat.INPUT_TABLE, tableName);
  configuration.setStrings(TableInputFormat.SCAN_COLUMN_FAMILY, family);
  configuration.setStrings("mob.compaction.visited.dir", "jobWorkingNamesDir");
  configuration.setStrings(SweepJob.WORKING_FILES_DIR_KEY, "compactionFileDir");
  configuration.setStrings(CommonConfigurationKeys.IO_SERIALIZATIONS_KEY,
      JavaSerialization.class.getName());
  configuration.set("mob.compaction.visited.dir", "compactionVisitedDir");
  configuration.setLong(MobConstants.MOB_COMPACTION_START_DATE,
      System.currentTimeMillis() + 24 * 3600 * 1000);
  // use the same counter when mocking
  Counter counter = new GenericCounter();
  Reducer<Text, KeyValue, Writable, Writable>.Context ctx =
      mock(Reducer.Context.class);
  when(ctx.getConfiguration()).thenReturn(configuration);
  when(ctx.getCounter(Matchers.any(SweepCounter.class))).thenReturn(counter);
  when(ctx.nextKey()).thenReturn(true).thenReturn(false);
  when(ctx.getCurrentKey()).thenReturn(new Text(mobFile1));
  byte[] refBytes = Bytes.toBytes(mobFile1);
  long valueLength = refBytes.length;
  byte[] newValue = Bytes.add(Bytes.toBytes(valueLength), refBytes);
  KeyValue kv2 = new KeyValue(Bytes.toBytes(row), Bytes.toBytes(family),
      Bytes.toBytes(qf), 1, KeyValue.Type.Put, newValue);
  List<KeyValue> list = new ArrayList<KeyValue>();
  list.add(kv2);
  when(ctx.getValues()).thenReturn(list);
  SweepReducer reducer = new SweepReducer();
  reducer.run(ctx);
  FileStatus[] fileStatuses2 = TEST_UTIL.getTestFileSystem().listStatus(mobFamilyPath);
  String mobFile2 = fileStatuses2[0].getPath().getName();
  // a new mob file is generated; the old one has been archived
  assertEquals(1, fileStatuses2.length);
  assertFalse(mobFile2.equalsIgnoreCase(mobFile1));
  // test the sequence file
  String workingPath = configuration.get("mob.compaction.visited.dir");
  FileStatus[] statuses = TEST_UTIL.getTestFileSystem().listStatus(new Path(workingPath));
  Set<String> files = new TreeSet<String>();
  for (FileStatus st : statuses) {
    files.addAll(getKeyFromSequenceFile(TEST_UTIL.getTestFileSystem(),
        st.getPath(), configuration));
  }
  assertEquals(1, files.size());
  assertTrue(files.contains(mobFile1));
}
Author: intel-hadoop | Project: HBase-LOB | Lines: 77 | Source: TestSweepReducer.java
Note: the org.apache.hadoop.io.serializer.JavaSerialization examples in this article were collected from open-source projects hosted on GitHub, MSDocs, and similar platforms. Copyright for each code snippet remains with its original author; consult the corresponding project's license before use or redistribution. Do not reproduce without permission.