
Java SimpleHBaseMapper Class Code Examples


This article collects typical usage examples of the Java class org.apache.storm.hbase.bolt.mapper.SimpleHBaseMapper. If you are wondering what SimpleHBaseMapper is for, how to use it, or are looking for working examples, the selected code samples below should help.



The SimpleHBaseMapper class belongs to the org.apache.storm.hbase.bolt.mapper package. Five code examples of the class are presented below, ordered by popularity by default.
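Before the examples, here is a minimal, self-contained sketch of the pattern they all follow: the mapper describes how a tuple becomes an HBase row, and the HBaseBolt reads its connection settings from the topology config. The class name SimpleHBaseMapperSketch, the method buildWordCountBolt, and the table, field, and column-family names are illustrative placeholders, not part of the Storm API.

import java.util.HashMap;
import java.util.Map;

import org.apache.storm.hbase.bolt.HBaseBolt;
import org.apache.storm.hbase.bolt.mapper.SimpleHBaseMapper;
import org.apache.storm.tuple.Fields;

public class SimpleHBaseMapperSketch {

    // Builds an HBaseBolt that persists word counts. The table name "WordCount",
    // the field names "word"/"count" and the column family "cf" are illustrative.
    public static HBaseBolt buildWordCountBolt(Map<String, Object> topologyConfig, String hbaseRootDir) {
        // HBase client settings travel through the topology config under a key of
        // our choosing; the bolt retrieves them via withConfigKey().
        Map<String, Object> hbConf = new HashMap<String, Object>();
        hbConf.put("hbase.rootdir", hbaseRootDir);
        topologyConfig.put("hbase.conf", hbConf);

        // Map the tuple field "word" to the HBase row key, write "word" as a plain
        // column and "count" as an HBase counter, all under column family "cf".
        SimpleHBaseMapper mapper = new SimpleHBaseMapper()
                .withRowKeyField("word")
                .withColumnFields(new Fields("word"))
                .withCounterFields(new Fields("count"))
                .withColumnFamily("cf");

        return new HBaseBolt("WordCount", mapper)
                .withConfigKey("hbase.conf");
    }
}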

Example 1: getHBaseBolt

import org.apache.storm.hbase.bolt.mapper.SimpleHBaseMapper; // import the required package/class
public static HBaseBolt getHBaseBolt(String hBaseUrl, String tableName, String rowKeyField, String columnFamily,
                                     List<String> columnFields, List<String> counterFields, Map topologyConfig) {
    Map<String, Object> hbConf = new HashMap<String, Object>();
    hbConf.put("hbase.rootdir",hBaseUrl);

    topologyConfig.put("hbase.conf", hbConf);

    SimpleHBaseMapper mapper = new SimpleHBaseMapper()
            .withRowKeyField(rowKeyField)
            .withColumnFields(new Fields(columnFields))
            .withCounterFields(new Fields(counterFields))
            .withColumnFamily(columnFamily);

    return new HBaseBolt(tableName, mapper)
            .withConfigKey("hbase.conf");
}
 
Developer: Parth-Brahmbhatt, Project: storm-smoke-test, Lines of code: 17, Source file: ConnectorUtil.java


Example 2: buildTopology

import org.apache.storm.hbase.bolt.mapper.SimpleHBaseMapper; // import the required package/class
public StormTopology buildTopology(Properties properties) {
	
	// Load properties for the Storm topology
	String kafkaTopic = properties.getProperty("kafka.topic");
	String hbaseTable = properties.getProperty("hbase.table.name");
	String hbaseColumnFamily = properties.getProperty("hbase.column.family");
	
	SpoutConfig kafkaConfig = new SpoutConfig(kafkaBrokerHosts, kafkaTopic, "",
			"storm");
	kafkaConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
	TopologyBuilder builder = new TopologyBuilder();
	
	SimpleHBaseMapper hBaseMapper = new SimpleHBaseMapper()
			.withRowKeyField("host-user")
			.withCounterFields(new Fields("count"))
			.withColumnFamily(hbaseColumnFamily);

	// HbaseBolt(tableName, hbaseMapper)
	HBaseBolt hbaseBolt = new HBaseBolt(hbaseTable, hBaseMapper);
	AuditLoginsCounterBolt loginCounterbolt = new AuditLoginsCounterBolt(hbaseTable);
	AuditBolt auditParserBolt = new AuditBolt();

	builder.setSpout("KafkaSpout", new KafkaSpout(kafkaConfig), 1);
	builder.setBolt("ParseBolt", auditParserBolt, 1).shuffleGrouping("KafkaSpout");
	builder.setBolt("CountBolt", loginCounterbolt, 1).shuffleGrouping("ParseBolt");
	builder.setBolt("HBaseBolt", hbaseBolt, 1).fieldsGrouping("CountBolt",
			new Fields("host-user"));

	return builder.createTopology();
}
 
Developer: mvalleavila, Project: StormTopology-AuditActiveLogins, Lines of code: 31, Source file: AuditActiveLoginsTopology.java


Example 3: main

import org.apache.storm.hbase.bolt.mapper.SimpleHBaseMapper; // import the required package/class
public static void main(String[] args) throws Exception {
    Config config = new Config();

    Map<String, Object> hbConf = new HashMap<String, Object>();
    if(args.length > 0){
        hbConf.put("hbase.rootdir", args[0]);
    }
    config.put("hbase.conf", hbConf);

    WordSpout spout = new WordSpout();
    WordCounter bolt = new WordCounter();

    SimpleHBaseMapper mapper = new SimpleHBaseMapper()
            .withRowKeyField("word")
            .withColumnFields(new Fields("word"))
            .withCounterFields(new Fields("count"))
            .withColumnFamily("cf");

    HBaseBolt hbase = new HBaseBolt("WordCount", mapper)
            .withConfigKey("hbase.conf");


    // wordSpout ==> countBolt ==> HBaseBolt
    TopologyBuilder builder = new TopologyBuilder();

    builder.setSpout(WORD_SPOUT, spout, 1);
    builder.setBolt(COUNT_BOLT, bolt, 1).shuffleGrouping(WORD_SPOUT);
    builder.setBolt(HBASE_BOLT, hbase, 1).fieldsGrouping(COUNT_BOLT, new Fields("word"));


    if (args.length == 1) {
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("test", config, builder.createTopology());
        Thread.sleep(30000);
        cluster.killTopology("test");
        cluster.shutdown();
        System.exit(0);
    } else if (args.length == 2) {
        StormSubmitter.submitTopology(args[1], config, builder.createTopology());
    } else{
        System.out.println("Usage: HdfsFileTopology <hdfs url> [topology name]");
    }
}
 
Developer: ptgoetz, Project: storm-hbase, Lines of code: 44, Source file: PersistentWordCount.java


Example 4: main

import org.apache.storm.hbase.bolt.mapper.SimpleHBaseMapper; // import the required package/class
public static void main(String[] args) throws Exception {
    Config config = new Config();

    Map<String, Object> hbConf = new HashMap<String, Object>();
    if(args.length > 0){
        hbConf.put("hbase.rootdir", args[0]);
    }
    config.put("hbase.conf", hbConf);

    WordSpout spout = new WordSpout();
    TotalWordCounter totalBolt = new TotalWordCounter();

    SimpleHBaseMapper mapper = new SimpleHBaseMapper().withRowKeyField("word");
    HBaseProjectionCriteria projectionCriteria = new HBaseProjectionCriteria();
    projectionCriteria.addColumn(new HBaseProjectionCriteria.ColumnMetaData("cf", "count"));

    WordCountValueMapper rowToTupleMapper = new WordCountValueMapper();

    HBaseLookupBolt hBaseLookupBolt = new HBaseLookupBolt("WordCount", mapper, rowToTupleMapper)
            .withConfigKey("hbase.conf")
            .withProjectionCriteria(projectionCriteria);

    //wordspout -> lookupbolt -> totalCountBolt
    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout(WORD_SPOUT, spout, 1);
    builder.setBolt(LOOKUP_BOLT, hBaseLookupBolt, 1).shuffleGrouping(WORD_SPOUT);
    builder.setBolt(TOTAL_COUNT_BOLT, totalBolt, 1).fieldsGrouping(LOOKUP_BOLT, new Fields("columnName"));

    if (args.length == 1) {
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("test", config, builder.createTopology());
        Thread.sleep(30000);
        cluster.killTopology("test");
        cluster.shutdown();
        System.exit(0);
    } else if (args.length == 2) {
        StormSubmitter.submitTopology(args[1], config, builder.createTopology());
    } else{
        System.out.println("Usage: LookupWordCount <hbase.rootdir>");
    }
}
 
Developer: mengzhiyi, Project: storm-hbase-1.0.x, Lines of code: 42, Source file: LookupWordCount.java


Example 5: main

import org.apache.storm.hbase.bolt.mapper.SimpleHBaseMapper; // import the required package/class
public static void main(String[] args) throws Exception {
    Config config = new Config();

    Map<String, Object> hbConf = new HashMap<String, Object>();
    if(args.length > 0){
        hbConf.put("hbase.rootdir", args[0]);
    }
    config.put("hbase.conf", hbConf);

    WordSpout spout = new WordSpout();
    WordCounter bolt = new WordCounter();

    SimpleHBaseMapper mapper = new SimpleHBaseMapper()
            .withRowKeyField("word")
            .withColumnFields(new Fields("word"))
            .withCounterFields(new Fields("count"))
            .withColumnFamily("cf");

    HBaseBolt hbase = new HBaseBolt("WordCount", mapper)
            .withConfigKey("hbase.conf");


    // wordSpout ==> countBolt ==> HBaseBolt
    TopologyBuilder builder = new TopologyBuilder();

    builder.setSpout(WORD_SPOUT, spout, 1);
    builder.setBolt(COUNT_BOLT, bolt, 1).shuffleGrouping(WORD_SPOUT);
    builder.setBolt(HBASE_BOLT, hbase, 1).fieldsGrouping(COUNT_BOLT, new Fields("word"));


    if (args.length == 0) {
    	System.setProperty("hadoop.home.dir", "/tmp");
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("test", config, builder.createTopology());
        Thread.sleep(30000);
        cluster.killTopology("test");
        cluster.shutdown();
        System.exit(0);
    } else if (args.length == 2) {
        StormSubmitter.submitTopology(args[1], config, builder.createTopology());
    } else{
        System.out.println("Usage: HdfsFileTopology <hdfs url> [topology name]");
    }
}
 
Developer: desp0916, Project: LearnStorm, Lines of code: 45, Source file: PersistentWordCount.java



Note: The org.apache.storm.hbase.bolt.mapper.SimpleHBaseMapper examples above were collected from open-source projects hosted on GitHub, MSDocs, and similar platforms. Copyright in the code snippets remains with their original authors; consult the corresponding projects' licenses before redistributing or reusing the code, and do not reproduce this compilation without permission.

