Java WorkerData Class Code Examples


This article collects typical usage examples of the Java class com.alibaba.jstorm.daemon.worker.WorkerData. If you are wondering how the WorkerData class is used in practice, or looking for concrete WorkerData examples, the curated snippets below should help.



The WorkerData class belongs to the com.alibaba.jstorm.daemon.worker package. Twenty code examples of the class are shown below, ordered roughly by popularity.
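
Before the individual examples, here is a minimal sketch of the pattern they all share: a worker-level component receives the shared WorkerData object in its constructor and copies out the pieces of worker state it needs. The class DemoWorkerComponent and its field types are hypothetical; the getters are the ones actually used in the examples below (for instance Examples 3 and 16).

import java.util.Collection;
import java.util.Map;

import com.alibaba.jstorm.daemon.worker.WorkerData;

// Hypothetical component: keep a reference to WorkerData and copy out shared worker state.
public class DemoWorkerComponent {
    private final WorkerData workerData;
    private final Map conf;                    // worker/topology configuration
    private final String topologyId;           // topology this worker belongs to
    private final Integer port;                // port assigned to this worker
    private final Collection<Integer> taskIds; // tasks hosted in this worker

    public DemoWorkerComponent(WorkerData workerData) {
        this.workerData = workerData;
        this.conf = workerData.getStormConf();        // getters as used in the examples below
        this.topologyId = workerData.getTopologyId();
        this.port = workerData.getPort();
        this.taskIds = workerData.getTaskids();
    }
}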

Example 1: MkLocalShuffer

import com.alibaba.jstorm.daemon.worker.WorkerData; // import the required package/class
public MkLocalShuffer(List<Integer> workerTasks, List<Integer> allOutTasks, WorkerData workerData) {
    super(workerData);
    List<Integer> localOutTasks = new ArrayList<Integer>();

    for (Integer outTask : allOutTasks) {
        if (workerTasks.contains(outTask)) {
            localOutTasks.add(outTask);
        }
    }

    if (localOutTasks.size() != 0) {
        this.outTasks = localOutTasks;
        isLocal = true;
    } else {
        this.outTasks = new ArrayList<Integer>();
        this.outTasks.addAll(allOutTasks);
        isLocal = false;
    }

    randomrange = new RandomRange(outTasks.size());
}
 
Developer: zhangjunfang, Project: jstorm-0.9.6.3-, Lines: 22, Source: MkLocalShuffer.java
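
A minimal usage sketch for the constructor above: the task id lists are hypothetical values, workerData is assumed to be supplied by the enclosing worker, and the MkLocalShuffer import is omitted because its package differs across jstorm versions. The point is that downstream tasks which also live in the current worker put the shuffler into local mode.

import java.util.Arrays;
import java.util.List;

import com.alibaba.jstorm.daemon.worker.WorkerData;

// Sketch only: tasks 2 and 3 are both downstream and local to this worker,
// so the shuffler would keep {2, 3} as outTasks and set isLocal = true.
static MkLocalShuffer buildLocalShuffer(WorkerData workerData) {
    List<Integer> workerTasks = Arrays.asList(1, 2, 3);    // hypothetical: tasks in this worker
    List<Integer> allOutTasks = Arrays.asList(2, 3, 7, 8); // hypothetical: all downstream tasks
    return new MkLocalShuffer(workerTasks, allOutTasks, workerData);
}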


Example 2: MkGrouper

import com.alibaba.jstorm.daemon.worker.WorkerData; // import the required package/class
public MkGrouper(TopologyContext _topology_context, Fields _out_fields,
		Grouping _thrift_grouping, List<Integer> _outTasks, 
		String streamId, WorkerData workerData) {
	this.topology_context = _topology_context;
	this.out_fields = _out_fields;
	this.thrift_grouping = _thrift_grouping;
	this.streamId = streamId;

	this.out_tasks = new ArrayList<Integer>();
	this.out_tasks.addAll(_outTasks);
	Collections.sort(this.out_tasks);

	this.local_tasks = _topology_context.getThisWorkerTasks();
	this.fields = Thrift.groupingType(thrift_grouping);
	this.grouptype = this.parseGroupType(workerData);
	
	String id = _topology_context.getThisTaskId() + ":" + streamId;
	LOG.info(id + " grouptype is " + grouptype);

}
 
Developer: zhangjunfang, Project: jstorm-0.9.6.3-, Lines: 21, Source: MkGrouper.java


Example 3: WorkerHeartbeatRunable

import com.alibaba.jstorm.daemon.worker.WorkerData; // import the required package/class
public WorkerHeartbeatRunable(WorkerData workerData) {

    this.workerData = workerData;

    this.conf = workerData.getStormConf();
    this.worker_id = workerData.getWorkerId();
    this.port = workerData.getPort();
    this.topologyId = workerData.getTopologyId();
    this.task_ids = new CopyOnWriteArraySet<Integer>(workerData.getTaskids());
    this.shutdown = workerData.getShutdown();

    String key = Config.WORKER_HEARTBEAT_FREQUENCY_SECS;
    frequence = JStormUtils.parseInt(conf.get(key), 10);

    this.workerStates = new HashMap<String, LocalState>();
}
 
Developer: kkllwww007, Project: jstrom, Lines: 17, Source: WorkerHeartbeatRunable.java
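
The constructor above reads the heartbeat period from the configuration and falls back to 10 seconds. A hedged sketch of overriding it, assuming the backtype.storm.Config class that this jstorm line uses (Config.WORKER_HEARTBEAT_FREQUENCY_SECS is the key referenced in the snippet):

import java.util.HashMap;
import java.util.Map;

import backtype.storm.Config;

// Sketch only: with this entry, JStormUtils.parseInt(conf.get(key), 10) in the
// constructor above yields 5 instead of the 10-second default.
static Map<Object, Object> heartbeatConf() {
    Map<Object, Object> conf = new HashMap<Object, Object>();
    conf.put(Config.WORKER_HEARTBEAT_FREQUENCY_SECS, 5);
    return conf;
}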


Example 4: MkLocalShuffer

import com.alibaba.jstorm.daemon.worker.WorkerData; // import the required package/class
public MkLocalShuffer(List<Integer> workerTasks, List<Integer> allOutTasks,
                      WorkerData workerData) {
    super(workerData);
    List<Integer> localOutTasks = new ArrayList<Integer>();

    for (Integer outTask : allOutTasks) {
        if (workerTasks.contains(outTask)) {
            localOutTasks.add(outTask);
        }
    }
    this.workerData = workerData;
    intervalCheck = new IntervalCheck();
    intervalCheck.setInterval(60);

    if (localOutTasks.size() != 0) {
        this.outTasks = localOutTasks;
        isLocal = true;
    } else {
        this.outTasks = new ArrayList<Integer>();
        this.outTasks.addAll(allOutTasks);
        refreshLocalNodeTasks();
        isLocal = false;
    }
    randomrange = new RandomRange(outTasks.size());
}
 
Developer: kkllwww007, Project: jstrom, Lines: 26, Source: MkLocalShuffer.java


Example 5: MkGrouper

import com.alibaba.jstorm.daemon.worker.WorkerData; // import the required package/class
public MkGrouper(TopologyContext _topology_context, Fields _out_fields, Grouping _thrift_grouping, List<Integer> _outTasks, String streamId,
        WorkerData workerData) {
    this.topology_context = _topology_context;
    this.out_fields = _out_fields;
    this.thrift_grouping = _thrift_grouping;
    this.streamId = streamId;

    this.out_tasks = new ArrayList<Integer>();
    this.out_tasks.addAll(_outTasks);
    Collections.sort(this.out_tasks);

    this.local_tasks = _topology_context.getThisWorkerTasks();
    this.fields = Thrift.groupingType(thrift_grouping);
    this.grouptype = this.parseGroupType(workerData);

    String id = _topology_context.getThisTaskId() + ":" + streamId;
    LOG.info(id + " grouptype is " + grouptype + ", out_tasks is " + out_tasks + ", local_tasks" + local_tasks);

}
 
Developer: kkllwww007, Project: jstrom, Lines: 20, Source: MkGrouper.java


Example 6: MkLocalShuffer

import com.alibaba.jstorm.daemon.worker.WorkerData; // import the required package/class
public MkLocalShuffer(List<Integer> workerTasks, List<Integer> allOutTasks, WorkerData workerData) {
    super(workerData);
    List<Integer> localOutTasks = new ArrayList<>();
    allTargetTasks.addAll(allOutTasks);

    for (Integer outTask : allOutTasks) {
        if (workerTasks.contains(outTask)) {
            localOutTasks.add(outTask);
        }
    }
    this.workerData = workerData;
    intervalCheck = new IntervalCheck();
    intervalCheck.setInterval(60);

    if (localOutTasks.size() != 0) {
        this.outTasks = localOutTasks;
        isLocal = true;
    } else {
        this.outTasks = new ArrayList<>();
        this.outTasks.addAll(allOutTasks);
        refreshLocalNodeTasks();
        isLocal = false;
    }
    randomrange = new RandomRange(outTasks.size());
}
 
Developer: alibaba, Project: jstorm, Lines: 26, Source: MkLocalShuffer.java


Example 7: MkGrouper

import com.alibaba.jstorm.daemon.worker.WorkerData; // import the required package/class
public MkGrouper(TopologyContext _topology_context, Fields _out_fields, Grouping _thrift_grouping,
                 String targetComponent, String streamId, WorkerData workerData) {
    this.topologyContext = _topology_context;
    this.outFields = _out_fields;
    this.thriftGrouping = _thrift_grouping;
    this.streamId = streamId;
    this.targetComponent = targetComponent;

    List<Integer> outTasks = topologyContext.getComponentTasks(targetComponent);
    this.outTasks = new ArrayList<>();
    this.outTasks.addAll(outTasks);
    Collections.sort(this.outTasks);

    this.localTasks = _topology_context.getThisWorkerTasks();
    this.fields = Thrift.groupingType(thriftGrouping);
    this.groupType = this.parseGroupType(workerData);

    String id = _topology_context.getThisTaskId() + ":" + streamId;
    LOG.info(id + " groupType is " + groupType + ", outTasks is " + this.outTasks + ", localTasks" + localTasks);

}
 
Developer: alibaba, Project: jstorm, Lines: 22, Source: MkGrouper.java


Example 8: StormMetricReporter

import com.alibaba.jstorm.daemon.worker.WorkerData; // import the required package/class
private StormMetricReporter(MetricRegistry registry,
                            Logger logger,
                            Marker marker,
                            TimeUnit rateUnit,
                            TimeUnit durationUnit,
                            MetricFilter filter,
                            WorkerData workerData) {
    super(registry, "logger-reporter", filter, rateUnit, durationUnit);
    this.logger = logger;
    this.marker = marker;
    this.workerData = workerData;
}
 
Developer: zhangjunfang, Project: jstorm-0.9.6.3-, Lines: 13, Source: StormMetricReporter.java


Example 9: MetricReporter

import com.alibaba.jstorm.daemon.worker.WorkerData; // import the required package/class
public MetricReporter(WorkerData workerData) {
	this.workerData = workerData;

	reporter1Minute = StormMetricReporter.forRegistry(Metrics.getMetrics())
			.outputTo(LoggerFactory.getLogger(MetricReporter.class))
			.convertRatesTo(TimeUnit.SECONDS)
			.convertDurationsTo(TimeUnit.MILLISECONDS)
			.setWorkerData(workerData).build();

	reporter10Minute = Slf4jReporter.forRegistry(Metrics.getJstack())
			.outputTo(LoggerFactory.getLogger(MetricReporter.class))
			.convertRatesTo(TimeUnit.SECONDS)
			.convertDurationsTo(TimeUnit.MILLISECONDS).build();

}
 
Developer: zhangjunfang, Project: jstorm-0.9.6.3-, Lines: 16, Source: MetricReporter.java
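
The field names above (reporter1Minute, reporter10Minute) suggest the two reporters are scheduled at 1-minute and 10-minute periods. Assuming they follow the standard Dropwizard ScheduledReporter API, which the super(registry, "logger-reporter", filter, rateUnit, durationUnit) call in Example 8 points to, starting them might look roughly like this (a sketch, not code from the project):

import java.util.concurrent.TimeUnit;

// Sketch only: ScheduledReporter.start(period, unit) begins periodic reporting.
reporter1Minute.start(1, TimeUnit.MINUTES);
reporter10Minute.start(10, TimeUnit.MINUTES);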


Example 10: WorkerHeartbeatRunable

import com.alibaba.jstorm.daemon.worker.WorkerData; // import the required package/class
public WorkerHeartbeatRunable(WorkerData workerData) {

    this.workerData = workerData;

    this.conf = workerData.getConf();
    this.worker_id = workerData.getWorkerId();
    this.port = workerData.getPort();
    this.topologyId = workerData.getTopologyId();
    this.task_ids = new CopyOnWriteArraySet<Integer>(workerData.getTaskids());
    this.active = workerData.getActive();

    String key = Config.WORKER_HEARTBEAT_FREQUENCY_SECS;
    frequence = JStormUtils.parseInt(conf.get(key), 10);
}
 
Developer: zhangjunfang, Project: jstorm-0.9.6.3-, Lines: 16, Source: WorkerHeartbeatRunable.java


Example 11: TaskHeartbeatRunable

import com.alibaba.jstorm.daemon.worker.WorkerData; // import the required package/class
public TaskHeartbeatRunable(WorkerData workerData) {

    this.zkCluster = workerData.getZkCluster();
    this.topology_id = workerData.getTopologyId();
    this.uptime = new UptimeComputer();
    this.storm_conf = workerData.getStormConf();
    this.active = workerData.getActive();

    String key = Config.TASK_HEARTBEAT_FREQUENCY_SECS;
    Object time = storm_conf.get(key);
    frequence = JStormUtils.parseInt(time, 10);
}
 
Developer: zhangjunfang, Project: jstorm-0.9.6.3-, Lines: 15, Source: TaskHeartbeatRunable.java


Example 12: Task

import com.alibaba.jstorm.daemon.worker.WorkerData; // import the required package/class
public Task(WorkerData workerData, int taskId) throws Exception {
	openOrPrepareWasCalled = new Atom(Boolean.valueOf(false));

	this.workerData = workerData;
	this.topologyContext = workerData.getContextMaker()
			.makeTopologyContext(workerData.getSysTopology(), taskId,
					openOrPrepareWasCalled);
	this.userContext = workerData.getContextMaker().makeTopologyContext(
			workerData.getRawTopology(), taskId, openOrPrepareWasCalled);
	this.taskid = taskId;
	this.componentid = topologyContext.getThisComponentId();

	this.taskStatus = new TaskStatus();
	this.taskTransfer = getSendingTransfer(workerData);
	this.innerTaskTransfer = workerData.getInnerTaskTransfer();
	this.deserializeQueues = workerData.getDeserializeQueues();
	this.topologyid = workerData.getTopologyId();
	this.context = workerData.getContext();
	this.workHalt = workerData.getWorkHalt();
	this.zkCluster = new StormZkClusterState(workerData.getZkClusterstate());

	this.stormConf = Common.component_conf(workerData.getStormConf(),
			topologyContext, componentid);

	WorkerClassLoader.switchThreadContext();
	// get real task object -- spout/bolt/spoutspec
	this.taskObj = Common.get_task_object(topologyContext.getRawTopology(),
			componentid, WorkerClassLoader.getInstance());
	WorkerClassLoader.restoreThreadContext();
	int samplerate = StormConfig.sampling_rate(stormConf);
	this.taskStats = new CommonStatsRolling(samplerate);

	LOG.info("Loading task " + componentid + ":" + taskid);
}
 
Developer: zhangjunfang, Project: jstorm-0.9.6.3-, Lines: 35, Source: Task.java


Example 13: getSendingTransfer

import com.alibaba.jstorm.daemon.worker.WorkerData; // import the required package/class
private TaskTransfer getSendingTransfer(WorkerData workerData) {

    // sending tuple's serializer
    KryoTupleSerializer serializer = new KryoTupleSerializer(
            workerData.getStormConf(), topologyContext);

    String taskName = JStormServerUtils.getName(componentid, taskid);
    // the Task sends all tuples through this object
    return new TaskTransfer(taskName, serializer, taskStatus, workerData);
}
 
Developer: zhangjunfang, Project: jstorm-0.9.6.3-, Lines: 11, Source: Task.java


Example 14: TaskTransfer

import com.alibaba.jstorm.daemon.worker.WorkerData; // import the required package/class
public TaskTransfer(String taskName, 
		KryoTupleSerializer serializer, TaskStatus taskStatus,
		WorkerData workerData) {
	this.taskName = taskName;
	this.serializer = serializer;
	this.taskStatus = taskStatus;
	this.storm_conf = workerData.getConf();
	this.transferQueue = workerData.getTransferQueue();
	this.innerTaskTransfer = workerData.getInnerTaskTransfer();

	int queue_size = Utils.getInt(storm_conf
			.get(Config.TOPOLOGY_EXECUTOR_SEND_BUFFER_SIZE));
	WaitStrategy waitStrategy = (WaitStrategy) Utils
			.newInstance((String) storm_conf
					.get(Config.TOPOLOGY_DISRUPTOR_WAIT_STRATEGY));
	this.serializeQueue = DisruptorQueue.mkInstance(taskName, ProducerType.MULTI, 
			queue_size, waitStrategy);
	this.serializeQueue.consumerStarted();
	
	String taskId = taskName.substring(taskName.indexOf(":") + 1);
	Metrics.registerQueue(taskName, MetricDef.SERIALIZE_QUEUE, serializeQueue, taskId, Metrics.MetricType.TASK);
	timer = Metrics.registerTimer(taskName, MetricDef.SERIALIZE_TIME, taskId, Metrics.MetricType.TASK); 

	serializeThread = new AsyncLoopThread(new TransferRunnable());
	LOG.info("Successfully start TaskTransfer thread");

}
 
Developer: zhangjunfang, Project: jstorm-0.9.6.3-, Lines: 28, Source: TaskTransfer.java


Example 15: Task

import com.alibaba.jstorm.daemon.worker.WorkerData; // import the required package/class
@SuppressWarnings("rawtypes")
public Task(WorkerData workerData, int taskId) throws Exception {
	openOrPrepareWasCalled = new Atom(Boolean.valueOf(false));

	this.workerData = workerData;
	this.topologyContext = workerData.getContextMaker()
			.makeTopologyContext(workerData.getSysTopology(), taskId,
					openOrPrepareWasCalled);
	this.userContext = workerData.getContextMaker().makeTopologyContext(
			workerData.getRawTopology(), taskId, openOrPrepareWasCalled);
	this.taskid = taskId;
	this.componentid = topologyContext.getThisComponentId();

	this.taskStatus = new TaskStatus();
	this.taskTransfer = getSendingTransfer(workerData);
	this.innerTaskTransfer = workerData.getInnerTaskTransfer();
	this.deserializeQueues = workerData.getDeserializeQueues();
	this.topologyid = workerData.getTopologyId();
	this.context = workerData.getContext();
	this.workHalt = workerData.getWorkHalt();
	this.zkCluster = new StormZkClusterState(workerData.getZkClusterstate());

	this.stormConf = Common.component_conf(workerData.getStormConf(),
			topologyContext, componentid);

	WorkerClassLoader.switchThreadContext();
	// get real task object -- spout/bolt/spoutspec
	this.taskObj = Common.get_task_object(topologyContext.getRawTopology(),
			componentid, WorkerClassLoader.getInstance());
	WorkerClassLoader.restoreThreadContext();
	int samplerate = StormConfig.sampling_rate(stormConf);
	this.taskStats = new CommonStatsRolling(samplerate);

	LOG.info("Loading task " + componentid + ":" + taskid);
}
 
Developer: songtk, Project: learn_jstorm, Lines: 36, Source: Task.java


Example 16: JStormMetricsReporter

import com.alibaba.jstorm.daemon.worker.WorkerData; // import the required package/class
public JStormMetricsReporter(Object role) {
    LOG.info("starting jstorm metrics reporter");
    if (role instanceof WorkerData) {
        WorkerData workerData = (WorkerData) role;
        this.conf = workerData.getStormConf();
        this.topologyId = (String) conf.get(Config.TOPOLOGY_ID);
        this.port = workerData.getPort();
        this.isInWorker = true;
    } else if (role instanceof NimbusData) {
        NimbusData nimbusData = (NimbusData) role;
        this.conf = nimbusData.getConf();
        this.topologyId = JStormMetrics.NIMBUS_METRIC_KEY;
    }
    this.host = JStormMetrics.getHost();
    this.enableMetrics = JStormMetrics.isEnabled();
    if (!enableMetrics) {
        LOG.warn("***** topology metrics is disabled! *****");
    } else {
        LOG.info("topology metrics is enabled.");
    }

    this.checkMetaThreadCycle = 30;
    // flush metric snapshots when time is aligned, check every sec.
    this.flushMetricThreadCycle = 1;

    LOG.info("check meta thread freq:{}, flush metrics thread freq:{}", checkMetaThreadCycle, flushMetricThreadCycle);

    this.localMode = StormConfig.local_mode(conf);
    this.clusterName = ConfigExtension.getClusterName(conf);
    LOG.info("done.");
}
 
Developer: kkllwww007, Project: jstrom, Lines: 32, Source: JStormMetricsReporter.java
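
A hedged usage sketch for the constructor above: the same reporter class is built from either role object and branches on its runtime type. The workerData and nimbusData variables are assumed to exist in the respective daemons; this is illustration, not project code.

// Sketch only: inside a worker process vs. inside nimbus.
JStormMetricsReporter workerReporter = new JStormMetricsReporter(workerData); // role is a WorkerData
JStormMetricsReporter nimbusReporter = new JStormMetricsReporter(nimbusData); // role is a NimbusData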


Example 17: outbound_components

import com.alibaba.jstorm.daemon.worker.WorkerData; // import the required package/class
/**
 * get current task's output <Stream_id, <componentId, MkGrouper>>
 * 
 * @param topology_context
 * @return
 */
public static Map<String, Map<String, MkGrouper>> outbound_components(TopologyContext topology_context, WorkerData workerData) {
    Map<String, Map<String, MkGrouper>> rr = new HashMap<String, Map<String, MkGrouper>>();

    // <Stream_id,<component,Grouping>>
    Map<String, Map<String, Grouping>> output_groupings = topology_context.getThisTargets();

    for (Entry<String, Map<String, Grouping>> entry : output_groupings.entrySet()) {

        String stream_id = entry.getKey();
        Map<String, Grouping> component_grouping = entry.getValue();

        Fields out_fields = topology_context.getThisOutputFields(stream_id);

        Map<String, MkGrouper> componentGrouper = new HashMap<String, MkGrouper>();

        for (Entry<String, Grouping> cg : component_grouping.entrySet()) {

            String component = cg.getKey();
            Grouping tgrouping = cg.getValue();

            List<Integer> outTasks = topology_context.getComponentTasks(component);
            // ATTENTION: If topology set one component parallelism as 0
            // so we don't need send tuple to it
            if (outTasks.size() > 0) {
                MkGrouper grouper = new MkGrouper(topology_context, out_fields, tgrouping, outTasks, stream_id, workerData);
                componentGrouper.put(component, grouper);
            }
            LOG.info("outbound_components, outTasks=" + outTasks + " for task-" + topology_context.getThisTaskId());
        }
        if (componentGrouper.size() > 0) {
            rr.put(stream_id, componentGrouper);
        }
    }
    return rr;
}
 
Developer: kkllwww007, Project: jstrom, Lines: 42, Source: Common.java
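
A minimal sketch of how a task might consume this helper, walking the <Stream_id, <componentId, MkGrouper>> map described in the Javadoc above; topologyContext and workerData are assumed to come from the surrounding task setup.

import java.util.Map;

// Sketch only: iterate the groupers returned by Common.outbound_components.
Map<String, Map<String, MkGrouper>> outbound = Common.outbound_components(topologyContext, workerData);
for (Map.Entry<String, Map<String, MkGrouper>> streamEntry : outbound.entrySet()) {
    String streamId = streamEntry.getKey();
    for (Map.Entry<String, MkGrouper> componentEntry : streamEntry.getValue().entrySet()) {
        String downstreamComponent = componentEntry.getKey();
        MkGrouper grouper = componentEntry.getValue();
        // grouper decides which downstream task ids receive a tuple emitted on streamId
    }
}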


Example 18: MkLocalFirst

import com.alibaba.jstorm.daemon.worker.WorkerData; // import the required package/class
public MkLocalFirst(List<Integer> workerTasks, List<Integer> allOutTasks, WorkerData workerData) {
    super(workerData);

    intervalCheck = new IntervalCheck();
    intervalCheck.setInterval(10);

    this.allOutTasks.addAll(allOutTasks);
    this.workerData = workerData;

    List<Integer> localWorkerOutTasks = new ArrayList<Integer>();

    for (Integer outTask : allOutTasks) {
        if (workerTasks.contains(outTask)) {
            localWorkerOutTasks.add(outTask);
        }
    }

    remoteOutTasks.addAll(allOutTasks);
    if (localWorkerOutTasks.size() != 0) {
        isLocalWorkerAvail = true;
        localOutTasks.addAll(localWorkerOutTasks);
    } else {
        isLocalWorkerAvail = false;
    }
    randomrange = new RandomRange(localOutTasks.size());
    remoteRandomRange = new RandomRange(remoteOutTasks.size());

    LOG.info("Local out tasks:" + localOutTasks + ", Remote out tasks:" + remoteOutTasks);
}
 
Developer: kkllwww007, Project: jstrom, Lines: 30, Source: MkLocalFirst.java


Example 19: Task

import com.alibaba.jstorm.daemon.worker.WorkerData; // import the required package/class
@SuppressWarnings("rawtypes")
public Task(WorkerData workerData, int taskId) throws Exception {
    openOrPrepareWasCalled = new Atom(Boolean.valueOf(false));

    this.workerData = workerData;
    this.topologyContext = workerData.getContextMaker().makeTopologyContext(workerData.getSysTopology(), taskId, openOrPrepareWasCalled);
    this.userContext = workerData.getContextMaker().makeTopologyContext(workerData.getRawTopology(), taskId, openOrPrepareWasCalled);
    this.taskId = taskId;
    this.componentId = topologyContext.getThisComponentId();
    this.stormConf = Common.component_conf(workerData.getStormConf(), topologyContext, componentId);

    this.taskStatus = new TaskStatus();

    this.innerTaskTransfer = workerData.getInnerTaskTransfer();
    this.deserializeQueues = workerData.getDeserializeQueues();
    this.topologyId = workerData.getTopologyId();
    this.context = workerData.getContext();
    this.workHalt = workerData.getWorkHalt();
    this.zkCluster = workerData.getZkCluster();
    this.taskStats = new TaskBaseMetric(topologyId, componentId, taskId,
            ConfigExtension.isEnableMetrics(workerData.getStormConf()));

    LOG.info("Begin to deserialize taskObj " + componentId + ":" + this.taskId);

    WorkerClassLoader.switchThreadContext();
    // get real task object -- spout/bolt/spoutspec
    this.taskObj = Common.get_task_object(topologyContext.getRawTopology(), componentId, WorkerClassLoader.getInstance());
    WorkerClassLoader.restoreThreadContext();

    isTaskBatchTuple = ConfigExtension.isTaskBatchTuple(stormConf);
    LOG.info("Transfer/receive in batch mode :" + isTaskBatchTuple);

    LOG.info("Loading task " + componentId + ":" + this.taskId);
}
 
Developer: kkllwww007, Project: jstrom, Lines: 35, Source: Task.java


Example 20: mkTaskSending

import com.alibaba.jstorm.daemon.worker.WorkerData; // import the required package/class
private TaskTransfer mkTaskSending(WorkerData workerData) {
    // sending tuple's serializer
    KryoTupleSerializer serializer = new KryoTupleSerializer(workerData.getStormConf(), topologyContext);

    String taskName = JStormServerUtils.getName(componentId, taskId);
    // Task sending all tuples through this Object
    TaskTransfer taskTransfer;
    if (isTaskBatchTuple)
        taskTransfer = new TaskBatchTransfer(this, taskName, serializer, taskStatus, workerData);
    else
        taskTransfer = new TaskTransfer(this, taskName, serializer, taskStatus, workerData);
    return taskTransfer;
}
 
Developer: kkllwww007, Project: jstrom, Lines: 14, Source: Task.java



Note: the com.alibaba.jstorm.daemon.worker.WorkerData examples in this article were collected from source-code and documentation hosts such as GitHub and MSDocs. The snippets come from open-source projects contributed by their respective authors, who retain copyright over the source; consult each project's license before distributing or reusing the code, and do not reproduce this compilation without permission.

