Open-source project name:

DeepFL/DeepFaultLocalization

Repository URL:

https://github.com/DeepFL/DeepFaultLocalization

Primary language:

Python 96.6%

Introduction:

DeepFL

DeepFL is a deep-learning-based fault localization technique. It implements two multi-layer perceptron variants (MLP and MLP2), one bidirectional recurrent neural network variant (BiRNN), and two tailored MLP variants in TensorFlow. The benchmark subjects come from Defects4J, an open-source repository that provides buggy versions and corresponding fixed versions of several projects. The dataset contains features of four dimensions: spectrum-based, mutation-based, complexity-based (code metrics), and textual-similarity-based information.

Requirements

  • Python 2/3 with Pandas and Numpy
  • Tensorflow

Dataset

The dataset can be downloaded from an online cloud drive. There are seven .tar.gz files, each representing one setting in our paper, as follows:

  • DeepFL.tar.gz: Dataset with all four feature dimensions above.

  • CrossDeepFL.tar.gz: Dataset with all four feature dimensions, in the cross-project scenario.

  • CrossValidation.tar.gz: Dataset with all four feature dimensions, for the 10-fold cross-validation evaluation.

  • DeepFL-Spectrum.tar.gz: Dataset with three feature dimensions, i.e., mutation-based, complexity-based, and textual-similarity-based information.

  • DeepFL-Mutation.tar.gz: Dataset with three feature dimensions, i.e., spectrum-based, complexity-based, and textual-similarity-based information.

  • DeepFL-Metrics.tar.gz: Dataset with three feature dimensions, i.e., spectrum-based, mutation-based, and textual-similarity-based information.

  • DeepFL-Textual.tar.gz: Dataset with three feature dimensions, i.e., spectrum-based, mutation-based, and complexity-based information.

Please uncompress the .tar.gz files and put the extracted folders into a user-created directory, whose path is passed as a parameter to the commands below.
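The setup step above might look like this (all paths here are illustrative examples, not paths from the repository; adjust them to your machine):

```shell
# Create the user-level parent directory for the datasets (example path).
mkdir -p /home/DeepLearningData
cd /home/DeepLearningData

# Extract each downloaded archive; each creates its own folder,
# e.g. ./DeepFL, ./CrossDeepFL, ...
tar -xzf /path/to/downloads/DeepFL.tar.gz
tar -xzf /path/to/downloads/CrossDeepFL.tar.gz
```

The parent directory (/home/DeepLearningData in this sketch) is what you later pass as the first argument to main.py.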

Running DeepFL

The command to run DeepFL for each version is as follows:

$ cd DeepFaultLocalization
$ python main.py /absolute/path/to/ParentDirofDataset /absolute/path/to/Result $subject $version $model $tech $loss $epoch $dump_step

The parameters are as follows:

  • /absolute/path/to/ParentDirofDataset: The absolute path of the parent directory containing all datasets. For example, if the dataset is DeepFL, its directory can be /home/DeepLearningData/DeepFL ("/home/DeepLearningData/" is created by the user, and "DeepFL" is placed inside it).
  • /absolute/path/to/Result: The directory of the results.
  • $subject: The subject name, which can be Time, Chart, Lang, Math, Mockito or Closure.
  • $version: The version number of the subject. Note that the maximum version numbers of the subjects above are 27, 26, 65, 106, 38, and 133, respectively.
  • $model: The model name, which can be mlp, mlp2, birnn, mlp_dfl_1, or mlp_dfl_2, representing a multi-layer perceptron with one hidden layer, a multi-layer perceptron with two hidden layers, a bidirectional recurrent neural network, and the two tailored MLP variants, respectively.
  • $tech: The feature dimensions to use, corresponding to the dataset names: DeepFL, DeepFL-Metrics, DeepFL-Mutation, DeepFL-Spectrum, DeepFL-Textual, or CrossDeepFL.
  • $loss: The loss function name, which can be softmax or epairwise.
  • $epoch: The number of training epochs.
  • $dump_step: The epoch interval at which results are written to the result file. For example, if $dump_step = 10, the results at epochs 10, 20, 30, ... will be written to the files.

Please note that CrossValidation is slightly different from the others, since the data of all subjects has been mixed and then split into 10 folds. To reuse the command above, set $subject to "10fold", $version to a value from 1 to 10, and $tech to "CrossValidation". Also, please use only the mlp_dfl_2 model and the softmax loss function on CrossValidation, in accordance with the research question in our paper.
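Putting the parameters together, a single-version run might look like the following (the paths, version number, and epoch count here are illustrative placeholders, not values prescribed by the paper):

```shell
# Arguments, in order: dataset parent dir, result dir, subject, version,
# model, tech, loss, epochs, dump step (all paths are examples).
python main.py /home/DeepLearningData /home/DeepFLResults \
    Lang 3 mlp_dfl_2 DeepFL softmax 50 10
```

To cover a whole subject, such a command is typically looped over all of its version numbers.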

Results statistics

After running all subject versions, run the following command to compute the five measurements Top-1, Top-3, Top-5, MFR, and MAR:

python rank_parser.py /absolute/path/to/ParentDirofDataset /absolute/path/to/Result $tech $model $loss $epoch

Please note that, due to the randomly initialized parameters, the results may differ slightly from those in our paper.
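For intuition, the five measurements can be computed from the per-version ranks of the buggy elements roughly as follows. This is an illustrative sketch of the conventional definitions (Top-k counts versions whose best-ranked buggy element is within the top k; MFR averages the first buggy rank; MAR averages the mean buggy rank), not the repository's rank_parser.py implementation:

```python
def summarize(first_ranks, all_ranks):
    """Sketch of the five fault-localization measurements.

    first_ranks: best (lowest) rank of any buggy element, one per version.
    all_ranks:   list of rank lists, one per version, covering all buggy
                 elements of that version.
    """
    # Top-k: number of versions localized within the top k positions.
    top = lambda k: sum(1 for r in first_ranks if r <= k)
    # MFR: mean of the first (best) buggy rank across versions.
    mfr = sum(first_ranks) / len(first_ranks)
    # MAR: mean over versions of the average rank of all buggy elements.
    mar = sum(sum(rs) / len(rs) for rs in all_ranks) / len(all_ranks)
    return {"Top-1": top(1), "Top-3": top(3), "Top-5": top(5),
            "MFR": mfr, "MAR": mar}
```

Lower MFR and MAR, and higher Top-k counts, indicate better localization.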



