Reading CSV in Julia is slow compared to Python

Reading large text/CSV files in Julia takes a long time compared to Python. Here are the times to read a file that is 486.6 MB on disk, with 153,895 rows and 644 columns.

Python 3.3 example

import pandas as pd
import time

start = time.time()
# Raw string (r"...") so backslashes in the Windows path are not
# interpreted as escape sequences.
myData = pd.read_csv(r"C:\myFile.txt", sep="|", header=None, low_memory=False)
print(time.time() - start)

Output: 19.90

R 3.0.2 example

system.time(myData <- read.delim("C:/myFile.txt", sep = "|", header = FALSE,
    stringsAsFactors = FALSE, na.strings = ""))

Output:
User    System  Elapsed
181.13  1.07    182.32

Julia 0.2.0 (Julia Studio 0.4.4) example # 1

using DataFrames
timing = @time myData = readtable("C:/myFile.txt", separator='|', header=false)

Output:
elapsed time: 80.35 seconds (10319624244 bytes allocated)

Julia 0.2.0 (Julia Studio 0.4.4) example # 2

timing = @time myData = readdlm("C:/myFile.txt", '|', header=false)

Output:
elapsed time: 65.96 seconds (9087413564 bytes allocated)
  1. Julia is faster than R, but quite slow compared to Python. What can I do differently to speed up reading a large text file?

  2. A separate issue: in Julia the in-memory size is 18x the on-disk file size, but in Python it is only 2.5x. In Matlab, which I have found to be the most memory-efficient for large files, it is 2x the on-disk size. Is there a particular reason for Julia's large in-memory footprint?



1 Reply


The best answer is probably that I'm not as good a programmer as Wes (the author of Pandas).

In general, the code in DataFrames is much less well-optimized than the code in Pandas. I'm confident that we can catch up, but it will take some time as there's a lot of basic functionality that we need to implement first. Since there's so much that needs to be built in Julia, I tend to focus on doing things in three parts: (1) build any version, (2) build a correct version, (3) build a fast, correct version. For the work I do, Julia often doesn't offer any versions of essential functionality, so my work gets focused on (1) and (2). As more of the tools I need get built, it'll be easier to focus on performance.

As for memory usage, I think the answer is that the data structures we use when parsing tabular data are much less efficient than the ones Pandas uses. If I knew the internals of Pandas better, I could list the places where we're less efficient, but for now I'll just speculate that one obvious failing is that we read the whole dataset into memory rather than grabbing chunks from disk. That certainly can be avoided, and there are issues open for doing so. It's just a matter of time.
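To make the chunking idea concrete, here is a minimal sketch of streaming a '|'-delimited file row by row instead of materializing the whole thing. It uses present-day Julia syntax rather than the 0.2 API above, assumes no quoted fields or embedded newlines, and the function name and callback are hypothetical, not part of DataFrames:

# A row-streaming sketch: hand each parsed row to a callback so
# memory use stays proportional to one line, not the whole file.
# Assumes '|'-delimited data with no quoting (an assumption, not
# something DataFrames provides under this name).
function stream_rows(process_row, path; sep='|')
    open(path, "r") do io
        for line in eachline(io)
            process_row(split(line, sep))
        end
    end
end

# Usage: check that every row has the expected 644 columns.
stream_rows("C:/myFile.txt") do fields
    @assert length(fields) == 644
end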

On that note, the readtable code is fairly easy to read. The surest way to make readtable faster is to get out the Julia profiler and start fixing the performance flaws it uncovers.
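For reference, a minimal profiling sketch might look like the following; note that on current Julia the sampling profiler lives in the Profile standard library and readdlm in DelimitedFiles, whereas both were in Base on the 0.2 release used above:

using Profile, DelimitedFiles

# Warm-up run so compilation time is not attributed to parsing.
readdlm("C:/myFile.txt", '|')

@profile readdlm("C:/myFile.txt", '|')
Profile.print()  # report showing where the samples landed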

