Welcome to OGeek Q&A Community for programmer and developer-Open, Learning and Share

python - Keras LSTM predicted timeseries squashed and shifted

I'm trying to get some hands-on experience with Keras during the holidays, and I thought I'd start out with the textbook example of timeseries prediction on stock data. So what I'm trying to do is: given the last 48 hours' worth of average price changes (percent since previous), predict what the average price change of the coming hour is.

However, when verifying against the test set (or even the training set) the amplitude of the predicted series is way off, and sometimes is shifted to be either always positive or always negative, i.e., shifted away from the 0% change, which I think would be correct for this kind of thing.

I came up with the following minimal example to show the issue:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import LSTM, Dense

# pandas.DataFrame.from_csv has been removed; read_csv with index_col is the equivalent
df = pd.read_csv('test-data-01.csv', header=0, index_col=0)
df['pct'] = df.value.pct_change(periods=1)

seq_len = 48
vals = df.pct.values[1:]  # first pct change is NaN, skip it
sequences = []
for i in range(0, len(vals) - seq_len):
    sx = vals[i:i+seq_len].reshape(seq_len, 1)
    sy = vals[i+seq_len]
    sequences.append((sx, sy))

row = -24
trainSeqs = sequences[:row]
testSeqs = sequences[row:]

trainX = np.array([i[0] for i in trainSeqs])
trainy = np.array([i[1] for i in trainSeqs])

model = Sequential()
model.add(LSTM(25, batch_input_shape=(1, seq_len, 1)))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
model.fit(trainX, trainy, epochs=1, batch_size=1, verbose=1, shuffle=True)

pred = []
for s in trainSeqs:
    pred.append(model.predict(s[0].reshape(1, seq_len, 1)))
pred = np.array(pred).flatten()

plt.plot(pred)
plt.plot([i[1] for i in trainSeqs])
plt.axis([2500, 2550, -0.03, 0.03])

As you can see, I create the training and testing sequences by taking the last 48 hours plus the next step as a tuple, then advancing one hour and repeating the procedure. The model is very simple: one LSTM layer followed by one dense layer.

I would have expected the plot of individual predicted points to overlap pretty nicely with the plot of training sequences (after all, this is the same set they were trained on), and to roughly match the test sequences. However, I get the following result on training data:

  • Orange: true data
  • Blue: predicted data

[Image: predicted vs. true values on the training set]

Any idea what might be going on? Did I misunderstand something?

Update: to better show what I mean by "shifted and squashed", I also plotted the predicted values, shifted back to match the real data and multiplied to match the amplitude.

plt.plot(pred * 12 - 0.03)
plt.plot([i[1] for i in trainSeqs])
plt.axis([2500, 2550, -0.03, 0.03])

[Image: rescaled prediction overlapping the true series]

As you can see, the rescaled prediction fits the real data nicely; it's just squashed and offset somehow, and I can't figure out why.


1 Reply


I presume you are overfitting, since the dimensionality of your data is 1, and an LSTM with 25 units seems rather complex for such a low-dimensional dataset. Here's a list of things that I would try:

  • Decreasing the LSTM dimension.
  • Adding some form of regularization to combat overfitting. For example, dropout might be a good choice.
  • Training for more epochs or changing the learning rate. The model might need more epochs or bigger updates to find the appropriate parameters.
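The first two suggestions can be sketched as follows. This is a minimal illustration, not the asker's exact setup: the unit count (8) and dropout rates (0.2) are arbitrary choices to show the idea, and it assumes the same input shape of 48 timesteps with 1 feature.

```python
# Sketch: a smaller LSTM with dropout, assuming input windows of
# shape (48 timesteps, 1 feature) as in the question.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

seq_len = 48
model = Sequential([
    LSTM(8, input_shape=(seq_len, 1),          # fewer units than the original 25
         dropout=0.2, recurrent_dropout=0.2),  # dropout on inputs and recurrent state
    Dense(1),
])
model.compile(loss='mse', optimizer='adam')
model.summary()
```

Using `input_shape` instead of `batch_input_shape` also frees you from the fixed batch size of 1, so you can train with a larger `batch_size` and more epochs cheaply.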

UPDATE. Let me summarize what we discussed in the comments section.

Just for clarification, the first plot doesn't show the predicted series for a validation set, but for the training set. Therefore, my first overfitting interpretation might be inaccurate. I think an appropriate question to ask would be: is it actually possible to predict the future price change from such a low-dimensional dataset? Machine learning algorithms aren't magical: they'll find patterns in the data only if they exist.

If the past price change alone is indeed not very informative of the future price change then:

  • Your model will learn to predict the mean of the price changes (probably something around 0), since that's the value that produces the lowest loss in absence of informative features.
  • The predictions might appear to be slightly "shifted" because the price change at timestep t+1 is slightly correlated with the price change at timestep t (but still, predicting something close to 0 is the safest choice). That is indeed the only pattern that I, as a non-expert, would be able to observe (i.e., that the value at timestep t+1 is sometimes similar to the one at timestep t).
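The first point can be checked numerically on toy data: for a series with no usable signal, the constant prediction that minimizes MSE is the sample mean. (The series below is synthetic; with real data you would substitute your own `pct` values.)

```python
# Toy check: among constant predictions, the sample mean minimizes MSE.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=0.001, scale=0.01, size=10_000)  # fake pct-change series, no signal

# Brute-force search over constant predictions on a fine grid
candidates = np.linspace(-0.01, 0.01, 201)
mses = [np.mean((y - c) ** 2) for c in candidates]
best = candidates[np.argmin(mses)]

print(best, y.mean())  # the best constant sits at (approximately) the mean
```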

If values at timesteps t and t+1 happened to be more correlated in general, then I presume that the model would be more confident about this correlation and the amplitude of the prediction would be bigger.
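One quick way to quantify how much signal actually links t and t+1 is the lag-1 autocorrelation of the series. The sketch below uses a synthetic AR(1) series so the expected answer is known; on real data you would compute the same statistic on the `pct` column instead.

```python
# Lag-1 autocorrelation check on a synthetic AR(1) series.
import numpy as np

rng = np.random.default_rng(1)
n, phi = 10_000, 0.3              # phi controls how correlated t and t+1 are
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal(scale=0.01)

lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(lag1)  # close to phi; a value near 0 would mean little to predict from
```

A lag-1 autocorrelation near zero on your real `pct` series would support the interpretation above: there is simply not much for the LSTM to latch onto, so it retreats to a small-amplitude prediction near the mean.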

