I found that on my code base Keras is much faster than tf.keras; in my eyes, tf.keras is unacceptably slow.
I create an equivalent neural network twice, once with tf.keras and once with Keras,
then run a simplified loop that only calls predict on the OpenAI Gym MountainCar-v0 environment.
So my question is: am I making a big mistake in how I use the frameworks, or is there a real difference in the underlying code base?
Results:
tf.keras: 10000/10000 [06:53<00:00, 24.21it/s]
Keras: 10000/10000 [00:04<00:00, 2274.80it/s]
Code Base:
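Both code bases share the same Gym environment; its creation isn't shown above, but a minimal setup looks like this:
import gym

env = gym.make("MountainCar-v0")  # observation shape (2,), 3 discrete actions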
Keras version: 2.3.1
import keras
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.optimizers import Adam
model = Sequential()
model.add(Dense(24, input_dim=env.observation_space.shape[0], activation="relu"))
model.add(Dense(24, activation="relu"))
model.add(Dense(env.action_space.n, activation='linear'))
model.compile(loss='mse', optimizer=Adam(lr=0.001))
print("Keras version: ",keras.__version__)
tf.keras version: 2.2.4-tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.optimizers import Adam
model = keras.Sequential([
Dense(24, input_dim=env.observation_space.shape[0], activation='relu'),
Dense(24, activation='relu'),
Dense(env.action_space.n, activation='linear')
])
model.compile(loss='mse', optimizer=Adam(lr=0.001))
print("tf.keras version: ",keras.__version__)
Test loop:
from tqdm import tqdm

for a in tqdm(range(10000)):
    state = env.reset()
    # single-sample predict on the reset state, shape (1, 2) for MountainCar-v0
    model.predict(state.reshape(-1, env.observation_space.shape[0]))
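To isolate the per-call overhead, I also timed a single predict call directly; a minimal sketch, assuming the env and one of the models built above (the 100-iteration count is arbitrary):

import time

state = env.reset()
x = state.reshape(-1, env.observation_space.shape[0])

model.predict(x)  # warm-up call so one-time graph/session setup is not timed

start = time.perf_counter()
for _ in range(100):
    model.predict(x)
elapsed = time.perf_counter() - start
print("avg seconds per predict call:", elapsed / 100)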