I'm not sure how to approach this. I have an existing model that seems to work, but I want to see whether I can improve the results by using embeddings. I trained a second model, generate embeddings with it, and then join the resulting numpy array to a dataframe.
What I don't know is how to make this work with my existing model: do I simply include the numpy array as extra columns in the array I already feed to it?
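To make it concrete, this is roughly what I mean by joining them (emb_array and df below are just placeholders standing in for my real embedding output and feature dataframe):

import numpy as np
import pandas as pd

# Dummy data just to illustrate the shapes; in reality emb_array comes from my
# embedding model and df holds my original features
df = pd.DataFrame(np.random.rand(100, 6), columns=[f"feat_{i}" for i in range(6)])
emb_array = np.random.rand(100, 8)   # (n_samples, emb_dim)

# The embeddings simply become extra columns next to the original features
emb_cols = [f"emb_{i}" for i in range(emb_array.shape[1])]
df_joined = pd.concat([df, pd.DataFrame(emb_array, columns=emb_cols, index=df.index)], axis=1)
print(df_joined.shape)   # (100, 14): 6 original features + 8 embedding dimensions

My existing model looks like this: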
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from keras.models import Sequential
from keras.layers import Dense

# Importing the dataset (whitespace-delimited text file)
dataset = np.genfromtxt("data.txt", delimiter=None)

# Splitting into features and target
# (assuming the first 6 columns are features and the last column is the target)
X = dataset[:, :6]
y = dataset[:, -1]

# Splitting the dataset into the Training set and Test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.08, random_state=0)

# Feature Scaling
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

# Initialising the ANN
model = Sequential()
# Adding the input layer and the first hidden layer
model.add(Dense(32, activation='relu', input_dim=6))
# Adding the second hidden layer
model.add(Dense(32, activation='relu'))
# Adding the third hidden layer
model.add(Dense(32, activation='relu'))
# Adding the output layer (single linear unit for regression)
model.add(Dense(1))

# Compiling the ANN
model.compile(optimizer='adam', loss='mean_squared_error')

# Fitting the ANN to the Training set
model.fit(X_train, y_train, batch_size=10, epochs=100)

y_pred = model.predict(X_test)
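To judge whether the embeddings actually help, my plan is to compare the test error of this baseline against the same model retrained with the embeddings appended, for example:

from sklearn.metrics import mean_squared_error, r2_score

# Baseline test error, to compare against the run with embeddings appended
print("MSE:", mean_squared_error(y_test, y_pred))
print("R2:", r2_score(y_test, y_pred))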
How can I append the embedding numpy array to the 'dataset' numpy array? Do I need to let Keras know that it is an embedding?
The tutorials I have seen build a separate embedding model and then join it to the main model during training. Unfortunately, in my case I have to precalculate the embeddings and then attach them to my training data.
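Concretely, is the right idea just to concatenate the precalculated embeddings to the features as extra columns and enlarge input_dim? A sketch of what I have in mind, reusing the imports above and assuming embeddings is my precalculated (n_samples, emb_dim) array aligned row-for-row with X:

# embeddings is assumed to be precalculated and row-aligned with X
X_aug = np.hstack([X, embeddings])

X_train, X_test, y_train, y_test = train_test_split(X_aug, y, test_size=0.08, random_state=0)

sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

model = Sequential()
# input_dim grows from 6 to 6 + emb_dim to cover the extra embedding columns
model.add(Dense(32, activation='relu', input_dim=X_aug.shape[1]))
model.add(Dense(32, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mean_squared_error')
model.fit(X_train, y_train, batch_size=10, epochs=100)

(I'm also unsure whether the embedding columns should be scaled together with the raw features; the sketch above just scales everything.)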
question from: https://stackoverflow.com/questions/65649586/how-can-i-incorporate-embeddings-into-a-existing-model