Your dataset may be too large to fit in the memory allocated to your process. In the pipeline we set up earlier, we loaded the entire dataset into memory, prepared the data, and then passed the working set to the training function. Instead, Keras provides an alternative training function (fit_generator) that pulls the data in batches. This lets us apply the transformations in the data pipeline to only a small portion of the data at a time (a multiple of batch_size).
During our experiments, we used batching for datasets such as DBPedia, Amazon reviews, AG news, and Yelp reviews.
The following code illustrates how to generate data batches and feed them to fit_generator.
def _data_generator(x, y, num_features, batch_size):
    """Generates batches of vectorized texts for training/validation.

    # Arguments
        x: np.matrix, feature matrix.
        y: np.ndarray, labels.
        num_features: int, number of features.
        batch_size: int, number of samples per batch.

    # Returns
        Yields feature and label data in batches.
    """
    num_samples = x.shape[0]
    num_batches = num_samples // batch_size
    if num_samples % batch_size:
        num_batches += 1

    while 1:
        for i in range(num_batches):
            start_idx = i * batch_size
            end_idx = (i + 1) * batch_size
            if end_idx > num_samples:
                end_idx = num_samples
            x_batch = x[start_idx:end_idx]
            y_batch = y[start_idx:end_idx]
            yield x_batch, y_batch

# Create training and validation generators.
training_generator = _data_generator(
    x_train, train_labels, num_features, batch_size)
validation_generator = _data_generator(
    x_val, val_labels, num_features, batch_size)

# Get number of training steps. This indicates the number of steps it takes
# to cover all samples in one epoch.
steps_per_epoch = x_train.shape[0] // batch_size
if x_train.shape[0] % batch_size:
    steps_per_epoch += 1

# Get number of validation steps.
validation_steps = x_val.shape[0] // batch_size
if x_val.shape[0] % batch_size:
    validation_steps += 1

# Train and validate model.
history = model.fit_generator(
    generator=training_generator,
    steps_per_epoch=steps_per_epoch,
    validation_data=validation_generator,
    validation_steps=validation_steps,
    callbacks=callbacks,
    epochs=epochs,
    verbose=2)  # Logs once per epoch.
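The snippet above relies on names produced by the earlier pipeline steps (x_train, train_labels, x_val, val_labels, num_features, batch_size, epochs, callbacks, and model). The sketch below is a minimal, self-contained setup for exercising that code with placeholder data and a trivial two-class model; the data shapes, hyperparameter values, and the model architecture are illustrative assumptions, not part of the original pipeline.

# Minimal setup sketch (assumed toy values; substitute your vectorized data and tuned model).
import numpy as np
from tensorflow import keras

num_features = 1000
batch_size = 128
epochs = 2

# Placeholder vectorized feature matrices and binary labels.
x_train = np.random.rand(1000, num_features).astype('float32')
train_labels = np.random.randint(0, 2, size=(1000,))
x_val = np.random.rand(200, num_features).astype('float32')
val_labels = np.random.randint(0, 2, size=(200,))

# A trivial classifier, just enough to run the generator-based training loop.
model = keras.Sequential([
    keras.layers.Dense(64, activation='relu', input_shape=(num_features,)),
    keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])

callbacks = [keras.callbacks.EarlyStopping(monitor='val_loss', patience=2)]

Note that on recent Keras/TensorFlow releases fit_generator is deprecated; model.fit accepts Python generators directly, so the final call can be written as model.fit(training_generator, ...) with the same arguments.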