I'm trying to build a text classifier using a CNN with word embeddings in Keras and TensorFlow.

Here is a snippet of the code that shows the model construction:

```
from tensorflow.keras.layers import (Input, Conv1D, MaxPooling1D, Dropout,
                                     Concatenate, Flatten, Dense)
from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard, EarlyStopping

# Model (Convolutional NN)
inp = Input(shape=(maxlen,), dtype='int32')
embedding = embedding_layer(inp)

# One conv/pool/dropout stack per kernel size
stacks = []
for kernel_size in [4, 8, 16, 32, 50]:
    conv = Conv1D(64, kernel_size, padding='same', activation='relu', strides=1)(embedding)
    pool = MaxPooling1D(pool_size=3)(conv)
    drop = Dropout(0.7)(pool)
    stacks.append(drop)

merged = Concatenate()(stacks)
flatten = Flatten()(merged)
drop = Dropout(0.7)(flatten)
outp = Dense(len(int_category), activation='softmax')(drop)

TextCNN = Model(inputs=inp, outputs=outp)
TextCNN.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
TextCNN.summary()

# Train
textcnn_history = TextCNN.fit(
    x_train, y_train,
    batch_size=1024,
    epochs=200,
    validation_data=(x_val, y_val),
    callbacks=[
        ModelCheckpoint("../checkpoints/ck{epoch:02d}-{acc:.2f}.hdf5",
                        monitor="acc", save_best_only=True),
        TensorBoard(),
        EarlyStopping(monitor='loss', min_delta=0.0001, patience=1)
    ]
)
```
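For context, the snippet relies on a few names defined earlier in my script (`maxlen`, `embedding_layer`, `int_category`). Roughly like this — the exact values here are illustrative stand-ins, not my real ones:

```python
import numpy as np
from tensorflow.keras.layers import Embedding

# Illustrative stand-ins for the names used above -- not the real values
maxlen = 100            # padded sequence length fed to Input(shape=(maxlen,))
vocab_size = 20000      # tokenizer vocabulary size
embedding_dim = 300     # dimensionality of the word vectors
int_category = {'sport': 0, 'politics': 1, 'tech': 2}   # label -> integer id

embedding_layer = Embedding(vocab_size, embedding_dim)

# Sanity check: a batch of 2 padded sequences maps to (2, maxlen, embedding_dim)
dummy = np.zeros((2, maxlen), dtype='int32')
print(embedding_layer(dummy).shape)
```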

After some modifications I was able to reach the following results:

[Image: train and validation accuracy curves]

I already used dropout regularization after every convolutional block and before the final dense layer.

Then I decided to increase the dropout rate from 0.5 to 0.7, hoping to lift the validation accuracy curve a little, but after a few runs this is what happened:

[Image: train and validation accuracy curves after increasing the dropout]

Why is the validation accuracy curve above the training one? Shouldn't it always be below it?
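My current understanding is that dropout only fires during training, while at validation time it is a no-op, so the two phases see different networks. A quick numpy sketch of the two modes of (inverted) dropout, which I suspect is relevant here:

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 0.7                 # the dropout rate I bumped up to
x = np.ones(10000)         # dummy activations

# Training mode: inverted dropout zeroes `rate` of the units
# and rescales the survivors by 1 / (1 - rate)
mask = rng.random(x.shape) >= rate
train_out = x * mask / (1.0 - rate)

# Validation/inference mode: dropout is the identity
val_out = x

print((train_out == 0).mean())   # roughly 0.7 of units zeroed
print(train_out.mean())          # ~1.0 in expectation, but noisy per batch
print(val_out.mean())            # exactly 1.0
```

So the training metric is computed with 70% of the activations knocked out, while the validation metric uses the full network.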

Thank you.
