Friday 20 September 2019

neural network 6

good and bad reviews are transformed into vectors, and the angle between a good vector and a bad vector is far apart.
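A toy sketch of that idea (the vectors below are made up for illustration, not the learned embedding): the angle between two review vectors can be measured with cosine similarity, and opposite sentiments point in nearly opposite directions.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between a and b (1 = same direction, -1 = opposite)."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# made-up 3D vectors standing in for a good review and a bad review
good_review = np.array([0.9, 0.8, 0.1])
bad_review = np.array([-0.8, -0.7, 0.2])

sim = cosine_similarity(good_review, bad_review)
angle = np.degrees(np.arccos(sim))
print(sim, angle)  # similarity close to -1, angle well past 90 degrees
```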



all layers

#pycharm
#input layer - embedding maps each of the 10000 dictionary words to a 16-coefficient vector (ax + by + cz...)
#output layer - a single value between 0 and 1 (review sentiment)

from tensorflow import keras

# train_data, train_labels, test_data, test_labels are the padded IMDB
# reviews prepared in neural network 5
model = keras.Sequential()
model.add(keras.layers.Embedding(10000, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation='relu'))
model.add(keras.layers.Dense(1, activation='sigmoid'))
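GlobalAveragePooling1D simply averages the word vectors across the review, turning a (sequence_length, 16) matrix into one 16-dimensional vector. A numpy sketch of what the layer computes (using 3 dimensions instead of 16 to keep it readable):

```python
import numpy as np

# pretend a review has 4 words, each embedded as a 3-dim vector
embedded = np.array([
    [0.5,  1.0,  0.25],
    [0.25, 0.5,  0.25],
    [0.0,  0.25, 0.25],
    [0.25, 0.25, 0.25],
])

pooled = embedded.mean(axis=0)  # average over the word axis
print(pooled)  # -> [0.25 0.5  0.25]
```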

#model.summary()
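model.summary() would list the parameter count per layer; given the layer sizes above, the counts can also be worked out by hand as a sanity check:

```python
# Embedding: one 16-dim vector per dictionary word
embedding_params = 10000 * 16   # 160000
# GlobalAveragePooling1D has no trainable weights
pooling_params = 0
# Dense(16) on a 16-dim input: weights + biases
dense1_params = 16 * 16 + 16    # 272
# Dense(1) output layer
dense2_params = 16 * 1 + 1      # 17

total = embedding_params + pooling_params + dense1_params + dense2_params
print(total)  # -> 160289
```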

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
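binary_crossentropy fits the sigmoid output: for label y and predicted probability p the loss is -(y*log(p) + (1-y)*log(1-p)), which penalizes confident wrong answers heavily. A minimal numpy sketch of the formula:

```python
import numpy as np

def binary_crossentropy(y_true, y_pred):
    return -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

print(binary_crossentropy(1, 0.99))  # confident and right -> tiny loss
print(binary_crossentropy(1, 0.01))  # confident and wrong -> large loss
```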

#training set has 25000 reviews; hold out 10000 for validation, train on the remaining 15000
x_val = train_data[:10000]
x_train = train_data[10000:]

y_val = train_labels[:10000]
y_train = train_labels[10000:]

fitmodel = model.fit(x_train, y_train, epochs=20, batch_size=512, validation_data=(x_val, y_val), verbose=1)

result = model.evaluate(test_data, test_labels)

print(result)

prediction = model.predict(test_data)

for i in range(10):
    print('predicted: ', prediction[i], ' actual: ', test_labels[i])

--------------------------------------
#logs
#model accuracy differs between training (~0.93) and test (~0.87) data - mild overfitting
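Pulling the two numbers from the final log lines below quantifies the gap:

```python
train_acc = 0.9264  # final-epoch training accuracy (from the fit log)
test_acc = 0.8740   # accuracy on the held-out test set (from evaluate)

gap = train_acc - test_acc
print(round(gap, 4))  # -> 0.0524
```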

Epoch 20/20

  512/15000 [>.............................] - ETA: 0s - loss: 0.2149 - acc: 0.9180
 2048/15000 [===>..........................] - ETA: 0s - loss: 0.2155 - acc: 0.9243
 3584/15000 [======>.......................] - ETA: 0s - loss: 0.2150 - acc: 0.9241
 5120/15000 [=========>....................] - ETA: 0s - loss: 0.2174 - acc: 0.9209
 6656/15000 [============>.................] - ETA: 0s - loss: 0.2149 - acc: 0.9229
 8192/15000 [===============>..............] - ETA: 0s - loss: 0.2134 - acc: 0.9249
 9216/15000 [=================>............] - ETA: 0s - loss: 0.2102 - acc: 0.9271
10240/15000 [===================>..........] - ETA: 0s - loss: 0.2115 - acc: 0.9265
11776/15000 [======================>.......] - ETA: 0s - loss: 0.2123 - acc: 0.9260
13312/15000 [=========================>....] - ETA: 0s - loss: 0.2136 - acc: 0.9259
14848/15000 [============================>.] - ETA: 0s - loss: 0.2126 - acc: 0.9261
15000/15000 [==============================] - 1s 50us/sample - loss: 0.2122 - acc: 0.9264 - val_loss: 0.2958 - val_acc: 0.8809

   32/25000 [..............................] - ETA: 0s - loss: 0.2635 - acc: 0.9375
 3808/25000 [===>..........................] - ETA: 0s - loss: 0.3024 - acc: 0.8789
 7744/25000 [========>.....................] - ETA: 0s - loss: 0.3043 - acc: 0.8759
11488/25000 [============>.................] - ETA: 0s - loss: 0.3122 - acc: 0.8718
15264/25000 [=================>............] - ETA: 0s - loss: 0.3109 - acc: 0.8726
19072/25000 [=====================>........] - ETA: 0s - loss: 0.3068 - acc: 0.8752
23040/25000 [==========================>...] - ETA: 0s - loss: 0.3087 - acc: 0.8738
25000/25000 [==============================] - 0s 13us/sample - loss: 0.3084 - acc: 0.8740
[0.308388222618103, 0.87404]
predicted:  [0.255415]  actual:  0
predicted:  [0.99322104]  actual:  1
predicted:  [0.68470025]  actual:  1
predicted:  [0.4331221]  actual:  0
predicted:  [0.96165186]  actual:  1
predicted:  [0.64715904]  actual:  1
predicted:  [0.9213904]  actual:  1
predicted:  [0.20285302]  actual:  0
predicted:  [0.93451846]  actual:  0
predicted:  [0.9832784]  actual:  1
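The sigmoid outputs above become class labels by thresholding at 0.5. Applying that to the ten logged predictions shows 9 of 10 match; the 0.9345 prediction against an actual 0 is the one miss:

```python
import numpy as np

# the ten predicted probabilities and actual labels from the log above
predicted = np.array([0.255415, 0.99322104, 0.68470025, 0.4331221, 0.96165186,
                      0.64715904, 0.9213904, 0.20285302, 0.93451846, 0.9832784])
actual = np.array([0, 1, 1, 0, 1, 1, 1, 0, 0, 1])

labels = (predicted >= 0.5).astype(int)  # threshold at 0.5
accuracy = (labels == actual).mean()
print(labels, accuracy)  # -> [0 1 1 0 1 1 1 0 1 1] 0.9
```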

---------------------------------------
reference:
http://chuanshuoge2.blogspot.com/2019/09/neural-network-5.html
https://www.youtube.com/watch?v=qpb_39IjZA0&list=PLzMcBGfZo4-lak7tiFDec5_ZMItiIIfmj&index=6
