Why does the result of binary classification vary so wildly between runs?

Keras image binary classification: lr = 0.000000001 (1e-9), optimizer RMSprop, batch size 32, and y_pred > 0.1 is classified as positive.
Results of the first run:
Epoch 1/5
81/81 [==============================] - 1s 8ms/step - loss: 14.5646 - acc: 0.0864
Epoch 2/5
81/81 [==============================] - 0s 6ms/step - loss: 14.5646 - acc: 0.0864
Epoch 3/5
81/81 [==============================] - 0s 6ms/step - loss: 14.5646 - acc: 0.0864
Epoch 4/5
81/81 [==============================] - 0s 5ms/step - loss: 14.5646 - acc: 0.0864
Epoch 5/5
81/81 [==============================] - 0s 6ms/step - loss: 14.5646 - acc: 0.0864
Confusion matrix:
[[ 0 36]
[ 0 3]]

Results of the second run:
Epoch 1/5
81/81 [==============================] - 1s 7ms/step - loss: 3.4956 - acc: 0.4074
Epoch 2/5
81/81 [==============================] - 1s 7ms/step - loss: 1.3929 - acc: 0.9136
Epoch 3/5
81/81 [==============================] - 1s 6ms/step - loss: 1.3929 - acc: 0.9136
Epoch 4/5
81/81 [==============================] - 0s 5ms/step - loss: 1.3929 - acc: 0.9136
Epoch 5/5
81/81 [==============================] - 0s 5ms/step - loss: 1.3929 - acc: 0.9136
Confusion matrix:
[[36 0]
[ 3 0]]

Why do the two runs differ so much?
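A minimal sketch of what lr = 1e-9 does (stand-in data and a hypothetical one-layer model; only the learning rate matters here): the RMSprop updates are so small that the weights barely move during fit, so each run's predictions, and therefore the loss, accuracy, and confusion matrix, are decided almost entirely by the random initialization. One initialization pushes every sigmoid output above the 0.1 threshold (everything predicted positive), another pushes them below it (everything negative):
```
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import RMSprop

x = np.random.rand(128, 100).astype('float32')  # stand-in inputs
y = np.random.randint(0, 2, size=(128, 1))      # stand-in 0/1 labels

model = Sequential([Dense(1, activation='sigmoid', input_shape=(100,))])
model.compile(loss='binary_crossentropy', optimizer=RMSprop(lr=1e-9),
              metrics=['accuracy'])

w_before = model.get_weights()[0].copy()
model.fit(x, y, batch_size=32, epochs=5, verbose=0)
w_after = model.get_weights()[0]
print(np.abs(w_after - w_before).max())  # on the order of 1e-8: effectively frozen
```
Raising the learning rate into RMSprop's usual range (around 1e-3) should let both runs actually train and converge to similar results.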

Other related questions
Keras model training: train_loss and train_acc keep changing, but val_loss and val_acc stay constant. What's wrong?
Epoch 1/15
3112/3112 [==============================] - 73s 237ms/step - loss: 8.1257 - acc: 0.4900 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 2/15
3112/3112 [==============================] - 71s 231ms/step - loss: 8.1730 - acc: 0.4929 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 3/15
3112/3112 [==============================] - 72s 232ms/step - loss: 8.1730 - acc: 0.4929 - val_loss: 8.1763 - val_acc: 0.4427
Epoch 4/15
3112/3112 [==============================] - 71s 229ms/step - loss: 7.0495 - acc: 0.5617 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 5/15
3112/3112 [==============================] - 71s 230ms/step - loss: 5.5504 - acc: 0.6549 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 6/15
3112/3112 [==============================] - 71s 230ms/step - loss: 4.9359 - acc: 0.6931 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 7/15
3112/3112 [==============================] - 71s 230ms/step - loss: 4.8969 - acc: 0.6957 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 8/15
3112/3112 [==============================] - 72s 234ms/step - loss: 4.9446 - acc: 0.6925 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 9/15
3112/3112 [==============================] - 71s 231ms/step - loss: 4.5114 - acc: 0.7201 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 10/15
3112/3112 [==============================] - 73s 237ms/step - loss: 4.7944 - acc: 0.7021 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 11/15
3112/3112 [==============================] - 74s 240ms/step - loss: 4.6789 - acc: 0.7095 - val_loss: 8.1763 - val_acc: 0.4927
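A diagnostic sketch (assumes the compiled `model` plus validation arrays `x_val`, `y_val`; the names are placeholders): a val_loss frozen at exactly 8.1763 while train_loss moves usually means the validation predictions never change, e.g. a saturated sigmoid. With Keras's clipping, a hard-wrong prediction costs about 16.12, and 0.5073 wrong × 16.12 ≈ 8.18, which matches the val_acc of 0.4927:
```
import numpy as np

p = model.predict(x_val)
print(p.min(), p.max(), p.mean())      # all ~0 or ~1 -> saturated output
print(np.unique(np.round(p, 3)).size)  # 1-2 distinct values -> constant predictions
```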
Loss keeps oscillating during training
While training the network the loss keeps oscillating; results are below. I have checked the data and it is fine. What could be causing this? Learning rate lr = 1e-6, batch size = 4, and the network is a small VGG, as follows:
```
from keras import layers, models
from keras.layers import (Activation, BatchNormalization, Conv2D, Dense,
                          Dropout, Flatten, MaxPooling2D)

# hypothetical wrapper: the original snippet began mid-function
def build_model(input_shape, classes):
    model = models.Sequential()
    inputShape = input_shape
    chanDim = -1  # channels last
    # CONV => RELU => POOL
    model.add(layers.Conv2D(32, (3, 3), padding="same", input_shape=inputShape))
    model.add(Activation("relu"))
    model.add(BatchNormalization(axis=chanDim))
    model.add(MaxPooling2D(pool_size=(3, 3)))
    model.add(Dropout(0.25))
    # (CONV => RELU) * 2 => POOL
    model.add(Conv2D(64, (3, 3), padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(axis=chanDim))
    model.add(Conv2D(64, (3, 3), padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(axis=chanDim))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))
    # (CONV => RELU) * 2 => POOL
    model.add(Conv2D(128, (3, 3), padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(axis=chanDim))
    model.add(Conv2D(128, (3, 3), padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(axis=chanDim))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))
    # first (and only) set of FC => RELU layers
    model.add(Flatten())
    model.add(Dense(1024))  # originally 1024
    model.add(Activation("relu"))
    model.add(BatchNormalization())
    model.add(Dropout(0.5))
    model.add(layers.Dense(classes, activation='sigmoid'))
    return model
```
```
Epoch 1/10 :
1/77 .........train_loss:1.8660999536514282,train_acc:0.25,val_loss:0.0,val_acc:1.0
2/77 .........train_loss:0.3743000030517578,train_acc:0.75,val_loss:0.0006000000284984708,val_acc:1.0
3/77 .........train_loss:0.9629999995231628,train_acc:0.25,val_loss:0.0003000000142492354,val_acc:1.0
4/77 .........train_loss:0.9380999803543091,train_acc:0.75,val_loss:0.02290000021457672,val_acc:1.0
5/77 .........train_loss:1.3630000352859497,train_acc:0.25,val_loss:0.10599999874830246,val_acc:1.0
6/77 .........train_loss:1.1638000011444092,train_acc:0.25,val_loss:0.061000000685453415,val_acc:1.0
7/77 .........train_loss:0.6043999791145325,train_acc:0.5,val_loss:0.1623000055551529,val_acc:1.0
8/77 .........train_loss:0.271699994802475,train_acc:1.0,val_loss:0.15219999849796295,val_acc:1.0
9/77 .........train_loss:0.6158000230789185,train_acc:0.75,val_loss:0.17030000686645508,val_acc:1.0
10/77 .........train_loss:1.5370999574661255,train_acc:0.75,val_loss:0.16339999437332153,val_acc:1.0
11/77 .........train_loss:0.9704999923706055,train_acc:0.75,val_loss:0.15410000085830688,val_acc:1.0
12/77 .........train_loss:0.7396000027656555,train_acc:0.5,val_loss:0.1680999994277954,val_acc:1.0
13/77 .........train_loss:1.1079000234603882,train_acc:0.5,val_loss:0.18019999563694,val_acc:1.0
14/77 .........train_loss:0.6898999810218811,train_acc:0.5,val_loss:0.1656000018119812,val_acc:1.0
15/77 .........train_loss:0.3553999960422516,train_acc:0.75,val_loss:0.13729999959468842,val_acc:1.0
16/77 .........train_loss:0.9595999717712402,train_acc:0.75,val_loss:0.13079999387264252,val_acc:1.0
17/77 .........train_loss:0.6948999762535095,train_acc:0.75,val_loss:0.131400004029274,val_acc:1.0
18/77 .........train_loss:0.2732999920845032,train_acc:1.0,val_loss:0.13779999315738678,val_acc:1.0
19/77 .........train_loss:0.5321999788284302,train_acc:0.75,val_loss:0.12309999763965607,val_acc:1.0
20/77 .........train_loss:1.1571999788284302,train_acc:0.5,val_loss:0.11800000071525574,val_acc:1.0
21/77 .........train_loss:1.3125,train_acc:0.5,val_loss:0.10040000081062317,val_acc:1.0
22/77 .........train_loss:0.7470999956130981,train_acc:0.5,val_loss:0.09730000048875809,val_acc:1.0
23/77 .........train_loss:1.1202000379562378,train_acc:0.25,val_loss:0.09790000319480896,val_acc:1.0
24/77 .........train_loss:0.98089998960495,train_acc:0.75,val_loss:0.09480000287294388,val_acc:1.0
25/77 .........train_loss:0.8733999729156494,train_acc:0.75,val_loss:0.093299999833107,val_acc:1.0
26/77 .........train_loss:1.1812000274658203,train_acc:0.25,val_loss:0.08869999647140503,val_acc:1.0
27/77 .........train_loss:1.0749000310897827,train_acc:0.5,val_loss:0.08020000159740448,val_acc:1.0
28/77 .........train_loss:0.8490999937057495,train_acc:0.5,val_loss:0.0851999968290329,val_acc:1.0
29/77 .........train_loss:1.2085000276565552,train_acc:0.25,val_loss:0.08789999783039093,val_acc:1.0
30/77 .........train_loss:0.8118000030517578,train_acc:0.5,val_loss:0.0860000029206276,val_acc:1.0
31/77 .........train_loss:0.5519000291824341,train_acc:0.5,val_loss:0.07940000295639038,val_acc:1.0
32/77 .........train_loss:1.3104000091552734,train_acc:0.25,val_loss:0.0722000002861023,val_acc:1.0
33/77 .........train_loss:1.1902999877929688,train_acc:0.25,val_loss:0.06689999997615814,val_acc:1.0
34/77 .........train_loss:0.1615000069141388,train_acc:1.0,val_loss:0.06849999725818634,val_acc:1.0
35/77 .........train_loss:0.382999986410141,train_acc:0.75,val_loss:0.06880000233650208,val_acc:1.0
36/77 .........train_loss:0.555899977684021,train_acc:0.75,val_loss:0.07169999927282333,val_acc:1.0
37/77 .........train_loss:0.26019999384880066,train_acc:0.75,val_loss:0.07150000333786011,val_acc:1.0
38/77 .........train_loss:0.8740000128746033,train_acc:0.5,val_loss:0.06880000233650208,val_acc:1.0
39/77 .........train_loss:0.5134000182151794,train_acc:0.75,val_loss:0.06459999829530716,val_acc:1.0
40/77 .........train_loss:0.9340999722480774,train_acc:0.75,val_loss:0.06549999862909317,val_acc:1.0
41/77 .........train_loss:0.684499979019165,train_acc:0.5,val_loss:0.061900001019239426,val_acc:1.0
42/77 .........train_loss:1.3522000312805176,train_acc:0.25,val_loss:0.05620000138878822,val_acc:1.0
43/77 .........train_loss:0.9114000201225281,train_acc:0.5,val_loss:0.052299998700618744,val_acc:1.0
44/77 .........train_loss:0.8479999899864197,train_acc:0.75,val_loss:0.05079999938607216,val_acc:1.0
45/77 .........train_loss:0.25519999861717224,train_acc:1.0,val_loss:0.054499998688697815,val_acc:1.0
46/77 .........train_loss:1.2669999599456787,train_acc:0.25,val_loss:0.053599998354911804,val_acc:1.0
47/77 .........train_loss:0.8998000025749207,train_acc:0.5,val_loss:0.051899999380111694,val_acc:1.0
48/77 .........train_loss:1.0500999689102173,train_acc:0.75,val_loss:0.052799999713897705,val_acc:1.0
49/77 .........train_loss:0.5569999814033508,train_acc:0.75,val_loss:0.05570000037550926,val_acc:1.0
50/77 .........train_loss:1.8724000453948975,train_acc:0.5,val_loss:0.05649999901652336,val_acc:1.0
51/77 .........train_loss:0.43650001287460327,train_acc:1.0,val_loss:0.058800000697374344,val_acc:1.0
52/77 .........train_loss:1.0694999694824219,train_acc:0.5,val_loss:0.05950000137090683,val_acc:1.0
53/77 .........train_loss:1.9672000408172607,train_acc:0.5,val_loss:0.05640000104904175,val_acc:1.0
54/77 .........train_loss:0.24560000002384186,train_acc:1.0,val_loss:0.05469999834895134,val_acc:1.0
55/77 .........train_loss:0.7495999932289124,train_acc:0.75,val_loss:0.054099999368190765,val_acc:1.0
56/77 .........train_loss:0.58160001039505,train_acc:0.75,val_loss:0.052400000393390656,val_acc:1.0
57/77 .........train_loss:0.4336000084877014,train_acc:0.75,val_loss:0.05169999971985817,val_acc:1.0
58/77 .........train_loss:0.42489999532699585,train_acc:0.75,val_loss:0.050200000405311584,val_acc:1.0
59/77 .........train_loss:0.916700005531311,train_acc:0.25,val_loss:0.0502999983727932,val_acc:1.0
60/77 .........train_loss:0.8044000267982483,train_acc:0.5,val_loss:0.050999999046325684,val_acc:1.0
61/77 .........train_loss:1.0534000396728516,train_acc:0.5,val_loss:0.050999999046325684,val_acc:1.0
62/77 .........train_loss:1.2537000179290771,train_acc:0.75,val_loss:0.04969999939203262,val_acc:1.0
63/77 .........train_loss:1.7020000219345093,train_acc:0.25,val_loss:0.048900000751018524,val_acc:1.0
64/77 .........train_loss:1.801300048828125,train_acc:0.0,val_loss:0.04839999973773956,val_acc:1.0
65/77 .........train_loss:1.0925999879837036,train_acc:0.5,val_loss:0.04899999871850014,val_acc:1.0
66/77 .........train_loss:0.1964000016450882,train_acc:1.0,val_loss:0.04899999871850014,val_acc:1.0
67/77 .........train_loss:0.42289999127388,train_acc:0.75,val_loss:0.0494999997317791,val_acc:1.0
68/77 .........train_loss:1.36080002784729,train_acc:0.5,val_loss:0.048900000751018524,val_acc:1.0
69/77 .........train_loss:0.9207000136375427,train_acc:0.5,val_loss:0.049400001764297485,val_acc:1.0
70/77 .........train_loss:0.16259999573230743,train_acc:1.0,val_loss:0.050999999046325684,val_acc:1.0
71/77 .........train_loss:0.8614000082015991,train_acc:0.5,val_loss:0.0471000000834465,val_acc:1.0
72/77 .........train_loss:0.7986999750137329,train_acc:0.25,val_loss:0.04610000178217888,val_acc:1.0
73/77 .........train_loss:1.6700999736785889,train_acc:0.5,val_loss:0.046300001442432404,val_acc:1.0
74/77 .........train_loss:1.0089999437332153,train_acc:0.5,val_loss:0.047600001096725464,val_acc:1.0
75/77 .........train_loss:0.8454999923706055,train_acc:0.5,val_loss:0.045499999076128006,val_acc:1.0
76/77 .........train_loss:0.8860999941825867,train_acc:0.75,val_loss:0.0421999990940094,val_acc:1.0
77/77 .........train_loss:0.3116999864578247,train_acc:0.75,val_loss:0.04360000044107437,val_acc:1.0
1/10 .........epoch_loss0.8798521880979663
Epoch 2/10 :
1/77 .........train_loss:0.4812000095844269,train_acc:1.0,val_loss:0.043299999088048935,val_acc:1.0
2/77 .........train_loss:0.26570001244544983,train_acc:1.0,val_loss:0.04280000180006027,val_acc:1.0
3/77 .........train_loss:1.6969000101089478,train_acc:0.5,val_loss:0.0430000014603138,val_acc:1.0
4/77 .........train_loss:0.3089999854564667,train_acc:0.75,val_loss:0.04230000078678131,val_acc:1.0
5/77 .........train_loss:0.7508000135421753,train_acc:0.75,val_loss:0.043299999088048935,val_acc:1.0
6/77 .........train_loss:0.41130000352859497,train_acc:0.75,val_loss:0.04439999908208847,val_acc:1.0
7/77 .........train_loss:1.2664999961853027,train_acc:0.5,val_loss:0.04610000178217888,val_acc:1.0
8/77 .........train_loss:0.20499999821186066,train_acc:1.0,val_loss:0.04859999939799309,val_acc:1.0
9/77 .........train_loss:0.0925000011920929,train_acc:1.0,val_loss:0.04769999906420708,val_acc:1.0
10/77 .........train_loss:1.351699948310852,train_acc:0.5,val_loss:0.04769999906420708,val_acc:1.0
11/77 .........train_loss:0.724399983882904,train_acc:0.5,val_loss:0.04729999974370003,val_acc:1.0
12/77 .........train_loss:1.0586999654769897,train_acc:0.5,val_loss:0.04910000041127205,val_acc:1.0
13/77 .........train_loss:1.3040000200271606,train_acc:0.5,val_loss:0.04919999837875366,val_acc:1.0
14/77 .........train_loss:1.4428999423980713,train_acc:0.5,val_loss:0.05050000175833702,val_acc:1.0
15/77 .........train_loss:0.44850000739097595,train_acc:0.75,val_loss:0.050999999046325684,val_acc:1.0
16/77 .........train_loss:0.4724999964237213,train_acc:0.75,val_loss:0.051500000059604645,val_acc:1.0
17/77 .........train_loss:0.388700008392334,train_acc:0.75,val_loss:0.05079999938607216,val_acc:1.0
18/77 .........train_loss:0.3449000120162964,train_acc:1.0,val_loss:0.05169999971985817,val_acc:1.0
19/77 .........train_loss:0.835099995136261,train_acc:0.75,val_loss:0.05220000073313713,val_acc:1.0
20/77 .........train_loss:1.378499984741211,train_acc:0.5,val_loss:0.052299998700618744,val_acc:1.0
21/77 .........train_loss:1.5405999422073364,train_acc:0.0,val_loss:0.05040000006556511,val_acc:1.0
22/77 .........train_loss:0.7993000149726868,train_acc:0.75,val_loss:0.0502999983727932,val_acc:1.0
23/77 .........train_loss:0.8944000005722046,train_acc:0.5,val_loss:0.04910000041127205,val_acc:1.0
24/77 .........train_loss:1.215399980545044,train_acc:0.75,val_loss:0.04560000076889992,val_acc:1.0
25/77 .........train_loss:1.4556000232696533,train_acc:0.25,val_loss:0.044599998742341995,val_acc:1.0
26/77 .........train_loss:0.7631000280380249,train_acc:0.5,val_loss:0.04360000044107437,val_acc:1.0
27/77 .........train_loss:0.853600025177002,train_acc:0.25,val_loss:0.042899999767541885,val_acc:1.0
28/77 .........train_loss:1.0032999515533447,train_acc:0.75,val_loss:0.042500000447034836,val_acc:1.0
29/77 .........train_loss:0.644599974155426,train_acc:0.75,val_loss:0.04270000010728836,val_acc:1.0
30/77 .........train_loss:2.1033999919891357,train_acc:0.5,val_loss:0.04270000010728836,val_acc:1.0
31/77 .........train_loss:0.35929998755455017,train_acc:0.75,val_loss:0.0430000014603138,val_acc:1.0
32/77 .........train_loss:1.4740999937057495,train_acc:0.5,val_loss:0.04230000078678131,val_acc:1.0
33/77 .........train_loss:1.999400019645691,train_acc:0.5,val_loss:0.04259999841451645,val_acc:1.0
34/77 .........train_loss:0.08709999918937683,train_acc:1.0,val_loss:0.043699998408555984,val_acc:1.0
35/77 .........train_loss:0.8490999937057495,train_acc:0.75,val_loss:0.04540000110864639,val_acc:1.0
36/77 .........train_loss:0.22130000591278076,train_acc:1.0,val_loss:0.04729999974370003,val_acc:1.0
37/77 .........train_loss:0.17170000076293945,train_acc:1.0,val_loss:0.04960000142455101,val_acc:1.0
38/77 .........train_loss:0.8406000137329102,train_acc:0.5,val_loss:0.049400001764297485,val_acc:1.0
39/77 .........train_loss:1.1347999572753906,train_acc:0.25,val_loss:0.05009999871253967,val_acc:1.0
40/77 .........train_loss:0.11699999868869781,train_acc:1.0,val_loss:0.052000001072883606,val_acc:1.0
41/77 .........train_loss:0.9308000206947327,train_acc:0.75,val_loss:0.050700001418590546,val_acc:1.0
42/77 .........train_loss:0.9972000122070312,train_acc:0.75,val_loss:0.04919999837875366,val_acc:1.0
43/77 .........train_loss:0.5979999899864197,train_acc:0.5,val_loss:0.04969999939203262,val_acc:1.0
44/77 .........train_loss:1.0957000255584717,train_acc:0.5,val_loss:0.05079999938607216,val_acc:1.0
45/77 .........train_loss:0.46149998903274536,train_acc:0.75,val_loss:0.050999999046325684,val_acc:1.0
46/77 .........train_loss:1.5649000406265259,train_acc:0.25,val_loss:0.05090000107884407,val_acc:1.0
47/77 .........train_loss:1.2698999643325806,train_acc:0.5,val_loss:0.05009999871253967,val_acc:1.0
48/77 .........train_loss:1.8796000480651855,train_acc:0.25,val_loss:0.0494999997317791,val_acc:1.0
49/77 .........train_loss:0.06759999692440033,train_acc:1.0,val_loss:0.05130000039935112,val_acc:1.0
50/77 .........train_loss:0.8188999891281128,train_acc:0.5,val_loss:0.052799999713897705,val_acc:1.0
51/77 .........train_loss:0.2257000058889389,train_acc:1.0,val_loss:0.05310000106692314,val_acc:1.0
52/77 .........train_loss:1.1119999885559082,train_acc:0.5,val_loss:0.05310000106692314,val_acc:1.0
53/77 .........train_loss:0.9301000237464905,train_acc:0.5,val_loss:0.051600001752376556,val_acc:1.0
54/77 .........train_loss:0.4131999909877777,train_acc:0.75,val_loss:0.050700001418590546,val_acc:1.0
55/77 .........train_loss:0.7595999836921692,train_acc:0.5,val_loss:0.050700001418590546,val_acc:1.0
56/77 .........train_loss:0.5605999827384949,train_acc:0.75,val_loss:0.050700001418590546,val_acc:1.0
57/77 .........train_loss:0.677299976348877,train_acc:0.75,val_loss:0.04410000145435333,val_acc:1.0
58/77 .........train_loss:0.28029999136924744,train_acc:1.0,val_loss:0.04360000044107437,val_acc:1.0
59/77 .........train_loss:0.911300003528595,train_acc:0.75,val_loss:0.042399998754262924,val_acc:1.0
60/77 .........train_loss:0.1624000072479248,train_acc:1.0,val_loss:0.04360000044107437,val_acc:1.0
61/77 .........train_loss:1.5983999967575073,train_acc:0.75,val_loss:0.04179999977350235,val_acc:1.0
62/77 .........train_loss:2.1105000972747803,train_acc:0.25,val_loss:0.0430000014603138,val_acc:1.0
63/77 .........train_loss:0.817300021648407,train_acc:0.5,val_loss:0.04280000180006027,val_acc:1.0
64/77 .........train_loss:0.9713000059127808,train_acc:0.5,val_loss:0.04129999876022339,val_acc:1.0
65/77 .........train_loss:0.883400022983551,train_acc:0.5,val_loss:0.04190000146627426,val_acc:1.0
66/77 .........train_loss:0.22089999914169312,train_acc:1.0,val_loss:0.042399998754262924,val_acc:1.0
67/77 .........train_loss:0.4081000089645386,train_acc:0.75,val_loss:0.04270000010728836,val_acc:1.0
68/77 .........train_loss:0.48579999804496765,train_acc:0.75,val_loss:0.04390000179409981,val_acc:1.0
69/77 .........train_loss:0.39079999923706055,train_acc:0.75,val_loss:0.04439999908208847,val_acc:1.0
70/77 .........train_loss:0.3855000138282776,train_acc:0.75,val_loss:0.04399999976158142,val_acc:1.0
71/77 .........train_loss:1.3310999870300293,train_acc:0.75,val_loss:0.04179999977350235,val_acc:1.0
72/77 .........train_loss:0.19629999995231628,train_acc:1.0,val_loss:0.04129999876022339,val_acc:1.0
73/77 .........train_loss:1.7139999866485596,train_acc:0.25,val_loss:0.04100000113248825,val_acc:1.0
74/77 .........train_loss:0.5117999911308289,train_acc:0.75,val_loss:0.042399998754262924,val_acc:1.0
75/77 .........train_loss:1.1568000316619873,train_acc:0.5,val_loss:0.040800001472234726,val_acc:1.0
76/77 .........train_loss:0.777899980545044,train_acc:0.75,val_loss:0.039900001138448715,val_acc:1.0
77/77 .........train_loss:0.259799987077713,train_acc:1.0,val_loss:0.04039999842643738,val_acc:1.0
2/10 .........epoch_loss0.8271800862117246
```
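A plotting sketch (assumes the per-batch train_loss values above have been collected into a list `batch_losses`; the name is a placeholder): with batch_size = 4, per-batch loss and accuracy are inherently noisy (acc can only be 0, 0.25, 0.5, 0.75 or 1), so the trend is easier to judge from a moving average than from the raw oscillation:
```
import numpy as np
import matplotlib.pyplot as plt

def moving_average(x, k=10):
    # simple k-point running mean
    return np.convolve(x, np.ones(k) / k, mode='valid')

plt.plot(batch_losses, alpha=0.3, label='per-batch loss')
plt.plot(moving_average(batch_losses), label='10-batch moving average')
plt.legend()
plt.show()
```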
val_loss turns into a flat line during training
Why would val loss suddenly drop and then flatten into a straight line? ![image](https://img-ask.csdn.net/upload/201903/18/1552875377_661858.png) Which parameters could cause this? I'm using Keras; the network part of the code is below:
```
model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(3600,),
                kernel_regularizer=regularizers.l1(0.01)
                # activity_regularizer=regularizers.l1(0.01)
                # bias_regularizer=regularizers.l1(0.1)
                ))
model.add(Dropout(0.1))
model.add(Dense(256, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(),
              metrics=['accuracy'])
history = LossHistory()
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=2,
          validation_data=(x_test, y_test),
          callbacks=[history])
```
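A hedged diagnostic for the flat line: with l1(0.01) on a 3600-to-512 Dense layer, the regularization penalty can dwarf the crossentropy, and once the penalty has shrunk the weights the reported loss flattens. Logging the bare crossentropy as an extra metric (Keras metrics never include regularization terms) separates the two effects:
```
model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(),
              # the extra metric reports the data loss without the l1 penalty
              metrics=['accuracy', 'categorical_crossentropy'])
```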
Training on the fashion_mnist dataset with TensorFlow in Spyder: loss is NaN and accuracy never changes
1. Built a neural network with three fully connected layers.
2. Code:
```
import matplotlib as mpl
import matplotlib.pyplot as plt
#%matplotlib inline
import numpy as np
import sklearn
import pandas as pd
import os
import sys
import time
import tensorflow as tf
from tensorflow import keras

print(tf.__version__)
print(sys.version_info)
for module in mpl, np, sklearn, tf, keras:
    print(module.__name__, module.__version__)

fashion_mnist = keras.datasets.fashion_mnist
(x_train_all, y_train_all), (x_test, y_test) = fashion_mnist.load_data()
x_valid, x_train = x_train_all[:5000], x_train_all[5000:]
y_valid, y_train = y_train_all[:5000], y_train_all[5000:]

# tf.keras.models.Sequential
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu"))
model.add(keras.layers.Dense(100, activation="relu"))
model.add(keras.layers.Dense(10, activation="softmax"))

# sparse_ because the labels are class indices; with one-hot labels,
# drop the sparse_ prefix
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
              metrics=["accuracy"])
# model.layers
# model.summary()

history = model.fit(x_train, y_train, epochs=10,
                    validation_data=(x_valid, y_valid))
```
3. Output:
```
runfile('F:/new/new world/deep learning/tensorflow/ex2/tf_keras_classification_model.py', wdir='F:/new/new world/deep learning/tensorflow/ex2')
2.0.0
sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0)
matplotlib 3.1.1
numpy 1.16.5
sklearn 0.21.3
tensorflow 2.0.0
tensorflow_core.keras 2.2.4-tf
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
WARNING:tensorflow:Entity <function Function._initialize_uninitialized_variables.<locals>.initialize_variables at 0x0000025EAB633798> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: WARNING: Entity <function Function._initialize_uninitialized_variables.<locals>.initialize_variables at 0x0000025EAB633798> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause:
55000/55000 [==============================] - 3s 58us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 2/10
55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 3/10
55000/55000 [==============================] - 3s 47us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 4/10
55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 5/10
55000/55000 [==============================] - 3s 47us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 6/10
55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 7/10
55000/55000 [==============================] - 3s 47us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 8/10
55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 9/10
55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 10/10
55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
```
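A likely fix, inferred from the symptoms rather than confirmed in the post: fashion_mnist pixels arrive as integers in 0-255, and feeding them unscaled into this "sgd" + softmax setup commonly overflows to NaN in the first epoch. Scaling to [0, 1] before fit usually resolves it:
```
# normalize pixel intensities to [0, 1] before training
x_train = x_train / 255.0
x_valid = x_valid / 255.0
x_test = x_test / 255.0
```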
Does cross_val_predict in sklearn fit the model during cross-validation?
Does sklearn's cross_val_predict fit the model during its run? In a multiclass problem it clearly produces class predictions, but if the original model is never fit, it cannot be used to predict on the test set afterwards. When cross_val_predict generates its predictions, has it already trained the model once? If so, how can the model trained inside cross_val_predict be used for the next prediction step? Thanks!
```
# train data: X0
# labels: y
# test data: v
sgd = SGDClassifier()
scores = cross_val_score(sgd, X0, y, cv=5, scoring='accuracy')
y0 = cross_val_predict(sgd, X0, y, cv=5)
print('scores = {}'.format(scores))
print('AVG : {}'.format(np.mean(scores)))
print('SGD : {}'.format(sgd))
```
Here y0 does get produced, but without fitting, sgd cannot call predict. If I do fit it (say on the whole training set):
```
sgd.fit(X0, y)
y_pred = sgd.predict(X0)
```
the predictions y_pred from the model fit on the full training set differ from the original labels y.
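The short answer in code form (variable names from the question): cross_val_predict fits clones of the estimator internally, one per fold, and then discards them, so the sgd instance itself is never fitted. The usual pattern is to use the CV outputs only for evaluation, then fit the estimator once on the full training set for real predictions:
```
sgd.fit(X0, y)                # fit the actual estimator after cross-validation
y_test_pred = sgd.predict(v)  # v is the test data from the question
```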
What accuracy is typical on fashion_mnist?
What recognition accuracy do people usually get on fashion_mnist? I see many people report around 92%, but one of my networks reached 94%, so I would like to hear what others who have tried it actually get.
```
# my results
x_shape: (60000, 28, 28)
y_shape: (60000,)
epoches: 0 val_acc: 0.4991 train_acc 0.50481665
epoches: 1 val_acc: 0.6765 train_acc 0.66735
epoches: 2 val_acc: 0.755 train_acc 0.7474
epoches: 3 val_acc: 0.7846 train_acc 0.77915
epoches: 4 val_acc: 0.798 train_acc 0.7936
epoches: 5 val_acc: 0.8082 train_acc 0.80365
epoches: 6 val_acc: 0.8146 train_acc 0.8107
epoches: 7 val_acc: 0.8872 train_acc 0.8872333
epoches: 8 val_acc: 0.896 train_acc 0.89348334
epoches: 9 val_acc: 0.9007 train_acc 0.8986
epoches: 10 val_acc: 0.9055 train_acc 0.90243334
epoches: 11 val_acc: 0.909 train_acc 0.9058833
epoches: 12 val_acc: 0.9112 train_acc 0.90868336
epoches: 13 val_acc: 0.9126 train_acc 0.91108334
epoches: 14 val_acc: 0.9151 train_acc 0.9139
epoches: 15 val_acc: 0.9172 train_acc 0.91595
epoches: 16 val_acc: 0.9191 train_acc 0.91798335
epoches: 17 val_acc: 0.9204 train_acc 0.91975
epoches: 18 val_acc: 0.9217 train_acc 0.9220333
epoches: 19 val_acc: 0.9252 train_acc 0.9234667
epoches: 20 val_acc: 0.9259 train_acc 0.92515
epoches: 21 val_acc: 0.9281 train_acc 0.9266667
epoches: 22 val_acc: 0.9289 train_acc 0.92826664
epoches: 23 val_acc: 0.9301 train_acc 0.93005
epoches: 24 val_acc: 0.9315 train_acc 0.93126667
epoches: 25 val_acc: 0.9322 train_acc 0.9328
epoches: 26 val_acc: 0.9331 train_acc 0.9339667
epoches: 27 val_acc: 0.9342 train_acc 0.93523335
epoches: 28 val_acc: 0.9353 train_acc 0.93665
epoches: 29 val_acc: 0.9365 train_acc 0.9379333
epoches: 30 val_acc: 0.9369 train_acc 0.93885
epoches: 31 val_acc: 0.9387 train_acc 0.9399
epoches: 32 val_acc: 0.9395 train_acc 0.9409
epoches: 33 val_acc: 0.94 train_acc 0.9417667
epoches: 34 val_acc: 0.9403 train_acc 0.94271666
epoches: 35 val_acc: 0.9409 train_acc 0.9435167
epoches: 36 val_acc: 0.9418 train_acc 0.94443333
epoches: 37 val_acc: 0.942 train_acc 0.94515
epoches: 38 val_acc: 0.9432 train_acc 0.9460667
epoches: 39 val_acc: 0.9443 train_acc 0.9468833
epoches: 40 val_acc: 0.9445 train_acc 0.94741666
epoches: 41 val_acc: 0.9462 train_acc 0.9482
epoches: 42 val_acc: 0.947 train_acc 0.94893336
epoches: 43 val_acc: 0.9472 train_acc 0.94946665
epoches: 44 val_acc: 0.948 train_acc 0.95028335
epoches: 45 val_acc: 0.9486 train_acc 0.95095
epoches: 46 val_acc: 0.9488 train_acc 0.9515833
epoches: 47 val_acc: 0.9492 train_acc 0.95213336
epoches: 48 val_acc: 0.9495 train_acc 0.9529833
epoches: 49 val_acc: 0.9498 train_acc 0.9537
val_acc: 0.9498
```
```
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt

def to_onehot(y, num):
    lables = np.zeros([num, len(y)])
    for i in range(len(y)):
        lables[y[i], i] = 1
    return lables.T

# preprocess the data
mnist = keras.datasets.fashion_mnist
(train_images, train_lables), (test_images, test_lables) = mnist.load_data()
print('x_shape:', train_images.shape)  # (60000)
print('y_shape:', train_lables.shape)
X_train = train_images.reshape((-1, train_images.shape[1] * train_images.shape[1])) / 255.0
# X_train = tf.reshape(X_train, [-1, X_train.shape[1] * X_train.shape[2]])
Y_train = to_onehot(train_lables, 10)
X_test = test_images.reshape((-1, test_images.shape[1] * test_images.shape[1])) / 255.0
Y_test = to_onehot(test_lables, 10)

# network with two hidden layers
input_nodes = 784
output_nodes = 10
layer1_nodes = 100
layer2_nodes = 50
batch_size = 100
learning_rate_base = 0.8
learning_rate_decay = 0.99
regularization_rate = 0.0000001
epochs = 50
mad = 0.99
learning_rate = 0.005

# def inference(input_tensor, avg_class, w1, b1, w2, b2):
#     if avg_class == None:
#         layer1 = tf.nn.relu(tf.matmul(input_tensor, w1) + b1)
#         return tf.nn.softmax(tf.matmul(layer1, w2) + b2)
#     else:
#         layer1 = tf.nn.relu(tf.matmul(input_tensor, avg_class.average(w1)) + avg_class.average(b1))
#         return tf.matual(layer1, avg_class.average(w2)) + avg_class.average(b2)

def train(mnist):
    X = tf.placeholder(tf.float32, [None, input_nodes], name="input_x")
    Y = tf.placeholder(tf.float32, [None, output_nodes], name="y_true")
    w1 = tf.Variable(tf.truncated_normal([input_nodes, layer1_nodes], stddev=0.1))
    b1 = tf.Variable(tf.constant(0.1, shape=[layer1_nodes]))
    w2 = tf.Variable(tf.truncated_normal([layer1_nodes, layer2_nodes], stddev=0.1))
    b2 = tf.Variable(tf.constant(0.1, shape=[layer2_nodes]))
    w3 = tf.Variable(tf.truncated_normal([layer2_nodes, output_nodes], stddev=0.1))
    b3 = tf.Variable(tf.constant(0.1, shape=[output_nodes]))
    layer1 = tf.nn.relu(tf.matmul(X, w1) + b1)
    A2 = tf.nn.relu(tf.matmul(layer1, w2) + b2)
    A3 = tf.nn.relu(tf.matmul(A2, w3) + b3)
    y_hat = tf.nn.softmax(A3)
    # y_hat = inference(X, None, w1, b1, w2, b2)
    # global_step = tf.Variable(0, trainable=False)
    # variable_averages = tf.train.ExponentialMovingAverage(mad, global_step)
    # varible_average_op = variable_averages.apply(tf.trainable_variables())
    # y = inference(x, variable_averages, w1, b1, w2, b2)
    cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=A3, labels=Y))
    regularizer = tf.contrib.layers.l2_regularizer(regularization_rate)
    regularization = regularizer(w1) + regularizer(w2) + regularizer(w3)
    loss = cross_entropy + regularization * regularization_rate
    # learning_rate = tf.train.exponential_decay(learning_rate_base, global_step, epchos, learning_rate_decay)
    # train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
    # with tf.control_dependencies([train_step, varible_average_op]):
    #     train_op = tf.no_op(name="train")
    correct_prediction = tf.equal(tf.argmax(y_hat, 1), tf.argmax(Y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    total_loss = []
    val_acc = []
    total_train_acc = []
    x_Xsis = []
    with tf.Session() as sess:
        tf.global_variables_initializer().run()
        for i in range(epochs):
            # x, y = next_batch(X_train, Y_train, batch_size)
            batchs = int(X_train.shape[0] / batch_size + 1)
            loss_e = 0.
            for j in range(batchs):
                # was j*(batch_size+1): a typo that gave batch j only j samples
                batch_x = X_train[j * batch_size:min(X_train.shape[0], (j + 1) * batch_size), :]
                batch_y = Y_train[j * batch_size:min(X_train.shape[0], (j + 1) * batch_size), :]
                sess.run(train_step, feed_dict={X: batch_x, Y: batch_y})
                loss_e += sess.run(loss, feed_dict={X: batch_x, Y: batch_y})
                # train_step.run(feed_dict={X: x, Y: y})
            validate_acc = sess.run(accuracy, feed_dict={X: X_test, Y: Y_test})
            train_acc = sess.run(accuracy, feed_dict={X: X_train, Y: Y_train})
            print("epoches: ", i, "val_acc: ", validate_acc, "train_acc", train_acc)
            total_loss.append(loss_e / batch_size)
            val_acc.append(validate_acc)
            total_train_acc.append(train_acc)
            x_Xsis.append(i)
        validate_acc = sess.run(accuracy, feed_dict={X: X_test, Y: Y_test})
        print("val_acc: ", validate_acc)
    return (x_Xsis, total_loss, total_train_acc, val_acc)

result = train((X_train, Y_train, X_test, Y_test))

def plot_acc(total_train_acc, val_acc, x):
    plt.figure()
    plt.plot(x, total_train_acc, '--', color="red", label="train_acc")
    plt.plot(x, val_acc, color="green", label="val_acc")
    plt.xlabel("Epoches")
    plt.ylabel("acc")
    plt.legend()
    plt.show()
```
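A quick check of the mini-batch slicing (the end index in the original listing, j*(batch_size+1), was almost certainly a typo and is corrected to (j+1)*batch_size above; with the old index, batch j contained only j samples):
```
n, batch_size = 60000, 100
for j in range(3):
    start, end = j * batch_size, min(n, (j + 1) * batch_size)
    print(start, end, end - start)  # each batch holds exactly 100 samples
```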
Error with y in cross_val_score cross-validation
![image](https://img-ask.csdn.net/upload/201906/26/1561561318_185866.jpg) ![image](https://img-ask.csdn.net/upload/201906/26/1561561340_595572.jpg) This is the Kaggle house-prices project. Cross-validation with cross_val_score raises the error "Classification metrics can't handle a mix of multiclass and continuous targets". I don't know whether it can be cross-validated this way?
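A hedged sketch: house prices are a continuous regression target, so scoring='accuracy' (a classification metric) produces exactly this error. A regression scorer avoids it (model, X, y stand for the question's estimator and data):
```
from sklearn.model_selection import cross_val_score

# score a regressor with a regression metric instead of 'accuracy'
scores = cross_val_score(model, X, y, cv=5, scoring='neg_mean_squared_error')
```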
Per-batch acc collected with a Keras callback has too few decimal places
I want to use a callback to collect the acc of every batch during training, but the acc collected per batch only has two decimal places, while the acc collected per epoch keeps many decimal places, and the loss collected per batch and per epoch both keep many decimal places. Code below:
```
class LossHistory(callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.losses = {'batch': [], 'epoch': []}
        self.accuracy = {'batch': [], 'epoch': []}
        self.val_loss = {'batch': [], 'epoch': []}
        self.val_acc = {'batch': [], 'epoch': []}

    def on_batch_end(self, batch, logs={}):
        self.losses['batch'].append(logs.get('loss'))
        self.accuracy['batch'].append(logs.get('acc'))
        self.val_loss['batch'].append(logs.get('val_loss'))
        self.val_acc['batch'].append(logs.get('val_acc'))

    def on_epoch_end(self, batch, logs={}):
        self.losses['epoch'].append(logs.get('loss'))
        self.accuracy['epoch'].append(logs.get('acc'))
        self.val_loss['epoch'].append(logs.get('val_loss'))
        self.val_acc['epoch'].append(logs.get('val_acc'))

    def loss_plot(self, loss_type):
        iters = range(len(self.losses[loss_type]))
        plt.figure()
        # acc
        plt.plot(iters, self.accuracy[loss_type], 'r', label='train acc')
        # loss
        plt.plot(iters, self.losses[loss_type], 'g', label='train loss')
        if loss_type == 'epoch':
            # val_acc
            plt.plot(iters, self.val_acc[loss_type], 'b', label='val acc')
            # val_loss
            plt.plot(iters, self.val_loss[loss_type], 'k', label='val loss')
        plt.grid(True)
        plt.xlabel(loss_type)
        plt.ylabel('acc-loss')
        plt.legend(loc="upper right")
        plt.show()


class Csr:
    def __init__(self, voc):
        self.model = Sequential()
        # B*L
        self.model.add(Embedding(voc.num_words, 300, mask_zero=True,
                                 weights=[voc.index2emb], trainable=False))
        # B*L*256
        self.model.add(GRU(256))
        # B*256
        self.model.add(Dropout(0.5))
        self.model.add(Dense(1, activation='sigmoid'))
        # B*1
        self.model.compile(loss='binary_crossentropy', optimizer='rmsprop',
                           metrics=['accuracy'])
        print('compole complete')

    def train(self, x_train, y_train, b_s=50, epo=10):
        print('training.....')
        history = LossHistory()
        his = self.model.fit(x_train, y_train, batch_size=b_s, epochs=epo,
                             callbacks=[history])
        history.loss_plot('batch')
        print('training complete')
        return his, history
```
The program output looks like this:
![image](https://img-ask.csdn.net/upload/201905/14/1557803291_621582.png) ![image](https://img-ask.csdn.net/upload/201905/14/1557803304_240896.png)
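A hedged explanation in code rather than a fix: with batch_size = 50 (b_s=50 above), a single batch's accuracy can only take the values k/50, so every per-batch acc is an exact multiple of 0.02. The "two decimal places" is the metric's true resolution at that batch size, while loss is a continuous average and the epoch-level acc averages over many batches, hence their many digits:
```
batch_size = 50
# every possible per-batch accuracy value at this batch size
possible = [k / batch_size for k in range(batch_size + 1)]
print(possible[:6])  # [0.0, 0.02, 0.04, 0.06, 0.08, 0.1]
```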
Deep-learning image recognition: training loop stops?
Recently, while training InceptionV3, I occasionally hit a problem: at some point the code simply stops iterating, and I don't know why. ![image](https://img-ask.csdn.net/upload/201811/11/1541904283_461011.png) The image above is an example: suppose it stops at step 291 and never continues, while the CPU and GPU stay fully loaded. The code is below:
```
# coding=utf-8
import tensorflow as tf
import numpy as np
import pdb
import os
from datetime import datetime
import slim.inception_model as inception_v3
from create_tf_record import *
import tensorflow.contrib.slim as slim

labels_nums = 7        # number of classes
batch_size = 64
resize_height = 299    # stored image height
resize_width = 299     # stored image width
depths = 3
data_shape = [batch_size, resize_height, resize_width, depths]

# input_images holds the image data
input_images = tf.placeholder(dtype=tf.float32, shape=[None, resize_height, resize_width, depths], name='input')
# input_labels holds the label data
# input_labels = tf.placeholder(dtype=tf.int32, shape=[None], name='label')
input_labels = tf.placeholder(dtype=tf.int32, shape=[None, labels_nums], name='label')

# dropout keep probability
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
is_training = tf.placeholder(tf.bool, name='is_training')
# config = tf.ConfigProto()
# config.gpu_options.allow_growth = True
# tf.Session(config=config)
# tf.Session(config=tf.ConfigProto(allow_growth=True))

def net_evaluation(sess, loss, accuracy, val_images_batch, val_labels_batch, val_nums):
    val_max_steps = int(val_nums / batch_size)
    val_losses = []
    val_accs = []
    for _ in range(val_max_steps):
        val_x, val_y = sess.run([val_images_batch, val_labels_batch])
        # print('labels:', val_y)
        # val_loss = sess.run(loss, feed_dict={x: val_x, y: val_y, keep_prob: 1.0})
        # val_acc = sess.run(accuracy, feed_dict={x: val_x, y: val_y, keep_prob: 1.0})
        val_loss, val_acc = sess.run([loss, accuracy],
                                     feed_dict={input_images: val_x, input_labels: val_y,
                                                keep_prob: 1.0, is_training: False})
        val_losses.append(val_loss)
        val_accs.append(val_acc)
    mean_loss = np.array(val_losses, dtype=np.float32).mean()
    mean_acc = np.array(val_accs, dtype=np.float32).mean()
    return mean_loss, mean_acc

def step_train(train_op, loss, accuracy,
               train_images_batch, train_labels_batch, train_nums, train_log_step,
               val_images_batch, val_labels_batch, val_nums, val_log_step,
               snapshot_prefix, snapshot):
    '''
    The training loop.
    :param train_op: training op
    :param loss: loss function
    :param accuracy: accuracy function
    :param train_images_batch: training images
    :param train_labels_batch: training labels
    :param train_nums: total number of training samples
    :param train_log_step: interval between training log lines
    :param val_images_batch: validation images
    :param val_labels_batch: validation labels
    :param val_nums: total number of validation samples
    :param val_log_step: interval between validation log lines
    :param snapshot_prefix: path prefix for saved models
    :param snapshot: model save interval
    :return: None
    '''
    # init = tf.global_variables_initializer()
    saver = tf.train.Saver()
    max_acc = 0.0
    # ckpt = tf.train.get_checkpoint_state('D:/can_test/inception v3/')
    # saver = tf.train.import_meta_graph(ckpt.model_checkpoint_path + '.meta')
    # tf.reset_default_graph()
    with tf.Session() as sess:
        # sess.run(tf.global_variables_initializer())  # for resuming training
        # saver = tf.train.import_meta_graph('D://can_test/inception v3/best_models_2_0.7500.ckpt.meta')  # resume
        # saver.restore(sess, tf.train.latest_checkpoint('D://can_test/inception v3/'))  # resume
        sess.run(tf.global_variables_initializer())
        sess.run(tf.local_variables_initializer())
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)
        for i in range(max_steps + 1):
            batch_input_images, batch_input_labels = sess.run([train_images_batch, train_labels_batch])
            _, train_loss = sess.run([train_op, loss],
                                     feed_dict={input_images: batch_input_images,
                                                input_labels: batch_input_labels,
                                                keep_prob: 0.5, is_training: True})
            # train check (only one training batch is evaluated here)
            if i % train_log_step == 0:
                train_acc = sess.run(accuracy,
                                     feed_dict={input_images: batch_input_images,
                                                input_labels: batch_input_labels,
                                                keep_prob: 1.0, is_training: False})
                print("%s: Step [%d] train Loss : %f, training accuracy : %g" % (
                    datetime.now(), i, train_loss, train_acc))
            # val check (all validation data)
            if i % val_log_step == 0:
                mean_loss, mean_acc = net_evaluation(sess, loss, accuracy,
                                                     val_images_batch, val_labels_batch, val_nums)
                print("%s: Step [%d] val Loss : %f, val accuracy : %g" % (
                    datetime.now(), i, mean_loss, mean_acc))
            # save the model every `snapshot` steps, or at the last step
            if i == max_steps:
                print('-----save:{}-{}'.format(snapshot_prefix, i))
                saver.save(sess, snapshot_prefix, global_step=i)
            # keep the model with the best validation accuracy
            if mean_acc > max_acc and mean_acc > 0.90:
                max_acc = mean_acc
                path = os.path.dirname(snapshot_prefix)
                best_models = os.path.join(path, 'best_models_{}_{:.4f}.ckpt'.format(i, max_acc))
                print('------save:{}'.format(best_models))
                saver.save(sess, best_models)
        coord.request_stop()
        coord.join(threads)

def train(train_record_file, train_log_step, train_param, val_record_file, val_log_step,
          labels_nums, data_shape, snapshot, snapshot_prefix):
    '''
    :param train_record_file: tfrecord file for training
    :param train_log_step: interval between training log lines
    :param train_param: training parameters
    :param val_record_file: tfrecord file for validation
    :param val_log_step: interval between validation log lines
    :param labels_nums: number of labels
    :param data_shape: input data shape
    :param snapshot: model save interval
    :param snapshot_prefix: prefix for saved model files
    :return:
    '''
    [base_lr, max_steps] = train_param
    [batch_size, resize_height, resize_width, depths] = data_shape
    # get the number of training and test samples
    train_nums = get_example_nums(train_record_file)
    val_nums = get_example_nums(val_record_file)
    print('train nums:%d,val nums:%d' % (train_nums, val_nums))
    # read images and labels from the records
    # training data should generally be shuffled (shuffle=True)
    train_images, train_labels = read_records(train_record_file, resize_height, resize_width, type='normalization')
    train_images_batch, train_labels_batch = get_batch_images(train_images, train_labels,
                                                              batch_size=batch_size, labels_nums=labels_nums,
                                                              one_hot=True, shuffle=True)
    # validation data does not need shuffling
    val_images, val_labels = read_records(val_record_file, resize_height, resize_width, type='normalization')
    val_images_batch, val_labels_batch = get_batch_images(val_images, val_labels,
                                                          batch_size=batch_size, labels_nums=labels_nums,
                                                          one_hot=True, shuffle=False)
    # Define the model:
    with slim.arg_scope(inception_v3.inception_v3_arg_scope()):
        out, end_points = inception_v3.inception_v3(inputs=input_images, num_classes=labels_nums,
                                                    dropout_keep_prob=keep_prob, is_training=is_training)
    # Specify the loss function: losses defined via tf.losses are added to the
    # total loss automatically, no add_loss() needed
    tf.losses.softmax_cross_entropy(onehot_labels=input_labels, logits=out)  # cross-entropy loss, loss=1.6
    # slim.losses.add_loss(my_loss)
    loss = tf.losses.get_total_loss(add_regularization_losses=True)  # add regularization losses, loss=2.2
    accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(out, 1), tf.argmax(input_labels, 1)), tf.float32))
    # Specify the optimization scheme:
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=base_lr)
    # global_step = tf.Variable(0, trainable=False)
    # learning_rate = tf.train.exponential_decay(0.05, global_step, 150, 0.9)
    # optimizer = tf.train.MomentumOptimizer(learning_rate, 0.9)
    # train_tensor = optimizer.minimize(loss, global_step)
    # train_op = slim.learning.create_train_op(loss, optimizer, global_step=global_step)

    # When batch_norm layers are used, the moving average and variance of each
    # layer must be updated; that is not part of the normal training step, so
    # the update ops have to be run manually, as below.
    # Collect all ops that need updating via tf.get_collection
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    # Use TensorFlow control flow: run the update ops first, then the train op
    with tf.control_dependencies(update_ops):
        # create_train_op ensures that when we evaluate it to get the loss,
        # the update_ops are done and the gradient updates are computed.
        # train_op = slim.learning.create_train_op(total_loss=loss, optimizer=optimizer)
        train_op = slim.learning.create_train_op(total_loss=loss, optimizer=optimizer)
    # the training loop
    step_train(train_op, loss, accuracy,
               train_images_batch, train_labels_batch, train_nums, train_log_step,
               val_images_batch, val_labels_batch, val_nums, val_log_step,
               snapshot_prefix, snapshot)

if __name__ == '__main__':
    train_record_file = '/home/lab/new_jeremie/train.tfrecords'
    val_record_file = '/home/lab/new_jeremie/val.tfrecords'
    # train_record_file = 'D://cancer_v2/data/cancer/train.tfrecords'
    # val_record_file = 'D://val.tfrecords'
    train_log_step = 1
    base_lr = 0.01       # learning rate
    max_steps = 100000   # number of iterations
    train_param = [base_lr, max_steps]
    val_log_step = 1
    snapshot = 2000      # save interval
    snapshot_prefix = './v3model.ckpt'
    train(train_record_file=train_record_file,
          train_log_step=train_log_step,
          train_param=train_param,
          val_record_file=val_record_file,
          val_log_step=val_log_step,
          # val_log_step=val_log_step,
          labels_nums=labels_nums,
          data_shape=data_shape,
          snapshot=snapshot,
          snapshot_prefix=snapshot_prefix)
```
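A debugging sketch, not a confirmed fix: with queue-runner input pipelines like this one, a starved queue makes sess.run block forever while the CPU/GPU stay busy, which matches the symptom. Setting an operation timeout on the session turns the silent hang into a DeadlineExceededError that names the blocked op:
```
# fail loudly after 60 s instead of hanging on a starved queue
config = tf.ConfigProto(operation_timeout_in_ms=60000)
with tf.Session(config=config) as sess:
    # ... run the same training loop as above ...
    pass
```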
Keras classification: validation loss and accuracy display 0.0000e+00, but the validation curves still appear in the final plots
```
data_train, data_test, label_train, label_test = train_test_split(data_all, label_all, test_size=0.2, random_state=1)
data_train, data_val, label_train, label_val = train_test_split(data_train, label_train, test_size=0.25)
data_train = np.asarray(data_train, np.float32)
data_test = np.asarray(data_test, np.float32)
data_val = np.asarray(data_val, np.float32)
label_train = np.asarray(label_train, np.int32)
label_test = np.asarray(label_test, np.int32)
label_val = np.asarray(label_val, np.int32)

training = model.fit_generator(datagen.flow(data_train, label_train_binary, batch_size=200, shuffle=True),
                               validation_data=(data_val, label_val_binary),
                               samples_per_epoch=len(data_train) * 8,
                               nb_epoch=30, verbose=1)

def plot_history(history):
    plt.plot(training.history['acc'])
    plt.plot(training.history['val_acc'])
    plt.title('model accuracy')
    plt.xlabel('epoch')
    plt.ylabel('accuracy')
    plt.legend(['acc', 'val_acc'], loc='lower right')
    plt.show()
    plt.plot(training.history['loss'])
    plt.plot(training.history['val_loss'])
    plt.title('model loss')
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.legend(['loss', 'val_loss'], loc='lower right')
    plt.show()

plot_history(training)
```
![image](https://img-ask.csdn.net/upload/201812/10/1544423669_112599.jpg) ![image](https://img-ask.csdn.net/upload/201812/10/1544423681_598605.jpg)
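A small check, assuming the `training` History object returned by fit_generator above: validation metrics are only computed at the end of each epoch, so the mid-epoch progress display can show them as 0.0000e+00 even though the epoch-end values, which are what the plots read, are filled in correctly:
```
# epoch-end validation metrics actually recorded by Keras
print(training.history['val_loss'])
print(training.history['val_acc'])
```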
Transfer learning for medical image analysis: accuracy stops changing after training the network...
I'm fine-tuning a VGG16 network with weights imported from Keras; the top three layers form a small classifier whose weights are learned by training. There are about 300 training samples and about 80 validation samples. After the program runs, loss and acc change between the first and second epoch and then stop changing, and the validation accuracy stays close to zero. I'd appreciate help from anyone experienced with CNNs and machine learning to see where the problem is.
```
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import GlobalAveragePooling2D
import numpy as np
from keras.optimizers import RMSprop
from keras.utils import np_utils
import matplotlib.pyplot as plt
from keras import regularizers
from keras.applications.vgg16 import VGG16
from keras import optimizers
from keras.layers.core import Lambda
from keras import backend as K
from keras.models import Model

# A LossHistory callback class that saves loss and acc for plotting with Keras
class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):  # at on_batch_begin, logs contains size, the sample count of the current batch
        self.losses = {'batch': [], 'epoch': []}
        self.accuracy = {'batch': [], 'epoch': []}
        self.val_loss = {'batch': [], 'epoch': []}
        self.val_acc = {'batch': [], 'epoch': []}

    def on_batch_end(self, batch, logs={}):
        self.losses['batch'].append(logs.get('loss'))
        self.accuracy['batch'].append(logs.get('acc'))
        self.val_loss['batch'].append(logs.get('val_loss'))
        self.val_acc['batch'].append(logs.get('val_acc'))

    def on_epoch_end(self, batch, logs={}):  # pull the values from logs after each epoch
        self.losses['epoch'].append(logs.get('loss'))
        self.accuracy['epoch'].append(logs.get('acc'))
        self.val_loss['epoch'].append(logs.get('val_loss'))
        self.val_acc['epoch'].append(logs.get('val_acc'))

    def loss_plot(self, loss_type):
        iters = range(len(self.losses[loss_type]))  # x axis of the plots
        plt.figure()  # create an empty figure
        if loss_type == 'epoch':
            plt.subplot(211)
            plt.plot(iters, self.accuracy[loss_type], 'r', label='train acc')
            plt.plot(iters, self.val_acc[loss_type], 'b', label='val acc')  # val_acc in blue
            plt.grid(True)
            plt.xlabel(loss_type)
            plt.ylabel('accuracy')
            plt.show()
            plt.subplot(212)
            plt.plot(iters, self.losses[loss_type], 'r', label='train loss')
            plt.plot(iters, self.val_loss[loss_type], 'b', label='val loss')
            plt.xlabel(loss_type)
            plt.ylabel('loss')
            plt.legend(loc="upper right")  # put the legends of multiple axes on one plot; loc sets the position
            plt.show()
            print(np.mean(self.val_acc[loss_type]))
            print(np.std(self.val_acc[loss_type]))

seed = 7
np.random.seed(seed)

# training parameters
batch_size = 32
num_classes = 2
epochs = 100
weight_decay = 0.0005
learn_rate = 0.0001

# load train/test data, resize, show basic info
X_train = np.load(open('/image_BRATS_240_240_3_normal.npy', mode='rb'))
Y_train = np.load(open('/label_BRATS_240_240_3_normal.npy', mode='rb'))
Y_train = keras.utils.to_categorical(Y_train, 2)

# build the network
model_vgg16 = VGG16(include_top=False, weights='imagenet', input_shape=(240, 240, 3), classes=2)
model_vgg16.layers.pop()
model = Sequential()
model.add(model_vgg16)
model.add(Flatten(input_shape=X_train.shape[1:]))
model.add(Dense(436, activation='relu'))  # returns an x*10 vector
model.add(Dense(2, activation='softmax'))
# model(inputs=model_vgg16.input, outputs=predictions)
for layer in model_vgg16.layers[:13]:
    layer.trainable = False
model_vgg16.summary()
model.compile(optimizer=RMSprop(lr=learn_rate, decay=weight_decay),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
history = LossHistory()
model.fit(X_train, Y_train,
          batch_size=batch_size, epochs=epochs,
          verbose=1, shuffle=True,
          validation_split=0.2,
          callbacks=[history])

# model evaluation
history.loss_plot('epoch')
```
For example: ![experiment results](https://img-ask.csdn.net/upload/201804/19/1524134477_869793.png)
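A hedged first check (names from the code above): model.fit's validation_split always takes the last 20% of the arrays before any shuffling (the shuffle=True argument only shuffles the remaining training portion), so if the samples are stored grouped by class, the validation set can end up nearly single-class and val_acc collapses toward zero. Shuffling once before fit rules this out:
```
import numpy as np

# shuffle samples and labels together, so the
# validation_split slice is class-balanced
idx = np.random.permutation(len(X_train))
X_train, Y_train = X_train[idx], Y_train[idx]
```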
TensorFlow: testing right after training in the same session works well, but loading the saved model for testing performs much worse. Why?
After training a model in TensorFlow, testing directly in the same session gives good results, but reloading the saved model and testing gives poor results, and I don't understand why. Note: the test data is identical. Model results: train loss 0.384, acc 0.931; validation loss 0.212, acc 0.968; test set in the same session right after training: acc 0.96; test after loading the saved model: acc 0.29.
```
def create_model(hps):
    global_step = tf.Variable(tf.zeros([], tf.float64), name='global_step', trainable=False)
    scale = 1.0 / math.sqrt(hps.num_embedding_size + hps.num_lstm_nodes[-1]) / 3.0
    print(type(scale))
    gru_init = tf.random_normal_initializer(-scale, scale)
    with tf.variable_scope('Bi_GRU_nn', initializer=gru_init):
        for i in range(hps.num_lstm_layers):
            cell_bw = tf.contrib.rnn.GRUCell(hps.num_lstm_nodes[i], activation=tf.nn.relu, name='cell-bw')
            cell_bw = tf.contrib.rnn.DropoutWrapper(cell_bw, output_keep_prob=dropout_keep_prob)
            cell_fw = tf.contrib.rnn.GRUCell(hps.num_lstm_nodes[i], activation=tf.nn.relu, name='cell-fw')
            cell_fw = tf.contrib.rnn.DropoutWrapper(cell_fw, output_keep_prob=dropout_keep_prob)
        rnn_outputs, _ = tf.nn.bidirectional_dynamic_rnn(cell_bw, cell_fw, inputs, dtype=tf.float32)
        embeddedWords = tf.concat(rnn_outputs, 2)
        finalOutput = embeddedWords[:, -1, :]
        outputSize = hps.num_lstm_nodes[-1] * 2  # the final output concatenates fw and bw of the Bi-LSTM, hence * 2
        last = tf.reshape(finalOutput, [-1, outputSize])  # reshape to the FC layer's input size
        last = tf.layers.batch_normalization(last, training=is_training)
    fc_init = tf.uniform_unit_scaling_initializer(factor=1.0)
    with tf.variable_scope('fc', initializer=fc_init):
        fc1 = tf.layers.dense(last, hps.num_fc_nodes, name='fc1')
        fc1_batch_normalization = tf.layers.batch_normalization(fc1, training=is_training)
        fc_activation = tf.nn.relu(fc1_batch_normalization)
        logits = tf.layers.dense(fc_activation, hps.num_classes, name='fc2')
    with tf.name_scope('metrics'):
        softmax_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=tf.argmax(outputs, 1))
        loss = tf.reduce_mean(softmax_loss)
        # [0, 1, 5, 4, 2] -> argmax: 2, because the largest value is at position 2
        y_pred = tf.argmax(tf.nn.softmax(logits), 1, output_type=tf.int64, name='y_pred')
        # accuracy: count how many predictions are correct
        correct_pred = tf.equal(tf.argmax(outputs, 1), y_pred)
        # tf.cast converts the data to tf.float32
        accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
    with tf.name_scope('train_op'):
        tvar = tf.trainable_variables()
        for var in tvar:
            print('variable name: %s' % (var.name))
        grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvar), hps.clip_lstm_grads)
        optimizer = tf.train.AdamOptimizer(hps.learning_rate)
        train_op = optimizer.apply_gradients(zip(grads, tvar), global_step)
    # return ((inputs, outputs, is_training), (loss, accuracy, y_pred), (train_op, global_step))
    return ((inputs, outputs), (loss, accuracy, y_pred), (train_op, global_step))

placeholders, metrics, others = create_model(hps)
content, labels = placeholders
loss, accuracy, y_pred = metrics
train_op, global_step = others

def val_steps(sess, x_batch, y_batch, writer=None):
    loss_val, accuracy_val = sess.run([loss, accuracy],
                                      feed_dict={inputs: x_batch, outputs: y_batch,
                                                 is_training: hps.val_is_training,
                                                 dropout_keep_prob: 1.0})
    return loss_val, accuracy_val

loss_summary = tf.summary.scalar('loss', loss)
accuracy_summary = tf.summary.scalar('accuracy', accuracy)
# merge all summaries
merged_summary = tf.summary.merge_all()
# summary used for testing
merged_summary_test = tf.summary.merge([loss_summary, accuracy_summary])

LOG_DIR = '.'
run_label = 'run_Bi-GRU_Dropout_tensorboard'
run_dir = os.path.join(LOG_DIR, run_label)
if not os.path.exists(run_dir):
    os.makedirs(run_dir)
train_log_dir = os.path.join(run_dir, timestamp, 'train')
test_los_dir = os.path.join(run_dir, timestamp, 'test')
if not os.path.exists(train_log_dir):
    os.makedirs(train_log_dir)
if not os.path.join(test_los_dir):
    os.makedirs(test_los_dir)

# saver gives a file handle for saving training snapshots into the folder
saver = tf.train.Saver(tf.global_variables(), max_to_keep=5)

# training code
init_op = tf.global_variables_initializer()
train_keep_prob_value = 0.2
test_keep_prob_value = 1.0
# computing the summary at every step would be slow, so store it every 100 steps
output_summary_every_steps = 100
num_train_steps = 1000
# how often to save the model
output_model_every_steps = 500
# test-set evaluation interval
test_model_all_steps = 4000
i = 0

session_conf = tf.ConfigProto(
    gpu_options=tf.GPUOptions(allow_growth=True),
    allow_soft_placement=True,
    log_device_placement=False)

with tf.Session(config=session_conf) as sess:
    sess.run(init_op)
    # write loss/accuracy to file during training; pass sess.graph to also show the graph in tensorboard
    train_writer = tf.summary.FileWriter(train_log_dir, sess.graph)
    # likewise save the test results to tensorboard, without the graph
    test_writer = tf.summary.FileWriter(test_los_dir)
    batches = batch_iter(list(zip(x_train, y_train)), hps.batch_size, hps.num_epochs)
    for batch in batches:
        train_x, train_y = zip(*batch)
        eval_ops = [loss, accuracy, train_op, global_step]
        should_out_summary = ((i + 1) % output_summary_every_steps == 0)
        if should_out_summary:
            eval_ops.append(merged_summary)
        # feed the three placeholders and run the graph
        # for loss, accuracy, train_op, global_step
        eval_ops.append(merged_summary)
        outputs_train = sess.run(eval_ops,
                                 feed_dict={inputs: train_x,
                                            outputs: train_y,
                                            dropout_keep_prob: train_keep_prob_value,
                                            is_training: hps.train_is_training})
        loss_train, accuracy_train = outputs_train[0:2]
        if should_out_summary:
            # the summary is only computed every 100 steps; when
            # should_out_summary is True it was appended at the end of
            # eval_ops, so take the last element of the results
            train_summary_str = outputs_train[-1]
            # write it to the training tensorboard folder; steps start from 0, so add 1
            train_writer.add_summary(train_summary_str, i + 1)
            test_summary_str = sess.run([merged_summary_test],
                                        feed_dict={inputs: x_dev,
                                                   outputs: y_dev,
                                                   dropout_keep_prob: 1.0,
                                                   is_training: hps.val_is_training})[0]
            test_writer.add_summary(test_summary_str, i + 1)
        current_step = tf.train.global_step(sess, global_step)
        if (i + 1) % 100 == 0:
            print("Step: %5d, loss: %3.3f, accuracy: %3.3f" % (i + 1, loss_train, accuracy_train))
        # validate every 500 batches
        if (i + 1) % 500 == 0:
            loss_eval, accuracy_eval = val_steps(sess, x_dev, y_dev)
            print("Step: %5d, val_loss: %3.3f, val_accuracy: %3.3f" % (i + 1, loss_eval, accuracy_eval))
        if (i + 1) % output_model_every_steps == 0:
            path = saver.save(sess, os.path.join(out_dir, 'ckp-%05d' % (i + 1)))
            print("Saved model checkpoint to {}\n".format(path))
            print('model saved to ckp-%05d' % (i + 1))
        if (i + 1) % test_model_all_steps == 0:
            # test_loss, test_acc, all_predictions = sess.run([loss, accuracy, y_pred], feed_dict={inputs: x_test, outputs: y_test, dropout_keep_prob: 1.0})
            test_loss, test_acc, all_predictions = sess.run(
                [loss, accuracy, y_pred],
                feed_dict={inputs: x_test, outputs: y_test,
                           is_training: hps.val_is_training, dropout_keep_prob: 1.0})
            print("test_loss: %3.3f, test_acc: %3.3d" % (test_loss, test_acc))
            batches = batch_iter(list(x_test), 128, 1, shuffle=False)
            # Collect the predictions here
            all_predictions = []
            for x_test_batch in batches:
                batch_predictions = sess.run(y_pred, {inputs: x_test_batch,
                                                      is_training: hps.val_is_training,
                                                      dropout_keep_prob: 1.0})
                all_predictions = np.concatenate([all_predictions, batch_predictions])
            correct_predictions = float(sum(all_predictions == y.flatten()))
            print("Total number of test examples: {}".format(len(y_test)))
            print("Accuracy: {:g}".format(correct_predictions / float(len(y_test))))
            test_y = y_test.argmax(axis=1)
            # build the confusion matrix
            conf_mat = confusion_matrix(test_y, all_predictions)
            fig, ax = plt.subplots(figsize=(4, 2))
            sns.heatmap(conf_mat, annot=True, fmt='d',
                        xticklabels=cat_id_df.category_id.values,
                        yticklabels=cat_id_df.category_id.values)
            font_set = FontProperties(fname=r"/usr/share/fonts/truetype/wqy/wqy-microhei.ttc", size=15)
            plt.ylabel(u'实际结果', fontsize=18, fontproperties=font_set)
            plt.xlabel(u'预测结果', fontsize=18, fontproperties=font_set)
            plt.savefig('./test.png')
            print('accuracy %s' % accuracy_score(all_predictions, test_y))
            print(classification_report(test_y, all_predictions, target_names=cat_id_df['category_name'].values))
            print(classification_report(test_y, all_predictions))
        i += 1
```
That's the model code. Could someone please look it over and tell me why this happens?
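A hedged sketch of the restore path (placeholder and tensor names taken from the code above; the checkpoint directory is assumed to be out_dir): a common cause of "same session is fine, reloaded model is near chance" is running the variable initializer after tf.train.Saver.restore, or rebuilding the graph slightly differently before restoring, either of which replaces the trained weights:
```
saver = tf.train.Saver()
with tf.Session() as sess:
    # restore only; do NOT run init_op afterwards
    saver.restore(sess, tf.train.latest_checkpoint(out_dir))
    test_acc = sess.run(accuracy,
                        feed_dict={inputs: x_test, outputs: y_test,
                                   is_training: hps.val_is_training,
                                   dropout_keep_prob: 1.0})
```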
Running the MixMatch source on the CIFAR10 dataset raises AttributeError: 'CIFAR10' object has no attribute 'targets'. What's going on?
1. When running the MixMatch program and loading CIFAR10 with torchvision.datasets, I get AttributeError: 'CIFAR10' object has no attribute 'targets'.
Another question: because downloading through torchvision is very slow, I downloaded the dataset first and put it under the data directory. Could that affect the results?
Any suggestions are appreciated, thanks. The dataset-loading code is below:
```
def get_cifar10(root, n_labeled, transform_train=None, transform_val=None, download=True):
    base_dataset = torchvision.datasets.CIFAR10(root, train=True, target_transform=True, download=download)
    train_labeled_idxs, train_unlabeled_idxs, val_idxs = train_val_split(base_dataset.targets, int(n_labeled / 10))
    train_labeled_dataset = CIFAR10_labeled(root, train_labeled_idxs, train=True, transform=transform_train)
    train_unlabeled_dataset = CIFAR10_unlabeled(root, train_unlabeled_idxs, train=True, transform=TransformTwice(transform_train))
    val_dataset = CIFAR10_labeled(root, val_idxs, train=True, transform=transform_val, download=True)
    test_dataset = CIFAR10_labeled(root, train=False, transform=transform_val, download=True)
    print(f"#Labeled: {len(train_labeled_idxs)} #Unlabeled: {len(train_unlabeled_idxs)} #Val: {len(val_idxs)}")
    return train_labeled_dataset, train_unlabeled_dataset, val_dataset, test_dataset
```
```
def train_val_split(labels, n_labeled_per_class):
    labels = np.array(labels)
    train_labeled_idxs = []
    train_unlabeled_idxs = []
    val_idxs = []
    for i in range(10):
        idxs = np.where(labels == i)[0]
        np.random.shuffle(idxs)
        train_labeled_idxs.extend(idxs[:n_labeled_per_class])
        train_unlabeled_idxs.extend(idxs[n_labeled_per_class:-500])
        val_idxs.extend(idxs[-500:])
    np.random.shuffle(train_labeled_idxs)
    np.random.shuffle(train_unlabeled_idxs)
    np.random.shuffle(val_idxs)
    return train_labeled_idxs, train_unlabeled_idxs, val_idxs
```
The error message:
```
(base) D:\CSStudy\PycharmProject\MixMatch-pytorch-master>python train.py --gpu 0 --n-labeled 250 --out cifar10@250
==> Preparing cifar10
Using downloaded and verified file: ./data\cifar-10-python.tar.gz
Traceback (most recent call last):
  File "train.py", line 431, in <module>
    main()
  File "train.py", line 88, in main
    train_labeled_set, train_unlabeled_set, val_set, test_set = dataset.get_cifar10('./data', args.n_labeled, transform_train=transform_train, transform_val=transform_val)
  File "D:\CSStudy\PycharmProject\MixMatch-pytorch-master\dataset\cifar10.py", line 21, in get_cifar10
    train_labeled_idxs, train_unlabeled_idxs, val_idxs = train_val_split(base_dataset.targets, int(n_labeled/10))
AttributeError: 'CIFAR10' object has no attribute 'targets'
```
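A hedged compatibility shim: older torchvision releases expose the labels as train_labels/test_labels, and the targets attribute only appeared later, so either upgrading torchvision or falling back as below should work. (Note also that target_transform expects a callable, not True.) Pre-downloading the archive into data/ is fine as long as the checksum verifies, as the "Using downloaded and verified file" line shows it does:
```
# fall back to the old attribute name on older torchvision
labels = getattr(base_dataset, 'targets', None)
if labels is None:
    labels = base_dataset.train_labels
```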
Building a Keras network to recognize single SimHei (黑体) Chinese characters: val_acc = 0.0002
This is how I read in the training data. The dataset consists of single-character images (shape 64*64) generated from a txt file, covering 788 character classes (including digits and the character X). Each image's filename starts with the character's position in the txt dictionary, which serves as its label.
```
def read_train_image(self, name):
    img = Image.open(name).convert('RGB')
    return np.array(img)

def train(self):
    train_img_list = []
    train_label_list = []
    for file in os.listdir('train'):
        files_img_in_array = self.read_train_image(name='train/' + file)
        train_img_list.append(files_img_in_array)          # accumulate the image list
        train_label_list.append(int(file.split('_')[0]))   # accumulate the label list
    train_img_list = np.array(train_img_list)
    train_label_list = np.array(train_label_list)
    train_label_list = np_utils.to_categorical(train_label_list, self.count)
    train_img_list = train_img_list.astype('float32')
    train_img_list /= 255
```
After training, train_acc reaches 0.99, but the validation accuracy stays at essentially 0. The network structure:
```
model = Sequential()
# First convolutional layer
model.add(Convolution2D(32, 3, 3, border_mode='valid', input_shape=(64, 64, 3), kernel_regularizer=l2(0.0001)))
model.add(BatchNormalization(axis=3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Second convolutional layer
model.add(Convolution2D(64, 3, 3, border_mode='valid', kernel_regularizer=l2(0.0001)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Third convolutional layer
model.add(Convolution2D(128, 3, 3, border_mode='valid', kernel_regularizer=l2(0.0001)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Fully connected layers
model.add(Flatten())
model.add(Dense(128, init='he_normal'))
model.add(BatchNormalization())
model.add(Activation('relu'))
# Output layer: softmax over the probability of each character class
model.add(Dense(output_dim=self.count, init='he_normal'))
model.add(Activation('softmax'))
# Loss function and optimizer
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# Train with the given batch size and number of epochs
model.fit(
    train_img_list,
    train_label_list,
    epochs=500,
    batch_size=128,
    validation_split=0.2,
    verbose=1,
    shuffle=False,
)
```
That is roughly the structure. After a dozen or so epochs the training accuracy reaches 0.99 while val_acc stays at 0. Posts online suggest this is overfitting; I'd appreciate pointers from anyone experienced.
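One thing worth noting (a hedged guess based only on the code shown): Keras's validation_split takes the last 20% of the arrays before any shuffling, and with shuffle=False plus files read in os.listdir order, the validation slice can consist largely of character classes the model never saw during training, which would pin val_acc near zero regardless of overfitting. A minimal sketch that shuffles once up front, continuing the train() variables above:
```
import numpy as np

# Shuffle images and labels together before fit(), so the tail 20% that
# validation_split carves off contains a mix of all character classes.
perm = np.random.permutation(len(train_img_list))
train_img_list = train_img_list[perm]
train_label_list = train_label_list[perm]

model.fit(train_img_list, train_label_list,
          epochs=500, batch_size=128,
          validation_split=0.2, verbose=1)
```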
TensorFlow training: the weights never update, the loss doesn't decrease, the outputs stay constant, and only the biases change, very slowly?
No parameter in the model is initialized to 0. I've tried learning rates from 1e-5 up to 0.1. The input data are normalized to [0, 1]. The model is a modified VGG16 with four outputs, initialized from the VGG16 pretrained weights, and printing intermediate results shows no NaN or Inf values. Is a custom loss function not allowed? Yet printing the intermediate gradients shows they are not zero, which is very strange.
**Part of train.py**
```
def train():
    x = tf.placeholder(tf.float32, [None, 182, 182, 2], name='image_input')
    y_ = tf.placeholder(tf.float32, [None, 8], name='label_input')
    global_step = tf.Variable(0, trainable=False)
    learning_rate = tf.train.exponential_decay(learning_rate=0.0001, decay_rate=0.9,
                                               global_step=TRAINING_STEPS, decay_steps=50, staircase=True)

    # Load the images: pos are images labeled 1, neg are images labeled 0
    pos, neg = get_data.get_image(img_path)

    # The labels are fixed: the first 4 images of each batch are pos, the last 4 are neg
    label_batch = np.reshape(np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0]), [1, 8])

    vgg = vgg16.Vgg16()
    vgg.build(x)
    # The loss function is defined further below
    loss = vgg.side_loss(y_, vgg.output1, vgg.output2, vgg.output3, vgg.output4)
    train_step = tf.train.AdamOptimizer(learning_rate).minimize(loss, global_step=global_step)
    init_op = tf.global_variables_initializer()
    saver = tf.train.Saver()

    with tf.device('/gpu:0'):
        with tf.Session() as sess:
            sess.run(init_op)
            for i in range(TRAINING_STEPS):
                # batch_size = 4 is defined elsewhere in train.py
                start = i * batch_size
                end = start + batch_size
                # Build the input batch: first 4 images labeled 1, last 4 labeled 0
                image_list = []
                image_list.append(pos[start:end])
                image_list.append(neg[start:end])
                image_batch = np.reshape(np.array(image_list), [-1, 182, 182, 2])

                _, loss_val, step = sess.run([train_step, loss, global_step],
                                             feed_dict={x: image_batch, y_: label_batch})
                if i % 50 == 0:
                    print("the step is %d,loss is %f" % (step, loss_val))
                if loss_val < min_loss:
                    min_loss = loss_val
                    saver.save(sess, 'ckpt/vgg.ckpt', global_step=2000)
```
**Definition of the loss function (inside the Vgg16 class)**
```
class Vgg16:
    # a, b, c, d are the four outputs of the VGG model (multi-output model)
    def side_loss(self, yi, a, b, c, d):
        self.loss1 = self.f_indicator(yi, a)
        self.loss2 = self.f_indicator(yi, b)
        self.loss3 = self.f_indicator(yi, c)
        self.loss_fuse = self.f_indicator(yi, d)
        self.loss_side = self.loss1 + self.loss2 + self.loss3 + self.loss_fuse
        res_loss = tf.reduce_sum(self.loss_side)
        return res_loss

    # The loss: log(1 - yj) when the label is 0, log(yj) when the label is 1
    def f_indicator(self, yi, yj):
        b = tf.where(yj >= 1, yj * 50, tf.abs(tf.log(tf.abs(1 - yj))))
        res = tf.where(tf.equal(yi, 0.0), b,
                       tf.abs(tf.log(tf.clip_by_value(yj, 1e-8, float("inf")))))
        return res
```
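Two hedged suggestions, not fixes confirmed against the original code. Custom losses are certainly allowed, but piecewise ones built from tf.where and tf.log saturate easily. If the four side outputs are raw, unsquashed scores (logits), TensorFlow's fused op computes the same binary cross-entropy stably; and printing per-variable gradient norms shows which layers actually receive signal:
```
import tensorflow as tf

# Sketch of a stable replacement for side_loss/f_indicator, assuming the
# four outputs a, b, c, d are raw logits (not already sigmoid-squashed):
def side_loss_stable(y_, a, b, c, d):
    total = 0.0
    for logits in (a, b, c, d):
        total += tf.reduce_sum(
            tf.nn.sigmoid_cross_entropy_with_logits(labels=y_, logits=logits))
    return total

# Quick diagnostic: inspect gradient norms per trainable variable,
# then sess.run(grad_norms, feed_dict=...) inside the training loop.
grads = tf.gradients(loss, tf.trainable_variables())
grad_norms = [tf.norm(g) for g in grads if g is not None]
```
If the outputs are already sigmoid probabilities rather than logits, the fused op does not apply directly, but the gradient-norm check is still a cheap way to see whether the convolutional weights receive any gradient at all.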
Tensorflow 2.0: When using data tensors as input to a model, you should specify the `steps_per_epoch` argument.
The code below errors on the last step of each epoch. Could someone explain what the problem is?
```
import tensorflow_datasets as tfds

dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True, as_supervised=True)
train_dataset, test_dataset = dataset['train'], dataset['test']
tokenizer = info.features['text'].encoder
print('vocabulary size: ', tokenizer.vocab_size)

sample_string = 'Hello world, tensorflow'
tokenized_string = tokenizer.encode(sample_string)
print('tokened id: ', tokenized_string)
src_string = tokenizer.decode(tokenized_string)
print(src_string)

for t in tokenized_string:
    print(str(t) + ': ' + tokenizer.decode([t]))

BUFFER_SIZE = 6400
BATCH_SIZE = 64
num_train_examples = info.splits['train'].num_examples
num_test_examples = info.splits['test'].num_examples
print("Number of training examples: {}".format(num_train_examples))
print("Number of test examples: {}".format(num_test_examples))

train_dataset = train_dataset.shuffle(BUFFER_SIZE)
train_dataset = train_dataset.padded_batch(BATCH_SIZE, train_dataset.output_shapes)
test_dataset = test_dataset.padded_batch(BATCH_SIZE, test_dataset.output_shapes)

def get_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(tokenizer.vocab_size, 64),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(1, activation='sigmoid')
    ])
    return model

model = get_model()
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

import math
#from tensorflow import keras
#train_dataset = keras.preprocessing.sequence.pad_sequences(train_dataset, maxlen=BUFFER_SIZE)
history = model.fit(train_dataset,
                    epochs=2,
                    steps_per_epoch=(math.ceil(BUFFER_SIZE/BATCH_SIZE) - 90),
                    validation_data=test_dataset)
```
Train on 10 steps
Epoch 1/2
9/10 [==========================>...] - ETA: 3s - loss: 0.6955 - accuracy: 0.4479
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-111-8ddec076c096> in <module>
      6                     epochs=2,
      7                     steps_per_epoch=(math.ceil(BUFFER_SIZE/BATCH_SIZE) - 90),
----> 8                     validation_data=test_dataset)

/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    726         max_queue_size=max_queue_size,
    727         workers=workers,
--> 728         use_multiprocessing=use_multiprocessing)
    729
    730     def evaluate(self,

/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_arrays.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
    672         validation_steps=validation_steps,
    673         validation_freq=validation_freq,
--> 674         steps_name='steps_per_epoch')
    675
    676     def evaluate(self,

/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
    437                 validation_in_fit=True,
    438                 prepared_feed_values_from_dataset=(val_iterator is not None),
--> 439                 steps_name='validation_steps')
    440             if not isinstance(val_results, list):
    441                 val_results = [val_results]

/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
    174     if not is_dataset:
    175         num_samples_or_steps = _get_num_samples_or_steps(ins, batch_size,
--> 176                                                          steps_per_epoch)
    177     else:
    178         num_samples_or_steps = steps_per_epoch

/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_arrays.py in _get_num_samples_or_steps(ins, batch_size, steps_per_epoch)
    491         return steps_per_epoch
    492     return training_utils.check_num_samples(ins, batch_size, steps_per_epoch,
--> 493                                             'steps_per_epoch')
    494
    495

/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_utils.py in check_num_samples(ins, batch_size, steps, steps_name)
    422         raise ValueError('If ' + steps_name +
    423                          ' is set, the `batch_size` must be None.')
--> 424     if check_steps_argument(ins, steps, steps_name):
    425         return None
    426

/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_utils.py in check_steps_argument(input_data, steps, steps_name)
   1199         raise ValueError('When using {input_type} as input to a model, you should'
   1200                          ' specify the `{steps_name}` argument.'.format(
-> 1201                              input_type=input_type_str, steps_name=steps_name))
   1202         return True
   1203

ValueError: When using data tensors as input to a model, you should specify the `steps_per_epoch` argument.
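The traceback shows the check fails while preparing the validation pass (steps_name='validation_steps'): with a tf.data Dataset as validation_data, this Keras version cannot infer how many batches to evaluate, so validation_steps has to be passed explicitly. A minimal sketch, reusing the script's own variables:
```
import math

history = model.fit(train_dataset,
                    epochs=2,
                    steps_per_epoch=math.ceil(BUFFER_SIZE / BATCH_SIZE) - 90,
                    validation_data=test_dataset,
                    validation_steps=math.ceil(num_test_examples / BATCH_SIZE))
```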
Training on the Keras IMDB data: why is the prediction positive sentiment?
I'm learning binary classification of reviews with the IMDB dataset that ships with Keras, and one thing puzzles me: why is the prediction positive sentiment? The code:
```
from keras.datasets import imdb
from keras import models
from keras import layers
import numpy as np
import matplotlib.pyplot as plt

def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
        print('i=', i, 'results[i]=', results[i])
    return results

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

'''word_index = imdb.get_word_index()
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
'''

x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')

model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['acc'])
history = model.fit(partial_x_train, partial_y_train,
                    epochs=20, batch_size=512,
                    validation_data=(x_val, y_val))

history_dict = history.history
loss_value = history_dict['loss']
val_loss_value = history_dict['val_loss']
epochs = range(1, len(loss_value) + 1)

plt.plot(epochs, loss_value, 'bo', label='Training Loss')
plt.plot(epochs, val_loss_value, 'b', label='Validation Loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
```
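A hedged note on reading the output (the script above never actually calls predict): in keras.datasets.imdb the labels are 0 = negative and 1 = positive, so the single sigmoid unit outputs the probability that a review is positive, and a value above 0.5 is interpreted as positive sentiment. A minimal sketch:
```
# Inspect a few test predictions: values close to 1.0 mean "positive".
preds = model.predict(x_test[:5])
for p in preds:
    print('positive' if p[0] > 0.5 else 'negative', p[0])
```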
Training CIFAR-10 in TensorFlow fails with name 'train_data' is not defined
Training CIFAR-10 with a neural network:
```
init = tf.global_variables_initializer()
batch_size = 20
train_steps = 1000

with tf.Session() as sess:
    sess.run(init)  # note: init is created above but was never run in the original
    for i in range(train_steps):
        batch_data, batch_labels = train_data.next_batch(batch_size)
        loss_val, acc_val, _ = sess.run(
            [loss, accuracy, train_op],
            feed_dict={x: batch_data, y: batch_labels})
        if i % 500 == 0:
            print('[Train] Step:%d,loss:%4.5f,acc:%4.5f' % (i, loss_val, acc_val))
```
I get name 'train_data' is not defined and don't know how to fix it.
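The error simply means nothing named train_data was ever created before the loop; it needs to be an object exposing a next_batch method. A minimal sketch (all names here are illustrative, not from the original code), assuming the CIFAR-10 images and labels have already been loaded as NumPy arrays:
```
import numpy as np

class BatchFeeder:
    """Wraps arrays and hands out successive mini-batches."""
    def __init__(self, images, labels):
        self._images, self._labels = images, labels
        self._pos = 0

    def next_batch(self, batch_size):
        if self._pos + batch_size > len(self._images):
            self._pos = 0  # wrap around at the end of an epoch
        batch = (self._images[self._pos:self._pos + batch_size],
                 self._labels[self._pos:self._pos + batch_size])
        self._pos += batch_size
        return batch

# hypothetical arrays loaded elsewhere:
# train_data = BatchFeeder(train_images, train_labels)
```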
Implementing self-attention and precision / recall / F1-score under Keras?
Could someone take a look:
(1) Why are the precision, recall, and F1-score values never returned?
(2) Why does adding a self-attention layer before the CNN make the trained acc drop to around 0.78?
(A first-year grad student here; a detailed explanation would be much appreciated.)
```
import os  # used to check whether files exist
import numpy as np
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.callbacks import Callback
from sklearn.metrics import f1_score, precision_score, recall_score

maxlen = 380               # truncate sentences at 380 tokens
training_samples = 20000   # train on 20000 samples
validation_samples = 5000  # validate on 5000 samples
max_words = 10000          # only keep the 10000 most frequent words in the dataset

def dataProcess():
    imdb_dir = 'data/aclImdb'  # base path, opened repeatedly below
    # Process the training set
    train_dir = os.path.join(imdb_dir, 'train')
    train_labels = []
    train_texts = []
    for label_type in ['neg', 'pos']:
        dir_name = os.path.join(train_dir, label_type)
        for fname in os.listdir(dir_name):  # all file names in the directory
            if fname[-4:] == '.txt':
                f = open(os.path.join(dir_name, fname), 'r', encoding='utf8')
                train_texts.append(f.read())
                f.close()
                if label_type == 'neg':
                    train_labels.append(0)
                else:
                    train_labels.append(1)

    # Process the test set
    test_dir = os.path.join(imdb_dir, 'test')
    test_labels = []
    test_texts = []
    for label_type in ['neg', 'pos']:
        dir_name = os.path.join(test_dir, label_type)
        for fname in sorted(os.listdir(dir_name)):
            if fname[-4:] == '.txt':
                f = open(os.path.join(dir_name, fname), 'r', encoding='utf8')
                test_texts.append(f.read())
                f.close()
                if label_type == 'neg':
                    test_labels.append(0)
                else:
                    test_labels.append(1)

    # Tokenize the data and split into training and validation sets
    tokenizer = Tokenizer(num_words=max_words)
    tokenizer.fit_on_texts(train_texts)  # build the word index
    sequences = tokenizer.texts_to_sequences(train_texts)  # vectorize to integer indices
    word_index = tokenizer.word_index  # the index dictionary
    print('Found %s unique tokens.' % len(word_index))
    data = pad_sequences(sequences, maxlen=maxlen)
    train_labels = np.asarray(train_labels)  # convert the list to an array
    print('Shape of data tensor:', data.shape)
    print('Shape of label tensor:', train_labels.shape)
    indices = np.arange(data.shape[0])  # review order 0,1,2,3
    np.random.shuffle(indices)          # shuffled order, e.g. 3,1,2,0
    data = data[indices]
    train_labels = train_labels[indices]
    x_train = data[:training_samples]
    y_train = train_labels[:training_samples]
    x_val = data[training_samples: training_samples + validation_samples]
    y_val = train_labels[training_samples: training_samples + validation_samples]

    # The test set needs to be vectorized the same way
    test_sequences = tokenizer.texts_to_sequences(test_texts)
    x_test = pad_sequences(test_sequences, maxlen=maxlen)
    y_test = np.asarray(test_labels)

    return x_train, y_train, x_val, y_val, x_test, y_test, word_index

embedding_dim = 100  # 100-dimensional features

# Build, from the pretrained GloVe embedding file, an embedding matrix
# that can be loaded into the Embedding layer
def load_glove(word_index):
    embedding_file = 'data/glove.6B'
    embeddings_index = {}
    f = open(os.path.join(embedding_file, 'glove.6B.100d.txt'), 'r', encoding='utf8')
    for line in f:
        values = line.split()
        word = values[0]
        coefs = np.asarray(values[1:], dtype='float32')
        embeddings_index[word] = coefs
    f.close()
    # Build the matrix of shape (max_words, embedding_dim)
    embedding_matrix = np.zeros((max_words, embedding_dim))
    for word, i in word_index.items():  # words and indices in the dictionary
        if i >= max_words:
            continue
        embedding_vector = embeddings_index.get(word)
        if embedding_vector is not None:
            embedding_matrix[i] = embedding_vector
    return embedding_matrix

if __name__ == '__main__':
    x_train, y_train, x_val, y_val, x_test, y_test, word_index = dataProcess()
    embedding_matrix = load_glove(word_index)
    # The embedding matrix could be saved here for later fine-tuning

    from keras.models import Sequential
    from keras.layers.core import Dense, Dropout, Activation, Flatten
    from keras.layers.recurrent import LSTM
    from keras.layers import Embedding
    from keras.layers import Bidirectional
    from keras.layers import Conv1D, MaxPooling1D
    import keras
    from keras_self_attention import SeqSelfAttention

    model = Sequential()
    model.add(Embedding(max_words, embedding_dim, input_length=maxlen))
    model.add(SeqSelfAttention(attention_activation='sigmoid'))  # note: the original post had the typo 'sigmod'
    model.add(Conv1D(filters=64, kernel_size=5, padding='same', activation='relu'))
    model.add(MaxPooling1D(pool_size=4))
    model.add(Dropout(0.25))
    model.add(Bidirectional(LSTM(64, activation='tanh', dropout=0.2, recurrent_dropout=0.2)))
    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(1, activation='sigmoid'))
    model.summary()

    model.layers[0].set_weights([embedding_matrix])
    model.layers[0].trainable = False

    model.compile(optimizer='rmsprop',
                  loss='binary_crossentropy',
                  metrics=['acc'])

    class Metrics(Callback):
        def on_train_begin(self, logs={}):
            self.val_f1s = []
            self.val_recalls = []
            self.val_precisions = []

        def on_epoch_end(self, epoch, logs={}):
            val_predict = (np.asarray(self.model.predict(self.validation_data[0]))).round()
            val_targ = self.validation_data[1]
            _val_f1 = f1_score(val_targ, val_predict)
            _val_recall = recall_score(val_targ, val_predict)
            _val_precision = precision_score(val_targ, val_predict)
            self.val_f1s.append(_val_f1)
            self.val_recalls.append(_val_recall)
            self.val_precisions.append(_val_precision)
            return

    metrics = Metrics()
    history = model.fit(x_train, y_train,
                        epochs=10, batch_size=32,
                        validation_data=(x_val, y_val),
                        callbacks=[metrics])
    model.save_weights('pre_trained_glove_model.h5')  # save the result
```
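On question (1), a hedged sketch rather than a confirmed fix: the original callback computes the scores but never prints or logs them, and on newer Keras/TensorFlow versions self.validation_data is no longer populated on callbacks, so the sklearn calls would fail silently or crash. Passing the validation arrays into the callback directly sidesteps both issues:
```
from keras.callbacks import Callback
from sklearn.metrics import f1_score, precision_score, recall_score

# A sketch, not the original: hold the validation data explicitly and print
# the scores each epoch, instead of relying on self.validation_data.
class MetricsExplicit(Callback):
    def __init__(self, x_val, y_val):
        super(MetricsExplicit, self).__init__()
        self.x_val, self.y_val = x_val, y_val

    def on_epoch_end(self, epoch, logs=None):
        # round the sigmoid outputs to hard 0/1 predictions
        val_predict = (self.model.predict(self.x_val) > 0.5).astype('int32').ravel()
        print('epoch %d  P: %.4f  R: %.4f  F1: %.4f' % (
            epoch,
            precision_score(self.y_val, val_predict),
            recall_score(self.y_val, val_predict),
            f1_score(self.y_val, val_predict)))

# usage: callbacks=[MetricsExplicit(x_val, y_val)]
```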