TensorFlow: loading a trained model for inference gives different predictions for the same image?

I have recently been running DeepLab v1. The training program runs fine, but when testing with the trained model I found that the same image gives different predictions on different runs. Does anyone know what is going on?
I load the model with saver.restore(). The code is as follows:

# Standard imports; DeepLabLFOVModel, decode_labels, get_arguments,
# IMG_MEAN and load come from the repo's own modules.
import os

import numpy as np
import tensorflow as tf
from PIL import Image


def main():
    """Create the model and start the inference process."""
    args = get_arguments()

    # Prepare image.
    img = tf.image.decode_jpeg(tf.read_file(args.img_path), channels=3)
    # Convert RGB to BGR.
    img_r, img_g, img_b = tf.split(value=img, num_or_size_splits=3, axis=2)
    img = tf.cast(tf.concat(axis=2, values=[img_b, img_g, img_r]), dtype=tf.float32)
    # Extract mean.
    img -= IMG_MEAN

    # Create network.
    net = DeepLabLFOVModel()
    # Which variables to load.
    trainable = tf.trainable_variables()
    # Predictions.
    pred = net.preds(tf.expand_dims(img, dim=0))

    # Set up TF session and initialize variables.
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    sess = tf.Session(config=config)
    sess.run(tf.global_variables_initializer())

    # Load weights.
    saver = tf.train.Saver(var_list=trainable)
    load(saver, sess, args.model_weights)

    # Perform inference.
    preds = sess.run([pred])
    print(preds)

    if not os.path.exists(args.save_dir):
        os.makedirs(args.save_dir)

    msk = decode_labels(np.array(preds)[0, 0, :, :, 0])
    im = Image.fromarray(msk)
    im.save(args.save_dir + 'mask1.png')
    print('The output file has been saved to {}'.format(
        args.save_dir + 'mask1.png'))


if __name__ == '__main__':
    main()
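
One thing that stands out after cleaning this up: trainable = tf.trainable_variables() is captured before net.preds() runs, while net.preds() (via _create_network, shown further below) creates a fresh classification-layer w and b. Those two variables are therefore not in the Saver's var_list and never get restored; they keep a new random initialization on every run, which is exactly the kind of thing that makes repeated runs disagree. A minimal diagnostic sketch for checking which graph variables are actually present in the checkpoint, assuming ckpt_prefix is the real checkpoint prefix (a hypothetical path like './snapshots/model.ckpt-20000'):

```
def report_unrestored_variables(ckpt_prefix):
    """Print graph variables that are missing from the checkpoint file."""
    reader = tf.train.NewCheckpointReader(ckpt_prefix)
    ckpt_vars = set(reader.get_variable_to_shape_map().keys())
    for v in tf.global_variables():
        name = v.name.split(':')[0]  # strip the ':0' output suffix
        if name not in ckpt_vars:
            print('NOT in checkpoint (stays at random init):', name)
```

Any variable this prints keeps whatever tf.global_variables_initializer() gave it, so it is a plausible source of the run-to-run differences.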

where load() is:

def load(saver, sess, ckpt_path):
    '''Load trained weights.
    Args:
      saver: TensorFlow saver object.
      sess: TensorFlow session.
      ckpt_path: path to checkpoint file with parameters.
    '''
    ckpt = tf.train.get_checkpoint_state(ckpt_path)
    if ckpt and ckpt.model_checkpoint_path:
        saver.restore(sess, ckpt.model_checkpoint_path)
        print("Restored model parameters from {}".format(ckpt_path))

The DeepLabLFOVModel class is as follows:

class DeepLabLFOVModel(object):
    """DeepLab-LargeFOV model with atrous convolution and bilinear upsampling.
    This class implements a multi-layer convolutional neural network for semantic image segmentation task.
    This is the same as the model described in this paper: https://arxiv.org/abs/1412.7062 - please look
    there for details.
    """

    def __init__(self, weights_path=None):
        """Create the model.
        Args:
          weights_path: the path to the ckpt file with dictionary of weights from .caffemodel.
        """
        self.variables = self._create_variables(weights_path)

    def _create_variables(self, weights_path):
        """Create all variables used by the network.
        This allows to share them between multiple calls 
        to the loss function.
        Args:
          weights_path: the path to the ckpt file with dictionary of weights from .caffemodel. 
                        If none, initialise all variables randomly.
        Returns:
          A dictionary with all variables.
        """
        var = list()
        index = 0

        if weights_path is not None:
            with open(weights_path, "rb") as f:
                weights = cPickle.load(f)  # Load pre-trained weights.
                for name, shape in net_skeleton:
                    var.append(tf.Variable(weights[name],
                                           name=name))
                del weights
        else:
            # Initialise all weights randomly with the Xavier scheme,
            # and all biases to 0's.
            for name, shape in net_skeleton:
                if "/w" in name:  # Weight filter.
                    w = create_variable(name, list(shape))
                    var.append(w)
                else:
                    b = create_bias_variable(name, list(shape))
                    var.append(b)
        return var

    def _create_network(self, input_batch, keep_prob):
        """Construct DeepLab-LargeFOV network.
        Args:
          input_batch: batch of pre-processed images.
          keep_prob: probability of keeping neurons intact.
        Returns:
          A downsampled segmentation mask. 
        """
        current = input_batch

        v_idx = 0  # Index variable.

        # Last block is the classification layer.
        for b_idx in xrange(len(dilations) - 1):
            for l_idx, dilation in enumerate(dilations[b_idx]):
                w = self.variables[v_idx * 2]
                b = self.variables[v_idx * 2 + 1]
                if dilation == 1:
                    conv = tf.nn.conv2d(current, w, strides=[
                                        1, 1, 1, 1], padding='SAME')
                else:
                    conv = tf.nn.atrous_conv2d(
                        current, w, dilation, padding='SAME')
                current = tf.nn.relu(tf.nn.bias_add(conv, b))
                v_idx += 1
            # Optional pooling and dropout after each block.
            if b_idx < 3:
                current = tf.nn.max_pool(current,
                                         ksize=[1, ks, ks, 1],
                                         strides=[1, 2, 2, 1],
                                         padding='SAME')
            elif b_idx == 3:
                current = tf.nn.max_pool(current,
                                         ksize=[1, ks, ks, 1],
                                         strides=[1, 1, 1, 1],
                                         padding='SAME')
            elif b_idx == 4:
                current = tf.nn.max_pool(current,
                                         ksize=[1, ks, ks, 1],
                                         strides=[1, 1, 1, 1],
                                         padding='SAME')
                current = tf.nn.avg_pool(current,
                                         ksize=[1, ks, ks, 1],
                                         strides=[1, 1, 1, 1],
                                         padding='SAME')
            elif b_idx <= 6:
                current = tf.nn.dropout(current, keep_prob=keep_prob)

        # Classification layer; no ReLU.
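        # NOTE: w and b below are brand-new variables (random init) rather
        # than entries from self.variables. They are created only when
        # preds()/loss() builds the graph, i.e. after main() has already
        # captured tf.trainable_variables() for the Saver's var_list.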
        # w = self.variables[v_idx * 2]
        w = create_variable(name='w', shape=[1, 1, 1024, n_classes])
        # b = self.variables[v_idx * 2 + 1]
        b = create_bias_variable(name='b', shape=[n_classes])
        conv = tf.nn.conv2d(current, w, strides=[1, 1, 1, 1], padding='SAME')
        current = tf.nn.bias_add(conv, b)

        return current

    def prepare_label(self, input_batch, new_size):
        """Resize masks and perform one-hot encoding.
        Args:
          input_batch: input tensor of shape [batch_size H W 1].
          new_size: a tensor with new height and width.
        Returns:
          Outputs a tensor of shape [batch_size h w n_classes]
          with last dimension comprised of 0's and 1's only.
        """
        with tf.name_scope('label_encode'):
            # As labels are integer numbers, need to use NN interp.
            input_batch = tf.image.resize_nearest_neighbor(
                input_batch, new_size)
            # Reducing the channel dimension.
            input_batch = tf.squeeze(input_batch, squeeze_dims=[3])
            input_batch = tf.one_hot(input_batch, depth=n_classes)
        return input_batch

    def preds(self, input_batch):
        """Create the network and run inference on the input batch.
        Args:
          input_batch: batch of pre-processed images.
        Returns:
          Argmax over the predictions of the network of the same shape as the input.
        """
        raw_output = self._create_network(
            tf.cast(input_batch, tf.float32), keep_prob=tf.constant(1.0))
        raw_output = tf.image.resize_bilinear(
            raw_output, tf.shape(input_batch)[1:3, ])
        raw_output = tf.argmax(raw_output, dimension=3)
        raw_output = tf.expand_dims(raw_output, dim=3)  # Create 4D-tensor.
        return tf.cast(raw_output, tf.uint8)

    def loss(self, img_batch, label_batch):
        """Create the network, run inference on the input batch and compute loss.
        Args:
          img_batch: batch of pre-processed images.
          label_batch: batch of corresponding ground-truth masks.
        Returns:
          Pixel-wise softmax loss.
        """
        raw_output = self._create_network(
            tf.cast(img_batch, tf.float32), keep_prob=tf.constant(0.5))
        prediction = tf.reshape(raw_output, [-1, n_classes])

        # Need to resize labels and convert using one-hot encoding.
        label_batch = self.prepare_label(
            label_batch, tf.stack(raw_output.get_shape()[1:3]))
        gt = tf.reshape(label_batch, [-1, n_classes])

        # Pixel-wise softmax loss.
        loss = tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=gt)
        reduced_loss = tf.reduce_mean(loss)

        return reduced_loss
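
Since preds() builds the graph with keep_prob=1.0, there is no dropout at inference, so two sess.run(pred) calls inside the same session should be bit-identical. That gives a quick way to narrow the problem down: compare within one session versus across script runs. A sketch, assuming sess and pred are set up as in main() above:

```
p1 = sess.run(pred)
p2 = sess.run(pred)
print('same session, runs identical:', np.array_equal(p1, p2))

# Fingerprint the weights after restoring, so two separate script runs
# can be compared; variables that were never restored will differ.
for v in tf.global_variables():
    print(v.name, sess.run(tf.reduce_sum(tf.abs(v))))
```

If the same-session runs match but the fingerprints differ between script runs, the culprit is variables that are initialized but never restored, not the inference graph itself.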

In theory loading the model should work, so I don't understand why the results differ.
Input images: [image] [image]
Predicted results: [image] [image]
The two runs produce different results, and neither matches the results computed by the model when it was saved.
I am using this person's code from GitHub:
https://github.com/minar09/DeepLab-LFOV-TensorFlow
This is urgent. Does anyone know what the problem is?

Csdn user default icon
上传中...
上传图片
插入图片
抄袭、复制答案,以达到刷声望分或其他目的的行为,在CSDN问答是严格禁止的,一经发现立刻封号。是时候展现真正的技术了!
其他相关推荐
tensorflow训练完模型直接测试和导入模型进行测试的结果不同,一个很好,一个略差,这是为什么?
在tensorflow训练完模型,我直接采用同一个session进行测试,得到结果较好,但是采用训练完保存的模型,进行重新载入进行测试,结果较差,不懂是为什么会出现这样的结果。注:测试数据是一样的。以下是模型结果: 训练集:loss:0.384,acc:0.931. 验证集:loss:0.212,acc:0.968. 训练完在同一session内的测试集:acc:0.96。导入保存的模型进行测试:acc:0.29 ``` def create_model(hps): global_step = tf.Variable(tf.zeros([], tf.float64), name = 'global_step', trainable = False) scale = 1.0 / math.sqrt(hps.num_embedding_size + hps.num_lstm_nodes[-1]) / 3.0 print(type(scale)) gru_init = tf.random_normal_initializer(-scale, scale) with tf.variable_scope('Bi_GRU_nn', initializer = gru_init): for i in range(hps.num_lstm_layers): cell_bw = tf.contrib.rnn.GRUCell(hps.num_lstm_nodes[i], activation = tf.nn.relu, name = 'cell-bw') cell_bw = tf.contrib.rnn.DropoutWrapper(cell_bw, output_keep_prob = dropout_keep_prob) cell_fw = tf.contrib.rnn.GRUCell(hps.num_lstm_nodes[i], activation = tf.nn.relu, name = 'cell-fw') cell_fw = tf.contrib.rnn.DropoutWrapper(cell_fw, output_keep_prob = dropout_keep_prob) rnn_outputs, _ = tf.nn.bidirectional_dynamic_rnn(cell_bw, cell_fw, inputs, dtype=tf.float32) embeddedWords = tf.concat(rnn_outputs, 2) finalOutput = embeddedWords[:, -1, :] outputSize = hps.num_lstm_nodes[-1] * 2 # 因为是双向LSTM,最终的输出值是fw和bw的拼接,因此要乘以2 last = tf.reshape(finalOutput, [-1, outputSize]) # reshape成全连接层的输入维度 last = tf.layers.batch_normalization(last, training = is_training) fc_init = tf.uniform_unit_scaling_initializer(factor = 1.0) with tf.variable_scope('fc', initializer = fc_init): fc1 = tf.layers.dense(last, hps.num_fc_nodes, name = 'fc1') fc1_batch_normalization = tf.layers.batch_normalization(fc1, training = is_training) fc_activation = tf.nn.relu(fc1_batch_normalization) logits = tf.layers.dense(fc_activation, hps.num_classes, name = 'fc2') with tf.name_scope('metrics'): softmax_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits = logits, labels = tf.argmax(outputs, 1)) loss = tf.reduce_mean(softmax_loss) # [0, 1, 5, 4, 2] ->argmax:2 因为在第二个位置上是最大的 y_pred = tf.argmax(tf.nn.softmax(logits), 1, output_type = tf.int64, name = 'y_pred') # 计算准确率,看看算对多少个 correct_pred = tf.equal(tf.argmax(outputs, 1), y_pred) # tf.cast 将数据转换成 tf.float32 类型 accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) with tf.name_scope('train_op'): tvar = tf.trainable_variables() for var in tvar: print('variable name: %s' % (var.name)) grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvar), hps.clip_lstm_grads) optimizer = tf.train.AdamOptimizer(hps.learning_rate) train_op = optimizer.apply_gradients(zip(grads, tvar), global_step) # return((inputs, outputs, is_training), (loss, accuracy, y_pred), (train_op, global_step)) return((inputs, outputs), (loss, accuracy, y_pred), (train_op, global_step)) placeholders, metrics, others = create_model(hps) content, labels = placeholders loss, accuracy, y_pred = metrics train_op, global_step = others def val_steps(sess, x_batch, y_batch, writer = None): loss_val, accuracy_val = sess.run([loss,accuracy], feed_dict = {inputs: x_batch, outputs: y_batch, is_training: hps.val_is_training, dropout_keep_prob: 1.0}) return loss_val, accuracy_val loss_summary = tf.summary.scalar('loss', loss) accuracy_summary = tf.summary.scalar('accuracy', accuracy) # 将所有的变量都集合起来 merged_summary = tf.summary.merge_all() # 用于test测试的summary merged_summary_test = tf.summary.merge([loss_summary, accuracy_summary]) LOG_DIR = '.' 
run_label = 'run_Bi-GRU_Dropout_tensorboard' run_dir = os.path.join(LOG_DIR, run_label) if not os.path.exists(run_dir): os.makedirs(run_dir) train_log_dir = os.path.join(run_dir, timestamp, 'train') test_los_dir = os.path.join(run_dir, timestamp, 'test') if not os.path.exists(train_log_dir): os.makedirs(train_log_dir) if not os.path.join(test_los_dir): os.makedirs(test_los_dir) # saver得到的文件句柄,可以将文件训练的快照保存到文件夹中去 saver = tf.train.Saver(tf.global_variables(), max_to_keep = 5) # train 代码 init_op = tf.global_variables_initializer() train_keep_prob_value = 0.2 test_keep_prob_value = 1.0 # 由于如果按照每一步都去计算的话,会很慢,所以我们规定每100次存储一次 output_summary_every_steps = 100 num_train_steps = 1000 # 每隔多少次保存一次 output_model_every_steps = 500 # 测试集测试 test_model_all_steps = 4000 i = 0 session_conf = tf.ConfigProto( gpu_options = tf.GPUOptions(allow_growth=True), allow_soft_placement = True, log_device_placement = False) with tf.Session(config = session_conf) as sess: sess.run(init_op) # 将训练过程中,将loss,accuracy写入文件里,后面是目录和计算图,如果想要在tensorboard中显示计算图,就想sess.graph加上 train_writer = tf.summary.FileWriter(train_log_dir, sess.graph) # 同样将测试的结果保存到tensorboard中,没有计算图 test_writer = tf.summary.FileWriter(test_los_dir) batches = batch_iter(list(zip(x_train, y_train)), hps.batch_size, hps.num_epochs) for batch in batches: train_x, train_y = zip(*batch) eval_ops = [loss, accuracy, train_op, global_step] should_out_summary = ((i + 1) % output_summary_every_steps == 0) if should_out_summary: eval_ops.append(merged_summary) # 那三个占位符输进去 # 计算loss, accuracy, train_op, global_step的图 eval_ops.append(merged_summary) outputs_train = sess.run(eval_ops, feed_dict={ inputs: train_x, outputs: train_y, dropout_keep_prob: train_keep_prob_value, is_training: hps.train_is_training }) loss_train, accuracy_train = outputs_train[0:2] if should_out_summary: # 由于我们想在100steps之后计算summary,所以上面 should_out_summary = ((i + 1) % output_summary_every_steps == 0)成立, # 即为真True,那么我们将训练的内容放入eval_ops的最后面了,因此,我们想获得summary的结果得在eval_ops_results的最后一个 train_summary_str = outputs_train[-1] # 将获得的结果写训练tensorboard文件夹中,由于训练从0开始,所以这里加上1,表示第几步的训练 train_writer.add_summary(train_summary_str, i + 1) test_summary_str = sess.run([merged_summary_test], feed_dict = {inputs: x_dev, outputs: y_dev, dropout_keep_prob: 1.0, is_training: hps.val_is_training })[0] test_writer.add_summary(test_summary_str, i + 1) current_step = tf.train.global_step(sess, global_step) if (i + 1) % 100 == 0: print("Step: %5d, loss: %3.3f, accuracy: %3.3f" % (i + 1, loss_train, accuracy_train)) # 500个batch校验一次 if (i + 1) % 500 == 0: loss_eval, accuracy_eval = val_steps(sess, x_dev, y_dev) print("Step: %5d, val_loss: %3.3f, val_accuracy: %3.3f" % (i + 1, loss_eval, accuracy_eval)) if (i + 1) % output_model_every_steps == 0: path = saver.save(sess,os.path.join(out_dir, 'ckp-%05d' % (i + 1))) print("Saved model checkpoint to {}\n".format(path)) print('model saved to ckp-%05d' % (i + 1)) if (i + 1) % test_model_all_steps == 0: # test_loss, test_acc, all_predictions= sess.run([loss, accuracy, y_pred], feed_dict = {inputs: x_test, outputs: y_test, dropout_keep_prob: 1.0}) test_loss, test_acc, all_predictions= sess.run([loss, accuracy, y_pred], feed_dict = {inputs: x_test, outputs: y_test, is_training: hps.val_is_training, dropout_keep_prob: 1.0}) print("test_loss: %3.3f, test_acc: %3.3d" % (test_loss, test_acc)) batches = batch_iter(list(x_test), 128, 1, shuffle=False) # Collect the predictions here all_predictions = [] for x_test_batch in batches: batch_predictions = sess.run(y_pred, {inputs: x_test_batch, is_training: 
hps.val_is_training, dropout_keep_prob: 1.0}) all_predictions = np.concatenate([all_predictions, batch_predictions]) correct_predictions = float(sum(all_predictions == y.flatten())) print("Total number of test examples: {}".format(len(y_test))) print("Accuracy: {:g}".format(correct_predictions/float(len(y_test)))) test_y = y_test.argmax(axis = 1) #生成混淆矩阵 conf_mat = confusion_matrix(test_y, all_predictions) fig, ax = plt.subplots(figsize = (4,2)) sns.heatmap(conf_mat, annot=True, fmt = 'd', xticklabels = cat_id_df.category_id.values, yticklabels = cat_id_df.category_id.values) font_set = FontProperties(fname = r"/usr/share/fonts/truetype/wqy/wqy-microhei.ttc", size=15) plt.ylabel(u'实际结果',fontsize = 18,fontproperties = font_set) plt.xlabel(u'预测结果',fontsize = 18,fontproperties = font_set) plt.savefig('./test.png') print('accuracy %s' % accuracy_score(all_predictions, test_y)) print(classification_report(test_y, all_predictions,target_names = cat_id_df['category_name'].values)) print(classification_report(test_y, all_predictions)) i += 1 ``` 以上的模型代码,请求各位大神帮我看看,为什么出现这样的结果?
tensorflow模型载入失败
载入模型的时候不知道怎么出现这种报错 DataLossError (see above for traceback): Unable to open table file .\eye-model\eye_kaggle.ckpt-192.meta: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
tensorflow CNN训练图片分类的时候,模型训练不出来,准确率0.1(分了十类),模型失效,是什么原因?
``` def compute_accuracy(v_xs, v_ys): global prediction y_pre = sess.run(prediction, feed_dict={xs: v_xs, keep_prob: 1}) correct_prediction = tf.equal(tf.argmax(y_pre,1), tf.argmax(v_ys,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) result = sess.run(accuracy, feed_dict={xs: v_xs, ys: v_ys, keep_prob: 1}) return result def weight_variable(shape): initial = tf.truncated_normal(shape, stddev=0.1) return tf.Variable(initial) def bias_variable(shape): initial = tf.constant(0.1, shape=shape) return tf.Variable(initial) def conv2d(x, W): # stride [1, x_movement, y_movement, 1] # Must have strides[0] = strides[3] = 1 return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME') def max_pool_2x2(x): # stride [1, x_movement, y_movement, 1] return tf.nn.max_pool(x, ksize=[1,4,4,1], strides=[1,4,4,1], padding='SAME') # define placeholder for inputs to network xs = tf.placeholder(tf.float32, [None, 65536])/255. # 256x256 ys = tf.placeholder(tf.float32, [None, 10]) keep_prob = tf.placeholder(tf.float32) x_image = tf.reshape(xs, [-1, 256, 256, 1]) # print(x_image.shape) # [n_samples, 28,28,1] ## conv1 layer ## W_conv1 = weight_variable([3,3, 1,64]) # patch 5x5, in size 1, out size 32 b_conv1 = bias_variable([64]) h_conv1 = tf.nn.elu(conv2d(x_image, W_conv1) + b_conv1) # output size 28x28x32 h_pool1 = tf.nn.max_pool(h_conv1, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME') # output size 14x14x32 ## conv2 layer ## W_conv2 = weight_variable([3,3, 64, 128]) # patch 5x5, in size 32, out size 64 b_conv2 = bias_variable([128]) h_conv2 = tf.nn.elu(conv2d(h_pool1, W_conv2) + b_conv2) # output size 14x14x64 h_pool2 = max_pool_2x2(h_conv2) # output size 7x7x64 ## conv3 layer ## W_conv3 = weight_variable([3,3, 128, 256]) # patch 5x5, in size 32, out size 64 b_conv3 = bias_variable([256]) h_conv3 = tf.nn.elu(conv2d(h_pool2, W_conv3) + b_conv3) # output size 14x14x64 h_pool3 = max_pool_2x2(h_conv3) ## conv4 layer ## W_conv4 = weight_variable([3,3, 256, 512]) # patch 5x5, in size 32, out size 64 b_conv4 = bias_variable([512]) h_conv4 = tf.nn.elu(conv2d(h_pool3, W_conv4) + b_conv4) # output size 14x14x64 h_pool4 = max_pool_2x2(h_conv4) # ## conv5 layer ## # W_conv5 = weight_variable([3,3, 512, 512]) # patch 5x5, in size 32, out size 64 # b_conv5 = bias_variable([512]) # h_conv5 = tf.nn.relu(conv2d(h_pool3, W_conv4) + b_conv4) # output size 14x14x64 # h_pool5 = max_pool_2x2(h_conv4) ## fc1 layer ## W_fc1 = weight_variable([2*2*512, 128]) b_fc1 = bias_variable([128]) # [n_samples, 7, 7, 64] ->> [n_samples, 7*7*64] h_pool4_flat = tf.reshape(h_pool4, [-1, 2*2*512]) h_fc1 = tf.nn.elu(tf.matmul(h_pool4_flat, W_fc1) + b_fc1) h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob) ## fc2 layer ## W_fc2 = weight_variable([128, 10]) b_fc2 = bias_variable([10]) prediction = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2) # 定义优化器和训练op loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=ys, logits=prediction)) train_step = tf.train.RMSPropOptimizer((1e-3)).minimize(loss) correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(ys, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # 用于保存和载入模型 saver = tf.train.Saver() def int2onehot(train_batch_ys): num_labels = train_batch_ys.shape[0] num_classes=10 index_offset = np.arange(num_labels) * num_classes labels_one_hot = np.zeros((num_labels, num_classes),dtype=np.float32) labels_one_hot.flat[index_offset + train_batch_ys.ravel()] = 1 return labels_one_hot train_label_lists, train_data_lists, train_fname_lists = 
read_tfrecords(train_tfrecord_file) iterations = 100 with tf.Session() as sess: sess.run(tf.global_variables_initializer()) # 执行训练迭代 for it in range(iterations): # 这里的关键是要把输入数组转为np.array for i in range(200): train_label_list = train_label_lists[i] train_data_list= train_data_lists[i] train_name_list = train_fname_lists[i] #print("shape of train_data_list: {}\tshape of train_label_list: {}".format(train_data_list.shape, train_label_list.shape)) #print('该批文件名:',train_name_list) print('该批标签:',train_label_list) # 计算有多少类图片 #num_classes = len(set(train_label_list)) #print("num_classes:",num_classes) train_batch_xs = train_data_list train_batch_xs = np.reshape(train_batch_xs, (-1, 65536)) train_batch_ys = train_label_list train_batch_ys = int2onehot(train_batch_ys) #print('第'+str(i)+'批-----------') print("连接层1之后----------------------------------------") for i in range(80): print("元素"+str(i)+":",sess.run(tf.reduce_mean(sess.run(h_fc1_drop,feed_dict={xs: train_batch_xs, ys: train_batch_ys, keep_prob: 0.5})[i].shape))) print("元素"+str(i)+":",sess.run(h_fc1_drop,feed_dict={xs: train_batch_xs, ys: train_batch_ys, keep_prob: 0.5})[i]) print("连接层2之后----------------------------------------") for i in range(80): print("元素"+str(i)+":",sess.run(tf.reduce_mean(sess.run(prediction,feed_dict={xs: train_batch_xs, ys: train_batch_ys, keep_prob: 0.5})[i].shape))) print("元素"+str(i)+":",sess.run(prediction,feed_dict={xs: train_batch_xs, ys: train_batch_ys, keep_prob: 0.5})[i]) #loss.run(feed_dict={xs: train_batch_xs, ys: train_batch_ys, keep_prob: 0.5}) train_step.run(feed_dict={xs: train_batch_xs, ys: train_batch_ys, keep_prob: 0.5}) time.sleep(7) # 每完成五次迭代,判断准确度是否已达到100%,达到则退出迭代循环 iterate_accuracy = 0 if it%5 == 0: iterate_accuracy = accuracy.eval(feed_dict={xs: train_batch_xs, ys: train_batch_ys, keep_prob: 0.5}) print ('iteration %d: accuracy %s' % (it, iterate_accuracy)) if iterate_accuracy >= 1: break; print ('完成训练!') ```
tensorflow重载模型继续训练得到的loss比原模型继续训练得到的loss大,是什么原因??
我使用tensorflow训练了一个模型,在第10个epoch时保存模型,然后在一个新的文件里重载模型继续训练,结果我发现重载的模型在第一个epoch的loss比原模型在epoch=11的loss要大,我感觉既然是重载了原模型,那么重载模型训练的第一个epoch应该是和原模型训练的第11个epoch相等的,一直找不到问题或者自己逻辑的问题,希望大佬能指点迷津。源代码和重载模型的代码如下: ``` 原代码: from tensorflow.examples.tutorials.mnist import input_data import tensorflow as tf import os import numpy as np os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' mnist = input_data.read_data_sets("./",one_hot=True) tf.reset_default_graph() ###定义数据和标签 n_inputs = 784 n_classes = 10 X = tf.placeholder(tf.float32,[None,n_inputs],name='X') Y = tf.placeholder(tf.float32,[None,n_classes],name='Y') ###定义网络结构 n_hidden_1 = 256 n_hidden_2 = 256 layer_1 = tf.layers.dense(inputs=X,units=n_hidden_1,activation=tf.nn.relu,kernel_regularizer=tf.contrib.layers.l2_regularizer(0.01)) layer_2 = tf.layers.dense(inputs=layer_1,units=n_hidden_2,activation=tf.nn.relu,kernel_regularizer=tf.contrib.layers.l2_regularizer(0.01)) outputs = tf.layers.dense(inputs=layer_2,units=n_classes,name='outputs') pred = tf.argmax(tf.nn.softmax(outputs,axis=1),axis=1) print(pred.name) err = tf.count_nonzero((pred - tf.argmax(Y,axis=1))) cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=outputs,labels=Y),name='cost') print(cost.name) ###定义优化器 learning_rate = 0.001 optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost,name='OP') saver = tf.train.Saver() checkpoint = 'softmax_model/dense_model.cpkt' ###训练 batch_size = 100 training_epochs = 11 with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for epoch in range(training_epochs): batch_num = int(mnist.train.num_examples / batch_size) epoch_cost = 0 sumerr = 0 for i in range(batch_num): batch_x,batch_y = mnist.train.next_batch(batch_size) c,e = sess.run([cost,err],feed_dict={X:batch_x,Y:batch_y}) _ = sess.run(optimizer,feed_dict={X:batch_x,Y:batch_y}) epoch_cost += c / batch_num sumerr += e / mnist.train.num_examples if epoch == (training_epochs - 1): print('batch_cost = ',c) if epoch == (training_epochs - 2): saver.save(sess, checkpoint) print('test_error = ',sess.run(cost, feed_dict={X: mnist.test.images, Y: mnist.test.labels})) ``` ``` 重载模型的代码: from tensorflow.examples.tutorials.mnist import input_data import tensorflow as tf import os os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' mnist = input_data.read_data_sets("./",one_hot=True) #one_hot=True指对样本标签进行独热编码 file_path = 'softmax_model/dense_model.cpkt' saver = tf.train.import_meta_graph(file_path + '.meta') graph = tf.get_default_graph() X = graph.get_tensor_by_name('X:0') Y = graph.get_tensor_by_name('Y:0') cost = graph.get_operation_by_name('cost').outputs[0] train_op = graph.get_operation_by_name('OP') training_epoch = 10 learning_rate = 0.001 batch_size = 100 with tf.Session() as sess: saver.restore(sess,file_path) print('test_cost = ',sess.run(cost, feed_dict={X: mnist.test.images, Y: mnist.test.labels})) for epoch in range(training_epoch): batch_num = int(mnist.train.num_examples / batch_size) epoch_cost = 0 for i in range(batch_num): batch_x, batch_y = mnist.train.next_batch(batch_size) c = sess.run(cost, feed_dict={X: batch_x, Y: batch_y}) _ = sess.run(train_op, feed_dict={X: batch_x, Y: batch_y}) epoch_cost += c / batch_num print(epoch_cost) ``` 值得注意的是,我在原模型和重载模型里都计算了测试集的cost,两者的结果是一致的。说明参数载入应该是对的
R语言加载xlsx包出错无法载入
R语言加载xlsx包出错,按照搜索过的教程重新下载了java也设置了环境,但依旧无法加载xlsx包 Error: package or namespace load failed for 'xlsx': .onLoad failed in loadNamespace() for 'rJava', details: call: inDL(x, as.logical(local), as.logical(now), ...) error: 无法载入共享目标对象‘D:/R/R-3.6.0/library/rJava/libs/i386/rJava.dll’:: LoadLibrary failure: %1 不是有效的 Win32 应用程序。 错误: 载入失败 停止执行 R和java检查过了都是64位的,请各位大神看看是怎么回事?万分感激!! [图片说明](https://img-ask.csdn.net/upload/202002/15/1581737567_471063.png)
tensorflow inceptionv3
``` #8.4使用inception-v3做各种图像识别 import tensorflow as tf import os import numpy as np import re from PIL import Image import matplotlib.pyplot as plt class NodeLookup(object): def __init__(self): label_lookup_path = 'inception_model/imagenet_2012_challenge_label_map_proto.pbtxt' uid_lookup_path = 'inception_model/imagenet_sysnet_to_human_label_map.txt' self.node_lookup = self.load(label_lookup_path,uid_lookup_path) def load(self,label_lookup_path,uid_lookup_path): #加载分类字符串n**********对应分类名称的文件 proto_as_ascii_lines = tf.gfile.GFile(uid_lookup_path).readlines() uid_to_human = {} #一行一行读取数据 for line in proto_as_ascii_lines: #去掉换行符 line = line.strip('\n') #按照'\t'分割 parsed_items = line.split('\t') #获取分类编号 human_string = parsed_items[1] #保存编号字符串n********于分类名称映射的关系 uid_to_human[uid] = human_string #加载分类字符串n*********对应分类编号1-1000的文件 proto_as_ascii = tf.gfile.GFile(label_lookup_path).readlines() node_id_to_uid = {} for line in proto_as_ascii: if line.starstwith(' target_class:'):#前面要有空格 #获取分类编号1-1000 target_class = int(line.split(': ')[1])#:后面要有空格 if line.startswith(' target_class_string:'): #获取编号字符串n******** target_class_string = line.split(': ')[1] #保存分类编号1-1000于编号字符串n********映射关系 node_id_to_uid[target_class] = target_class_string[1:-2]#第一个字符取到倒数第二个 #建立分类编好1-1000对应分类名称的映射关系 node_id_to_name = {} for key,val in node_id_to_uid.items(): #获取分类名称 name = uid_to_human[val] #建立分类编号1-1000到分类名称的映射关系 node_id_to_name[key] = name return node_id_to_name #传入分类编号1-1000,返回分类名称 def id_to_string(self,node_id): if node_id not in self.node_lookup: return'' return self.node_lookup[node_id] #创建一个图来存放google训练好的模型 with tf.gfile.FastGFile('inception_model/classify_image_graph_def.pb','rb') as f: graph_def = tf.GraphDef() graph_def.ParseFromString(f.read()) tf.import_graph_def(graph_def,name = '') with tf.Session() as sess: softmax_tensor = sess.graph.get_tensor_by_name('softmax:0') #遍历目录 for root,dirs,files in os.walk('images/'): for file in files: #载入图片 image_data = tf.gfile.GFile(os.path.join(root,file),'rb').read() predictions = sess.run(softmax_tensor,{'DecodeJpeg/contents:0':image_data})#图片格式为jpg predictions = np.squeeze(predictions)#把结果转化为一维数据 #打印图片路径及名称 image_path = os.path.join(root.file) print(image_path) #显示图片 img = Image.open(image_path) plt.imshow(img) plt.axis('off') plt.show #排序 top_k = predictions.argsort()[-5:][::-1] node_lookup = NodeLookup() for node_id in top_k: #获取分类名称 human_string = node_lookup.id_to_string(node_id) #获取该分类的置信度 score = predictions[node_id] print('%s(score = %.5f)'%(human_string.score)) print() ``` NameError Traceback (most recent call last) <ipython-input-1-1f6ae1c54da3> in <module> 7 import matplotlib.pyplot as plt 8 ----> 9 class NodeLookup(object): 10 def __init__(self): 11 <ipython-input-1-1f6ae1c54da3> in NodeLookup() 29 uid_to_human[uid] = human_string 30 #加载分类字符串n*********对应分类编号1-1000的文件 ---> 31 proto_as_ascii = tf.gfile.GFile(label_lookup_path).readlines() 32 node_id_to_uid = {} 33 for line in proto_as_ascii: NameError: name 'label_lookup_path' is not defined
tensorflow模型推理,两个列表串行,输出结果是第一个列表的循环,新手求教
tensorflow模型推理,两个列表串行,输出结果是第一个列表的循环,新手求教 ``` from __future__ import print_function import argparse from datetime import datetime import os import sys import time import scipy.misc import scipy.io as sio import cv2 from glob import glob import multiprocessing os.environ["CUDA_VISIBLE_DEVICES"] = "0" import tensorflow as tf import numpy as np from PIL import Image from utils import * N_CLASSES = 20 DATA_DIR = './datasets/CIHP' LIST_PATH = './datasets/CIHP/list/val2.txt' DATA_ID_LIST = './datasets/CIHP/list/val_id2.txt' with open(DATA_ID_LIST, 'r') as f: NUM_STEPS = len(f.readlines()) RESTORE_FROM = './checkpoint/CIHP_pgn' # Load reader. with tf.name_scope("create_inputs") as scp1: reader = ImageReader(DATA_DIR, LIST_PATH, DATA_ID_LIST, None, False, False, False, None) image, label, edge_gt = reader.image, reader.label, reader.edge image_rev = tf.reverse(image, tf.stack([1])) image_list = reader.image_list image_batch = tf.stack([image, image_rev]) label_batch = tf.expand_dims(label, dim=0) # Add one batch dimension. edge_gt_batch = tf.expand_dims(edge_gt, dim=0) h_orig, w_orig = tf.to_float(tf.shape(image_batch)[1]), tf.to_float(tf.shape(image_batch)[2]) image_batch050 = tf.image.resize_images(image_batch, tf.stack([tf.to_int32(tf.multiply(h_orig, 0.50)), tf.to_int32(tf.multiply(w_orig, 0.50))])) image_batch075 = tf.image.resize_images(image_batch, tf.stack([tf.to_int32(tf.multiply(h_orig, 0.75)), tf.to_int32(tf.multiply(w_orig, 0.75))])) image_batch125 = tf.image.resize_images(image_batch, tf.stack([tf.to_int32(tf.multiply(h_orig, 1.25)), tf.to_int32(tf.multiply(w_orig, 1.25))])) image_batch150 = tf.image.resize_images(image_batch, tf.stack([tf.to_int32(tf.multiply(h_orig, 1.50)), tf.to_int32(tf.multiply(w_orig, 1.50))])) image_batch175 = tf.image.resize_images(image_batch, tf.stack([tf.to_int32(tf.multiply(h_orig, 1.75)), tf.to_int32(tf.multiply(w_orig, 1.75))])) ``` 新建网络 ``` # Create network. 
with tf.variable_scope('', reuse=False) as scope: net_100 = PGNModel({'data': image_batch}, is_training=False, n_classes=N_CLASSES) with tf.variable_scope('', reuse=True): net_050 = PGNModel({'data': image_batch050}, is_training=False, n_classes=N_CLASSES) with tf.variable_scope('', reuse=True): net_075 = PGNModel({'data': image_batch075}, is_training=False, n_classes=N_CLASSES) with tf.variable_scope('', reuse=True): net_125 = PGNModel({'data': image_batch125}, is_training=False, n_classes=N_CLASSES) with tf.variable_scope('', reuse=True): net_150 = PGNModel({'data': image_batch150}, is_training=False, n_classes=N_CLASSES) with tf.variable_scope('', reuse=True): net_175 = PGNModel({'data': image_batch175}, is_training=False, n_classes=N_CLASSES) # parsing net parsing_out1_050 = net_050.layers['parsing_fc'] parsing_out1_075 = net_075.layers['parsing_fc'] parsing_out1_100 = net_100.layers['parsing_fc'] parsing_out1_125 = net_125.layers['parsing_fc'] parsing_out1_150 = net_150.layers['parsing_fc'] parsing_out1_175 = net_175.layers['parsing_fc'] parsing_out2_050 = net_050.layers['parsing_rf_fc'] parsing_out2_075 = net_075.layers['parsing_rf_fc'] parsing_out2_100 = net_100.layers['parsing_rf_fc'] parsing_out2_125 = net_125.layers['parsing_rf_fc'] parsing_out2_150 = net_150.layers['parsing_rf_fc'] parsing_out2_175 = net_175.layers['parsing_rf_fc'] # edge net edge_out2_100 = net_100.layers['edge_rf_fc'] edge_out2_125 = net_125.layers['edge_rf_fc'] edge_out2_150 = net_150.layers['edge_rf_fc'] edge_out2_175 = net_175.layers['edge_rf_fc'] # combine resize parsing_out1 = tf.reduce_mean(tf.stack([tf.image.resize_images(parsing_out1_050, tf.shape(image_batch)[1:3,]), tf.image.resize_images(parsing_out1_075, tf.shape(image_batch)[1:3,]), tf.image.resize_images(parsing_out1_100, tf.shape(image_batch)[1:3,]), tf.image.resize_images(parsing_out1_125, tf.shape(image_batch)[1:3,]), tf.image.resize_images(parsing_out1_150, tf.shape(image_batch)[1:3,]), tf.image.resize_images(parsing_out1_175, tf.shape(image_batch)[1:3,])]), axis=0) parsing_out2 = tf.reduce_mean(tf.stack([tf.image.resize_images(parsing_out2_050, tf.shape(image_batch)[1:3,]), tf.image.resize_images(parsing_out2_075, tf.shape(image_batch)[1:3,]), tf.image.resize_images(parsing_out2_100, tf.shape(image_batch)[1:3,]), tf.image.resize_images(parsing_out2_125, tf.shape(image_batch)[1:3,]), tf.image.resize_images(parsing_out2_150, tf.shape(image_batch)[1:3,]), tf.image.resize_images(parsing_out2_175, tf.shape(image_batch)[1:3,])]), axis=0) edge_out2_100 = tf.image.resize_images(edge_out2_100, tf.shape(image_batch)[1:3,]) edge_out2_125 = tf.image.resize_images(edge_out2_125, tf.shape(image_batch)[1:3,]) edge_out2_150 = tf.image.resize_images(edge_out2_150, tf.shape(image_batch)[1:3,]) edge_out2_175 = tf.image.resize_images(edge_out2_175, tf.shape(image_batch)[1:3,]) edge_out2 = tf.reduce_mean(tf.stack([edge_out2_100, edge_out2_125, edge_out2_150, edge_out2_175]), axis=0) raw_output = tf.reduce_mean(tf.stack([parsing_out1, parsing_out2]), axis=0) head_output, tail_output = tf.unstack(raw_output, num=2, axis=0) tail_list = tf.unstack(tail_output, num=20, axis=2) tail_list_rev = [None] * 20 for xx in range(14): tail_list_rev[xx] = tail_list[xx] tail_list_rev[14] = tail_list[15] tail_list_rev[15] = tail_list[14] tail_list_rev[16] = tail_list[17] tail_list_rev[17] = tail_list[16] tail_list_rev[18] = tail_list[19] tail_list_rev[19] = tail_list[18] tail_output_rev = tf.stack(tail_list_rev, axis=2) tail_output_rev = tf.reverse(tail_output_rev, tf.stack([1])) 
raw_output_all = tf.reduce_mean(tf.stack([head_output, tail_output_rev]), axis=0) raw_output_all = tf.expand_dims(raw_output_all, dim=0) pred_scores = tf.reduce_max(raw_output_all, axis=3) raw_output_all = tf.argmax(raw_output_all, axis=3) pred_all = tf.expand_dims(raw_output_all, dim=3) # Create 4-d tensor. raw_edge = tf.reduce_mean(tf.stack([edge_out2]), axis=0) head_output, tail_output = tf.unstack(raw_edge, num=2, axis=0) tail_output_rev = tf.reverse(tail_output, tf.stack([1])) raw_edge_all = tf.reduce_mean(tf.stack([head_output, tail_output_rev]), axis=0) raw_edge_all = tf.expand_dims(raw_edge_all, dim=0) pred_edge = tf.sigmoid(raw_edge_all) res_edge = tf.cast(tf.greater(pred_edge, 0.5), tf.int32) # prepare ground truth preds = tf.reshape(pred_all, [-1,]) gt = tf.reshape(label_batch, [-1,]) weights = tf.cast(tf.less_equal(gt, N_CLASSES - 1), tf.int32) # Ignoring all labels greater than or equal to n_classes. mIoU, update_op_iou = tf.contrib.metrics.streaming_mean_iou(preds, gt, num_classes=N_CLASSES, weights=weights) macc, update_op_acc = tf.contrib.metrics.streaming_accuracy(preds, gt, weights=weights) # # Which variables to load. # restore_var = tf.global_variables() # # Set up tf session and initialize variables. # config = tf.ConfigProto() # config.gpu_options.allow_growth = True # # gpu_options=tf.GPUOptions(per_process_gpu_memory_fraction=0.7) # # config=tf.ConfigProto(gpu_options=gpu_options) # init = tf.global_variables_initializer() # evaluate prosessing parsing_dir = './output' # Set up tf session and initialize variables. config = tf.ConfigProto() config.gpu_options.allow_growth = True ``` 以上是初始化网络和初始化参数载入模型,下面定义两个函数分别处理val1.txt和val2.txt两个列表内部的数据。 ``` # 处理第一个列表函数 def humanParsing1(): # Which variables to load. restore_var = tf.global_variables() init = tf.global_variables_initializer() with tf.Session(config=config) as sess: sess.run(init) sess.run(tf.local_variables_initializer()) # Load weights. loader = tf.train.Saver(var_list=restore_var) if RESTORE_FROM is not None: if load(loader, sess, RESTORE_FROM): print(" [*] Load SUCCESS") else: print(" [!] Load failed...") # Create queue coordinator. coord = tf.train.Coordinator() # Start queue threads. threads = tf.train.start_queue_runners(coord=coord, sess=sess) # Iterate over training steps. for step in range(NUM_STEPS): # parsing_, scores, edge_ = sess.run([pred_all, pred_scores, pred_edge])# , update_op parsing_, scores, edge_ = sess.run([pred_all, pred_scores, pred_edge]) # , update_op print('step {:d}'.format(step)) print(image_list[step]) img_split = image_list[step].split('/') img_id = img_split[-1][:-4] msk = decode_labels(parsing_, num_classes=N_CLASSES) parsing_im = Image.fromarray(msk[0]) parsing_im.save('{}/{}_vis.png'.format(parsing_dir, img_id)) coord.request_stop() coord.join(threads) # 处理第二个列表函数 def humanParsing2(): # Set up tf session and initialize variables. config = tf.ConfigProto() config.gpu_options.allow_growth = True # gpu_options=tf.GPUOptions(per_process_gpu_memory_fraction=0.7) # config=tf.ConfigProto(gpu_options=gpu_options) # Which variables to load. restore_var = tf.global_variables() init = tf.global_variables_initializer() with tf.Session(config=config) as sess: # Create queue coordinator. coord = tf.train.Coordinator() sess.run(init) sess.run(tf.local_variables_initializer()) # Load weights. loader = tf.train.Saver(var_list=restore_var) if RESTORE_FROM is not None: if load(loader, sess, RESTORE_FROM): print(" [*] Load SUCCESS") else: print(" [!] 
Load failed...") LIST_PATH = './datasets/CIHP/list/val1.txt' DATA_ID_LIST = './datasets/CIHP/list/val_id1.txt' with open(DATA_ID_LIST, 'r') as f: NUM_STEPS = len(f.readlines()) # with tf.name_scope("create_inputs"): with tf.name_scope(scp1): tf.get_variable_scope().reuse_variables() reader = ImageReader(DATA_DIR, LIST_PATH, DATA_ID_LIST, None, False, False, False, coord) image, label, edge_gt = reader.image, reader.label, reader.edge image_rev = tf.reverse(image, tf.stack([1])) image_list = reader.image_list # Start queue threads. threads = tf.train.start_queue_runners(coord=coord, sess=sess) # Load weights. loader = tf.train.Saver(var_list=restore_var) if RESTORE_FROM is not None: if load(loader, sess, RESTORE_FROM): print(" [*] Load SUCCESS") else: print(" [!] Load failed...") # Iterate over training steps. for step in range(NUM_STEPS): # parsing_, scores, edge_ = sess.run([pred_all, pred_scores, pred_edge])# , update_op parsing_, scores, edge_ = sess.run([pred_all, pred_scores, pred_edge]) # , update_op print('step {:d}'.format(step)) print(image_list[step]) img_split = image_list[step].split('/') img_id = img_split[-1][:-4] msk = decode_labels(parsing_, num_classes=N_CLASSES) parsing_im = Image.fromarray(msk[0]) parsing_im.save('{}/{}_vis.png'.format(parsing_dir, img_id)) coord.request_stop() coord.join(threads) if __name__ == '__main__': humanParsing1() humanParsing2() ``` 最终输出结果一直是第一个列表里面的循环,代码上用了 self.queue = tf.train.slice_input_producer([self.images, self.labels, self.edges], shuffle=shuffle),队列的方式进行多线程推理。最终得到的结果一直是第一个列表的循环,求大神告诉问题怎么解决。
tensorflow debug 调试错误
# -*- coding: utf-8 -*- """ Created on Sat Oct 28 15:40:51 2017 @author: Administrator """ import tensorflow as tf from tensorflow.python import debug as tf_debug '''载入数据''' from aamodule import input_data mnist = input_data.read_data_sets('d://MNIST',one_hot=True) '''构建回归模型''' #定义回归模型 x = tf.placeholder(tf.float32,[None,784]) y = tf.placeholder(tf.float32,[None,10]) W = tf.Variable(tf.zeros([784,10])) b = tf.Variable(tf.zeros([10])) y_ = tf.matmul(x,W) + b #预测值 #定义损失函数和优化器 cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=y_,labels=y) train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy) '''训练模型''' sess = tf.InteractiveSession() sess.run(tf.global_variables_initializer()) sess = tf_debug.LocalCLIDebugWrapperSession(sess,ui_type='readline') #sess.add_tensor_filter("has_inf_or_nan", tf_debug.has_inf_or_nan) #Train for i in range(1000): batch_xs,batch_ys = mnist.train.next_batch(100) sess.run(train_step,feed_dict={x:batch_xs,y:batch_ys}) #评估训练好的模型 correct_prediction = tf.equal(tf.argmax(y_,1),tf.argmax(y,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32)) #计算模型在测试集上的准确率 print(sess.run(accuracy,feed_dict={x:mnist.test.images,y:mnist.test.labels})) 加入sess = tf_debug.LocalCLIDebugWrapperSession(sess,ui_type='readline')后就运行不了了,ValueError: Exhausted all fallback ui_types. ``` ```
基于springmvc+mybatis的登录表单提交后404
前端填写正确的用户名和密码后,提交登录,浏览器显示404,前端提交的地址是/userlogin,后端对接的/user/userlogin也是对的,浏览器f12里显示请求的确成功传给了后端,但是不知道为什么404。用的是tomcat8.5,java版本1.8,phpstudy最新版,mysql5.7.26。 ## 这是所用到的包: ![图片说明](https://img-ask.csdn.net/upload/202002/01/1580567617_880785.jpg) ## 这是目录结构: ![图片说明](https://img-ask.csdn.net/upload/202002/01/1580567511_223692.jpg) ## 这是web.xml: ```xml <?xml version="1.0" encoding="UTF-8"?> <web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://java.sun.com/xml/ns/javaee" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" id="WebApp_ID" version="2.5"> <display-name>springmvc_test</display-name> <welcome-file-list> <welcome-file>index.html</welcome-file> <welcome-file>index.jsp</welcome-file> </welcome-file-list> <!-- 配置spring监听器 --> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> <!-- 加载spring配置文件 --> <context-param> <param-name>contextConfigLocation</param-name> <param-value>classpath*:config/context-config.xml</param-value> </context-param> <!-- 配置前端控制器 --> <servlet> <servlet-name>springmvc</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <init-param> <!-- DispatcherServlet在初始化方法里面会读取该初始化参数的值来获得 spring配置文件的位置 ,然后启动spring容器。 --> <param-name>contextConfigLocation</param-name> <param-value>classpath*:config/springmvc-config.xml</param-value> </init-param> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>springmvc</servlet-name> <url-pattern>/</url-pattern> </servlet-mapping> <!-- 配置字符编码 --> <filter> <filter-name>encodingFilter</filter-name> <filter-class>org.springframework.web.filter.CharacterEncodingFilter</filter-class> <init-param> <param-name>encoding</param-name> <param-value>UTF-8</param-value> </init-param> </filter> <filter-mapping> <filter-name>encodingFilter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> </web-app> ``` ## 这是context-config.xml: ```xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:aop="http://www.springframework.org/schema/aop" xmlns:tx="http://www.springframework.org/schema/tx" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd"> <!-- 扫描service包下的注解 --> <context:component-scan base-package="com.test.service"></context:component-scan> <!-- 配置数据库 --> <!-- 加载配置文件 --> <!-- <context:property-placeholder location="classpath:jdbc.properties"/> --> <bean id="dateSource" class="com.mchange.v2.c3p0.ComboPooledDataSource"> <property name="driver" value="com.mysql.jdbc.Driver"></property> <property name="url" value="jdbc:mysql://localhost:3306/Eday_Test"></property> <property name="username" value="root"></property> <property name="password" value="123456"></property> </bean> <!-- 配置Sqlsessionfactory并将数据源注入 --> <bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean"> <!-- 引入数据源 --> <property name="dateSource" ref="dateSource"></property> <!-- 载入mybatis配置文件 --> <property name="configLocation" 
value="classpath:mybatis-config.xml"></property> <!-- 载入配置mapper映射的xml --> <property name="mapperLocations" value="classpath:com/test/mapper/*.xml"></property> </bean> <!-- 配置扫描mapper接口 --> <bean class="org.mybatis.spring.mapper.MapperScannerConfigurer"> <property name="basePackge" value="com.test.mapper"></property> <property name="sqlSessionFactoryBeanName" value="sqlSessionFactory"></property> </bean> <!-- 配置事务管理器 --> <bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager"> <property name="dateSource" ref="dateSource"></property> </bean> <tx:annotation-driven transaction-manager="transactionManager"/> </beans> ``` ## 这是mybatis-config.xml: ```xml <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE configuration PUBLIC "-//mybatis.org//DTD Config 3.0//EN" "http://mybatis.org/dtd/mybatis-3-config.dtd"> <configuration> <typeAliases> <!-- 配置别名 --> <typeAlias alias="User" type="com.test.pojo.User"/> </typeAliases> </configuration> ``` ## 这是springmvc-config.xml: ```xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:mvc="http://www.springframework.org/schema/mvc" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc.xsd"> <!-- 扫描controller包下的注解 --> <context:component-scan base-package="com.test.controller"></context:component-scan> <!-- 开启注解 --> <mvc:annotation-driven></mvc:annotation-driven> <!-- 静态资源访问 --> <mvc:default-servlet-handler/> <!-- 视图解析器 --> <bean class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <!-- 配置试图解析的默认路径,即配置页面的根路径 --> <property name="prefix" value="/"></property> <property name="suffix" value=".jsp"></property> </bean> </beans> ``` ## 这是UserController.java: ```java package com.test.controller; import javax.servlet.http.HttpSession; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.context.annotation.Scope; import org.springframework.stereotype.Controller; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.RequestMethod; import org.springframework.web.servlet.ModelAndView; import com.test.pojo.User; import com.test.service.UserService; //声明控制器 @Controller //设置bean的scope属性为多例(prototype) @Scope("prototype") //设置请求映射,当客户端请求/user时,转到该控制器处理 @RequestMapping("/user") public class UserController { @Autowired private UserService userService; @RequestMapping(value="/userlogin") public ModelAndView login(String user_Name,String user_pwd,ModelAndView mv,HttpSession session){ //调用userService中的login方法处理user实体类对象 User user = userService.login(user_Name,user_pwd); //登录的逻辑判断,判断条件是返回结果不为空 if(user!=null){ //登陆成功,将user对象设置到HttpSession作用范围域中,相当于服务端的cookie,有效时间默认30分钟 //在程序运行期间,在任意页面都可以提取它的值。 session.setAttribute("user",user); //转发到main请求 //登录成功,跳转页面 mv.setViewName("login/login-success"); }else{ //登录失败,向前端传递失败信息 mv.addObject("message","用户名或密码错误,请重新输入!"); //登录失败,跳转到登录页面 mv.setViewName("login"); } return mv; } //跳转到用户注册界面 @RequestMapping(value="/userregister"/*,method=RequestMethod.POST*/) public String register(User user){ String user_Name = user.getUser_Name(); //如果数据库中没有该用户,可以注册,否则跳转页面 
if(userService.findByUserName(user_Name)==null){ //添加用户 userService.register(user); //注册成功,跳转到主页面 return "index"; }else{ //注册失败,跳转到错误页面 return "error"; } } } ``` ## 这是UserMapper.java: ```java package com.test.mapper; import org.apache.ibatis.annotations.Param; import com.test.pojo.User; public interface UserMapper { //根据用户名和密码查找,mybatis中有多个参数时,需要使用@Param注解 User findByUserNameAndPassword(@Param("user_Name")String user_Name,@Param("user_Pwd")String user_Pwd); //增加用户 void addUser(User user); //根据用户名查询 User findByUserName(String user_Name); } ``` ## 这是UserMapper.xml: ```xml <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE mapper PUBLIC "//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd"> <mapper namespace="com.tes.mapper.UserMapper"> <!-- 根据用户名和密码查询 --> <select id="findByUserNameAndPasssword" resultType="User"> select * from user where user_Name=#{user_Name} and user_Pwd=#{user_Pwd} </select> <!-- 增加用户 --> <insert id="addUser" parameterType="User"> insert into user (user_Name,user_Pwd,user_Email,user_NickName,user_Birth,user_Phone,user_InvitationCode) values(#{user_Name},#{user_Pwd},#{user_Email},#{user_NickName},#{user_Birth},#{user_Phone},#{user_InvitationCode}) </insert> <!-- 根据用户名查询 --> <select id="findByUserName" resultType="User"> select * from user where user_Name=#{user_Name} </select> </mapper> ``` ## 这是User.java: ```java package com.test.pojo; import java.sql.Timestamp; public class User { private int user_Id; private String user_Name; private int user_Pwd; private String user_Email; private String user_NickName; private Timestamp user_Time; private String user_Birth; private int user_Fans; private int user_Follow; private int user_Score; private String user_HeadImgAddr; private int user_Phone; private String user_InvitationCode; public int getUser_Id() { return user_Id; } public void setUser_Id(int user_Id) { this.user_Id = user_Id; } public String getUser_Name() { return user_Name; } public void setUser_Name(String user_Name) { this.user_Name = user_Name; } public int getUser_Pwd() { return user_Pwd; } public void setUser_Pwd(int user_Pwd) { this.user_Pwd = user_Pwd; } public String getUser_Email() { return user_Email; } public void setUser_Email(String user_Email) { this.user_Email = user_Email; } public String getUser_NickName() { return user_NickName; } public void setUser_NickName(String user_NickName) { this.user_NickName = user_NickName; } public Timestamp getUser_Time() { return user_Time; } public void setUser_Time(Timestamp user_Time) { this.user_Time = user_Time; } public String getUser_Birth() { return user_Birth; } public void setUser_Birth(String user_Birth) { this.user_Birth = user_Birth; } public int getUser_Fans() { return user_Fans; } public void setUser_Fans(int user_Fans) { this.user_Fans = user_Fans; } public int getUser_Follow() { return user_Follow; } public void setUser_Follow(int user_Follow) { this.user_Follow = user_Follow; } public int getUser_Score() { return user_Score; } public void setUser_Score(int user_Score) { this.user_Score = user_Score; } public String getUser_HeadImgAddr() { return user_HeadImgAddr; } public void setUser_HeadImgAddr(String user_HeadImgAddr) { this.user_HeadImgAddr = user_HeadImgAddr; } public int getUser_Phone() { return user_Phone; } public void setUser_Phone(int user_Phone) { this.user_Phone = user_Phone; } public String getUser_InvitationCode() { return user_InvitationCode; } public void setUser_InvitationCode(String user_InvitationCode) { this.user_InvitationCode = user_InvitationCode; } 
@Override public String toString() { return "User [user_Id=" + user_Id + ", user_Name=" + user_Name + ", user_Pwd=" + user_Pwd + ", user_Email=" + user_Email + ", user_NickName=" + user_NickName + ", user_Time=" + user_Time + ", user_Birth=" + user_Birth + ", user_Fans=" + user_Fans + ", user_Follow=" + user_Follow + ", user_Score=" + user_Score + ", user_HeadImgAddr=" + user_HeadImgAddr + ", user_Phone=" + user_Phone + ", user_InvitationCode=" + user_InvitationCode + "]"; } } ``` ## 这是UserService.java: ```java package com.test.service; import com.test.pojo.User; public interface UserService { //通过用户名及密码核查用户登录 User login(String user_Name,String user_Pwd); //增加用户 void register(User user); //根据用户名查询 User findByUserName(String user); } ``` ## 这是UserServiceImpl.java: ```java package com.test.service.impl; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; import org.springframework.transaction.annotation.Transactional; import com.test.mapper.UserMapper; import com.test.pojo.User; import com.test.service.UserService; @Service @Transactional public class UserServiceImpl implements UserService { //注入UserMapper接口 @Autowired private UserMapper userMapper; //登录,根据用户名和密码进行查询 @Override public User login(String user_Name,String user_Pwd){ return userMapper.findByUserNameAndPassword(user_Name,user_Pwd); } //注册,增加用户 @Override public void register(User user){ userMapper.addUser(user); } //根据用户名查询 @Override public User findByUserName(String user_Name){ return userMapper.findByUserName(user_Name); } } ``` ##这是login.jsp ```jsp <%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%> <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <link rel="stylesheet" type="text/css" href="../css/login.css" /> <title>登录</title> </head> <body> <!--导航栏--> <nav class="nav"> <ul> <li class="logo"><a href="#">Eday</a></li> <li class="shouye"><a href="../index.jsp" class="neirong">首页</a></li> <li class="fgx">|</li> <li><a href="#" class="neirong">文章列表</a></li> <li class="fgx">|</li> <li><a href="#" class="neirong">留言板</a></li> <li class="fgx">|</li> <li><a href="#" class="neirong">更新日志</a></li> </ul> </nav> <!--登录板块--> <header class="header"> <p class="logintitle"> <b>登录</b> </p> <form action="${pageContext.request.contextPath }/user/userlogin" method="post" class=login-form> <p class= login-username-p> <label for="username" class="login-username-text">用户名:</label> <input type=text name="user_Name" class="login-username-input"> <br><a href="register.jsp" class=login-register>注册</a> </p> <p class="login-psw-p"> <label for="psw" class="login-psw-text">密码:</label> <input type=password name="user_Pwd" class="login-psw-input"> <br><a href="#" class=login-forget>忘记密码</a> </p> <div class=login-button-div> <label for=button></label> <button type=sublim class=login-button>登录</button> </div> </form> </header> </body> </html> ``` ## 这是login-success.jsp: ```jsp <%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%> <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <title>登录成功,请等待跳转...</title> <script type="text/javascript"> onload=function(){ setInterval(go, 1000); }; var x=3; //利用了全局变量来执行 function go(){ x--; if(x>0){ 
document.getElementById("sp").innerHTML=x; //每次设置的x的值都不一样了。 }else{ location.href='../index.jsp'; } } </script> </head> <body> <!--导航栏--> <nav class="nav"> <ul> <li class="logo"><a href="#">Eday</a></li> <li class="shouye"><a href="../index.jsp" class="neirong">首页</a></li> <li class="fgx">|</li> <li><a href="#" class="neirong">文章列表</a></li> <li class="fgx">|</li> <li><a href="#" class="neirong">留言板</a></li> <li class="fgx">|</li> <li><a href="#" class="neirong">更新日志</a></li> </ul> </nav> <header> <div> <p>登录成功!页面将在3秒后自动跳转,请稍等...</p> </div> </header> </body> </html> ``` ## 这是数据库: ![图片说明](https://img-ask.csdn.net/upload/202002/01/1580568589_694404.jpg)
关于html前端的datatables的问题
## # 请问大神们,datatables在**初始化完后**,怎么添加数据?我写了一个添加按钮,想向datatables添加数据,还有就是怎么清空表格数据? ![图片说明](https://img-ask.csdn.net/upload/202001/27/1580136818_897690.png) ``` $('#sampleTable').DataTable({ "bPaginate": false, //翻页功能 "bLengthChange": false, //改变每页显示数据数量 "bFilter": false, //过滤功能 "bSort": true, //排序功能 "bInfo": false,//页脚信息 "bAutoWidth": true,//自动宽度 "bProcessing": true, //DataTables载入数据时,是否显示‘进度’提示 "bStateSave": true, }); ```
TensorFlow RNN/LSTM code does not run correctly?
The error is `ValueError: None values not supported`, raised at the `cross_entropy` line. Thanks, everyone. (A sketch of the likely fix follows the listing.)

```python
# 7.2 RNN
import tensorflow as tf
# tf.reset_default_graph()
from tensorflow.examples.tutorials.mnist import input_data

# Load the dataset
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# Input images are 28*28
n_inputs = 28    # feed one row per step; each row has 28 values
max_time = 28    # 28 rows in total
lstm_size = 100  # hidden units
n_classes = 10   # 10 classes
batch_size = 50  # 50 samples per batch
n_batch = mnist.train.num_examples // batch_size  # number of batches

# None here would let the first dimension be any length
x = tf.placeholder(tf.float32, [batch_size, 784])
# Ground-truth labels
y = tf.placeholder(tf.float32, [batch_size, 10])

# Initialize the weights
weights = tf.Variable(tf.truncated_normal([lstm_size, n_classes], stddev=0.1))
# Initialize the biases
biases = tf.Variable(tf.constant(0.1, shape=[n_classes]))

# Define the RNN
def RNN(X, weights, biases):
    # inputs = [batch_size, max_time, n_inputs]
    inputs = tf.reshape(X, [-1, max_time, n_inputs])
    # Basic LSTM cell
    lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(lstm_size)
    # final_state[0] is the cell state
    # final_state[1] is the hidden state
    outputs, final_state = tf.nn.dynamic_rnn(lstm_cell, inputs, dtype=tf.float32)
    results = tf.nn.softmax(tf.matmul(final_state[1], weights) + biases)

# Compute the RNN's output
prediction = RNN(x, weights, biases)
# Loss
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=prediction))
# Optimize with AdamOptimizer
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
# Store the comparison results in a boolean list
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))
# Accuracy
accuracy = tf.reduce_mean(tf.cast(correct_precdition, tf.float32))
# Initialization
init = tf.global_variable_initializer()

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(6):
        for batch in range(n_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys})
        acc = sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels})
        print('Iter' + str(epoch) + ',Testing Accuracy = ' + str(acc))
```
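For what it's worth, the `None` almost certainly comes from `RNN()` never returning anything, so `prediction` is `None` by the time it reaches the loss. A minimal sketch of the likely fix, assuming the rest of the script stays as posted (note the loss expects raw logits, so the extra softmax can be dropped as well):

```python
def RNN(X, weights, biases):
    inputs = tf.reshape(X, [-1, max_time, n_inputs])
    lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(lstm_size)
    outputs, final_state = tf.nn.dynamic_rnn(lstm_cell, inputs, dtype=tf.float32)
    # The missing statement: without a return the function yields None,
    # and softmax_cross_entropy_with_logits_v2 raises
    # "ValueError: None values not supported."
    return tf.matmul(final_state[1], weights) + biases  # raw logits

prediction = RNN(x, weights, biases)

# Two spelling fixes further down, assuming nothing else changes:
#   correct_precdition            -> correct_prediction
#   tf.global_variable_initializer -> tf.global_variables_initializer()
```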
An error occurs when loading a PDF? PDF.js v1.10.100 (build: ea29ec83) message: Failed to fetch
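"Failed to fetch" from PDF.js is usually the browser's fetch/XHR failing before PDF.js parses anything: the page was opened from `file://`, the PDF URL is wrong, or a cross-origin server is missing CORS headers. A minimal sketch of loading a same-origin file with the v1.x API; the file name and canvas id are hypothetical:

```js
// PDF.js 1.x exposes a global PDFJS object (pdf.js and pdf.worker.js must be served).
PDFJS.workerSrc = 'pdf.worker.js';

// Serve both the page and the PDF over HTTP from the same origin;
// file:// URLs and missing CORS headers both surface as "Failed to fetch".
PDFJS.getDocument('sample.pdf').then(function (pdf) {
    return pdf.getPage(1);
}).then(function (page) {
    var viewport = page.getViewport(1.0);
    var canvas = document.getElementById('the-canvas');
    canvas.width = viewport.width;
    canvas.height = viewport.height;
    page.render({ canvasContext: canvas.getContext('2d'), viewport: viewport });
});
```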
Can caffe2 load multiple models at the same time?
I'm doing object detection with caffe2 and need to load two different models at once, but then nothing is detected at all. In the same project, loading either model on its own gives correct detections; once both are loaded together, the first model loads fine, loading the second fails, and every subsequent detection fails too.
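A common cause is the two nets colliding in caffe2's single default workspace: both export the same blob names (`data`, `conv1_w`, ...), so the second init overwrites the first. A hedged sketch of isolating them with separate workspaces; `workspace.SwitchWorkspace` and `workspace.Predictor` are stock caffe2 Python calls, while the file names below are hypothetical:

```python
from caffe2.python import workspace

def load_predictor(init_pb, predict_pb):
    # The two serialized NetDefs saved at training/export time.
    with open(init_pb, 'rb') as f:
        init_net = f.read()
    with open(predict_pb, 'rb') as f:
        predict_net = f.read()
    return workspace.Predictor(init_net, predict_net)

# Load each detector inside its own workspace so blob names
# from model A cannot clobber model B.
workspace.SwitchWorkspace('model_a', True)  # True = create if missing
detector_a = load_predictor('a_init_net.pb', 'a_predict_net.pb')

workspace.SwitchWorkspace('model_b', True)
detector_b = load_predictor('b_init_net.pb', 'b_predict_net.pb')

def detect(ws_name, predictor, img):
    # Switch back to the owning workspace before every run.
    workspace.SwitchWorkspace(ws_name)
    return predictor.run([img])

# results_a = detect('model_a', detector_a, preprocessed_image)
# results_b = detect('model_b', detector_b, preprocessed_image)
```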
How do I use a self-made dataset modeled on Fashion-MNIST?
I'm working on a related project that needs its own dataset, so I built one to spec, shown here: ![screenshot](https://img-ask.csdn.net/upload/201911/27/1574843320_334333.png) I'm using the MXNet framework in a Jupyter notebook. My approach was to read the test and training sets into arrays and wrap them. The training set is below; the two arrays are the image pixels and the corresponding labels: ![screenshot](https://img-ask.csdn.net/upload/201911/27/1574843532_588917.png) The training loop is below; to iterate the parts separately I split train_iter into train_iter[0] and train_iter[1]: ![screenshot](https://img-ask.csdn.net/upload/201911/27/1574843838_858564.png) Then, when feeding the training model (line 12), I hit this error: ![screenshot](https://img-ask.csdn.net/upload/201911/27/1574844074_977535.png) I've puzzled over this data type for days. Is my way of loading the dataset wrong, or something else? Below is the function that loads the stock Fashion-MNIST dataset. I don't fully understand it and am still experimenting; guidance from anyone experienced would be much appreciated (a sketch of one alternative follows the code). ![screenshot](https://img-ask.csdn.net/upload/201911/27/1574844395_318556.png) Code attached:

```python
def load_data_fashion_mnist(batch_size, resize=None, root=os.path.join(
        '~', '.mxnet', 'datasets', 'fashion-mnist')):
    root = os.path.expanduser(root)  # expand the user path '~'
    transformer = []
    if resize:
        transformer += [gdata.vision.transforms.Resize(resize)]
    transformer += [gdata.vision.transforms.ToTensor()]
    transformer = gdata.vision.transforms.Compose(transformer)
    mnist_train = gdata.vision.FashionMNIST(root=root, train=True)
    mnist_test = gdata.vision.FashionMNIST(root=root, train=False)
    num_workers = 0 if sys.platform.startswith('win32') else 4
    train_iter = gdata.DataLoader(
        mnist_train.transform_first(transformer), batch_size, shuffle=True,
        num_workers=num_workers)
    test_iter = gdata.DataLoader(
        mnist_test.transform_first(transformer), batch_size, shuffle=False,
        num_workers=num_workers)
    return train_iter, test_iter

batch_size = 128
# If you hit an "out of memory" error, reduce batch_size or resize
train_iter, test_iter = load_data_fashion_mnist(batch_size, resize=224)
```
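One way to get a custom array-based dataset into the same `DataLoader` shape that `load_data_fashion_mnist` returns is to wrap the arrays in `gluon.data.ArrayDataset`; the training loop can then consume `(X, y)` batches exactly like the stock iterator instead of indexing `train_iter[0]`/`train_iter[1]`. A minimal sketch, assuming `images` is a float32 NCHW array and `labels` an integer array (both names, and the random data, are stand-ins for the home-made arrays):

```python
import numpy as np
from mxnet import nd
from mxnet.gluon import data as gdata

# Hypothetical home-made arrays: N 28x28 grayscale images and N labels.
images = np.random.rand(1000, 1, 28, 28).astype('float32')
labels = np.random.randint(0, 10, size=1000).astype('float32')

# ArrayDataset pairs the two arrays; DataLoader batches them, so each
# iteration yields (X, y) just like the built-in FashionMNIST loader.
dataset = gdata.ArrayDataset(nd.array(images), nd.array(labels))
train_iter = gdata.DataLoader(dataset, batch_size=128, shuffle=True)

for X, y in train_iter:
    # X: (128, 1, 28, 28) NDArray, y: (128,) NDArray -- feed these to the net
    break
```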
Web front end: adding labels or buttons onto an image; pointers from anyone with experience would be appreciated
The front end needs to load an image and then place different kinds of labels or buttons at chosen positions on that image. I've been thinking and searching for a long time without finding an approach; any guidance would be appreciated.
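The usual pattern is a relatively positioned wrapper around the `<img>` with absolutely positioned children layered on top; clicks on the wrapper give you the coordinates for placing new widgets. A minimal sketch; the ids, image path, and styling are all hypothetical:

```html
<div id="wrap" style="position: relative; display: inline-block;">
  <img src="photo.jpg" alt="">
</div>
<script>
  // Place a button wherever the user clicks on the image.
  document.getElementById('wrap').addEventListener('click', function (e) {
    var rect = this.getBoundingClientRect();
    var btn = document.createElement('button');
    btn.textContent = 'tag';
    btn.style.position = 'absolute';  // positioned against the wrapper
    btn.style.left = (e.clientX - rect.left) + 'px';
    btn.style.top = (e.clientY - rect.top) + 'px';
    this.appendChild(btn);
  });
</script>
```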
CNN: what format should the loaded dataset be in?
I'm a CNN beginner, recently practicing on a project I found on GitHub. The problem is that its dataset is not public, so I have to build my own, but I can't work out from the code how the dataset should be constructed. The code is below; the relevant part should be the DAC_Dataset class. (A sketch of the layout the class appears to expect follows the listing.)

```python
class DAC_Dataset(RNGDataFlow):
    def __init__(self, dataset_dir, train, all_classes):
        self.images = []
        if all_classes == 1:
            for directory in listdir(dataset_dir):
                for file in listdir(dataset_dir + '/' + directory):
                    if '.jpg' in file:
                        for c in classes:
                            if c[0] in directory:
                                label = c[1]
                                break
                        self.images.append([dataset_dir + '/' + directory + '/' + file, label])
        else:
            for file in listdir(dataset_dir):
                if '.jpg' in file:
                    self.images.append([dataset_dir + '/' + file, 0])
        shuffle(self.images)
        if train == 0:
            self.images = self.images[0:1000]

    def get_data(self):
        for image in self.images:
            xml_name = image[0].replace('jpg', 'xml')
            im = cv2.imread(image[0], cv2.IMREAD_COLOR)
            im = cv2.resize(im, (square_size, square_size))
            im = im.reshape((square_size, square_size, 3))
            meta = None
            if os.path.isfile(image[0].replace('jpg', 'xml')):
                meta = xml.etree.ElementTree.parse(xml_name).getroot()
            label = np.array(image[1])
            bndbox = {}
            bndbox['xmin'] = 0
            bndbox['xmax'] = 0
            bndbox['ymin'] = 0
            bndbox['ymax'] = 0
            if meta is not None:
                obj = meta.find('object')
                if obj is not None:
                    box = obj.find('bndbox')
                    if box is not None:
                        bndbox['xmin'] = int(box.find('xmin').text)
                        bndbox['xmax'] = int(box.find('xmax').text)
                        bndbox['ymin'] = int(box.find('ymin').text)
                        bndbox['ymax'] = int(box.find('ymax').text)
            bndbox['xmin'] = int(bndbox['xmin']*(square_size/IMAGE_WIDTH))
            bndbox['xmax'] = int(bndbox['xmax']*(square_size/IMAGE_WIDTH))
            bndbox['ymin'] = int(bndbox['ymin']*(square_size/IMAGE_HEIGHT))
            bndbox['ymax'] = int(bndbox['ymax']*(square_size/IMAGE_HEIGHT))
            iou = np.zeros((height_width, height_width))
            for h in range(0, height_width):
                for w in range(0, height_width):
                    rect = {}
                    rect['xmin'] = int(w*down_sample_factor)
                    rect['xmax'] = int((w+1)*down_sample_factor)
                    rect['ymin'] = int(h*down_sample_factor)
                    rect['ymax'] = int((h+1)*down_sample_factor)
                    if DEMO_DATASET == 0:
                        if intersection(rect, bndbox) == 0.0:
                            iou[h, w] = 0.0
                        else:
                            iou[h, w] = 1.0
                    else:
                        if intersection(rect, bndbox) < 0.5:
                            iou[h, w] = 0.0
                        else:
                            iou[h, w] = 1.0
                    # if iou[h,w] > 0:
                    #     cv2.rectangle(im, (int(rect['xmin']),int(rect['ymin'])), (int(rect['xmax']),int(rect['ymax'])), (0,0,iou[h,w]*255), 1)
            iou = iou.reshape((height_width, height_width, 1))
            valid = np.zeros((height_width, height_width, 4), dtype='float32')
            relative_bndboxes = np.zeros((height_width, height_width, 4), dtype='float32')
            for h in range(0, height_width):
                for w in range(0, height_width):
                    if iou[h, w] > 0.0:
                        valid[h, w, 0] = 1.0
                        valid[h, w, 1] = 1.0
                        valid[h, w, 2] = 1.0
                        valid[h, w, 3] = 1.0
                        relative_bndboxes[h, w, 0] = bndbox['xmin'] - w*down_sample_factor
                        relative_bndboxes[h, w, 1] = bndbox['ymin'] - h*down_sample_factor
                        relative_bndboxes[h, w, 2] = bndbox['xmax'] - w*down_sample_factor
                        relative_bndboxes[h, w, 3] = bndbox['ymax'] - h*down_sample_factor
                    else:
                        relative_bndboxes[h, w] = np.zeros(4)
            # cv2.rectangle(im, (bndbox['xmin'],bndbox['ymin']), (bndbox['xmax'],bndbox['ymax']), (255,0,0), 1)
            # cv2.imshow('image', im)
            # cv2.waitKey(1000)
            yield [im, label, iou, valid, relative_bndboxes]

    def size(self):
        return len(self.images)


class Model(ModelDesc):
    def _get_inputs(self):
        return [InputDesc(tf.float32, [None, square_size, square_size, 3], 'input'),
                InputDesc(tf.int32, [None], 'label'),
                InputDesc(tf.float32, [None, height_width, height_width, 1], 'ious'),
                InputDesc(tf.float32, [None, height_width, height_width, 4], 'valids'),
                InputDesc(tf.float32, [None, height_width, height_width, 4], 'bndboxes')]

    def _build_graph(self, inputs):
        image, label, ious, valids, bndboxes = inputs
        image = tf.round(image)
        fw, fa, fg = get_dorefa(BITW, BITA, BITG)
        old_get_variable = tf.get_variable

        def monitor(x, name):
            if MONITOR == 1:
                return tf.Print(x, [x], message='\n\n' + name + ': ', summarize=1000, name=name)
            else:
                return x

        def new_get_variable(v):
            name = v.op.name
            if not name.endswith('W') or 'conv1' in name or 'conv_obj' in name or 'conv_box' in name:
                return v
            else:
                logger.info("Quantizing weight {}".format(v.op.name))
                if MONITOR == 1:
                    return tf.Print(fw(v), [fw(v)], message='\n\n' + v.name + ', Quantized weights are:', summarize=100)
                else:
                    return fw(v)

        def activate(x):
            if BITA == 32:
                return tf.nn.relu(x)
            else:
                return fa(tf.nn.relu(x))

        def bn_activate(name, x):
            x = BatchNorm(name, x)
            x = monitor(x, name + '_noact_out')
            return activate(x)

        def halffire(name, x, num_squeeze_filters, num_expand_3x3_filters, skip):
            out_squeeze = Conv2D('squeeze_conv_' + name, x, out_channel=num_squeeze_filters, kernel_shape=1, stride=1, padding='SAME')
            out_squeeze = bn_activate('bn_squeeze_' + name, out_squeeze)
            out_expand_3x3 = Conv2D('expand_3x3_conv_' + name, out_squeeze, out_channel=num_expand_3x3_filters, kernel_shape=3, stride=1, padding='SAME')
            out_expand_3x3 = bn_activate('bn_expand_3x3_' + name, out_expand_3x3)
            if skip == 0:
                return out_expand_3x3
            else:
                return tf.add(x, out_expand_3x3)

        def halffire_noact(name, x, num_squeeze_filters, num_expand_3x3_filters):
            out_squeeze = Conv2D('squeeze_conv_' + name, x, out_channel=num_squeeze_filters, kernel_shape=1, stride=1, padding='SAME')
            out_squeeze = bn_activate('bn_squeeze_' + name, out_squeeze)
            out_expand_3x3 = Conv2D('expand_3x3_conv_' + name, out_squeeze, out_channel=num_expand_3x3_filters, kernel_shape=3, stride=1, padding='SAME')
            return out_expand_3x3

        with remap_variables(new_get_variable), \
                argscope([Conv2D, FullyConnected], use_bias=False, nl=tf.identity), \
                argscope(BatchNorm, decay=0.9, epsilon=1e-4):
            image = monitor(image, 'image_out')
            l = Conv2D('conv1', image, out_channel=32, kernel_shape=3, stride=2, padding='SAME')
            l = bn_activate('bn1', l)
            l = monitor(l, 'conv1_out')
            l = MaxPooling('pool1', l, shape=3, stride=2, padding='SAME')
            l = monitor(l, 'pool1_out')
            l = halffire('fire1', l, NUM_SQUEEZE_FILTERS, NUM_EXPAND_FILTERS, 0)
            l = monitor(l, 'fire1_out')
            l = MaxPooling('pool2', l, shape=3, stride=2, padding='SAME')
            l = monitor(l, 'pool2_out')
            l = halffire('fire2', l, NUM_SQUEEZE_FILTERS, NUM_EXPAND_FILTERS, 0)
            l = monitor(l, 'fire2_out')
            l = MaxPooling('pool3', l, shape=3, stride=2, padding='SAME')
            l = monitor(l, 'pool3_out')
            l = halffire('fire3', l, NUM_SQUEEZE_FILTERS, NUM_EXPAND_FILTERS, 0)
            l = monitor(l, 'fire3_out')
            l = halffire('fire4', l, NUM_SQUEEZE_FILTERS, NUM_EXPAND_FILTERS, 0)
            l = monitor(l, 'fire4_out')
            l = halffire('fire5', l, NUM_SQUEEZE_FILTERS, NUM_EXPAND_FILTERS, 0)
            l = monitor(l, 'fire5_out')
            l = halffire('fire6', l, NUM_SQUEEZE_FILTERS, NUM_EXPAND_FILTERS, 0)
            l = monitor(l, 'fire6_out')
            l = halffire('fire7', l, NUM_SQUEEZE_FILTERS, NUM_EXPAND_FILTERS, 0)
            l = monitor(l, 'fire7_out')

            # Classification
            classify = Conv2D('conv_class', l, out_channel=12, kernel_shape=1, stride=1, padding='SAME')
            classify = bn_activate('bn_class', classify)
            classify = monitor(classify, 'conv_class_out')
            logits = GlobalAvgPooling('pool_class', classify)
            class_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=label)
            class_loss = tf.reduce_mean(class_loss, name='cross_entropy_loss')
            wrong = prediction_incorrect(logits, label, 1, name='wrong-top1')
            add_moving_summary(tf.reduce_mean(wrong, name='train-error-top1'))

            # Object Detection
            l = tf.concat([l, classify], axis=3)
            objdetect = Conv2D('conv_obj', l, out_channel=1, kernel_shape=1, stride=1, padding='SAME')
            objdetect = tf.identity(objdetect, name='objdetect_out')
            objdetect_loss = tf.losses.hinge_loss(labels=ious, logits=objdetect)
            bndbox = Conv2D('conv_box', l, out_channel=4, kernel_shape=1, stride=1, padding='SAME')
            bndbox = tf.identity(bndbox, name='bndbox_out')
            bndbox = tf.multiply(bndbox, valids, name='mult0')
            bndbox_loss = tf.losses.mean_squared_error(labels=bndboxes, predictions=bndbox)

            # weight decay on all W of fc layers
            # reg_cost = regularize_cost('(fire7|conv_obj|conv_box).*/W', l2_regularizer(1e-5), name='regularize_cost')
            # cost = class_loss*objdetect_loss*bndbox_loss
            # cost = class_loss + objdetect_loss + bndbox_loss + reg_cost
            cost = class_loss + 10*objdetect_loss + bndbox_loss
            add_moving_summary(class_loss, objdetect_loss, bndbox_loss, cost)
            self.cost = cost

        tf.get_variable = old_get_variable

    def _get_optimizer(self):
        lr = tf.get_variable('learning_rate', initializer=1e-2, trainable=False)
        opt = tf.train.AdamOptimizer(lr, epsilon=1e-5)
        # lr = tf.get_variable('learning_rate', initializer=1e-1, trainable=False)
        # opt = tf.train.MomentumOptimizer(lr, momentum=0.9)
        return opt


def get_data(dataset_dir, train):
    if DEMO_DATASET == 0:
        all_classes = 1
    else:
        all_classes = 0
    ds = DAC_Dataset(dataset_dir, train, all_classes)
    ds = BatchData(ds, BATCH_SIZE, remainder=False)
    ds = PrefetchDataZMQ(ds, nr_proc=8, hwm=6)
    return ds


def get_config():
    logger.auto_set_dir()
    data_train = get_data(args.data, 1)
    data_test = get_data(args.data, 0)
    if DEMO_DATASET == 0:
        return TrainConfig(
            dataflow=data_train,
            callbacks=[
                ModelSaver(max_to_keep=10),
                HumanHyperParamSetter('learning_rate'),
                ScheduledHyperParamSetter('learning_rate', [(40, 0.001), (60, 0.0001), (90, 0.00001)]),
                InferenceRunner(data_test,
                                [ScalarStats('cross_entropy_loss'),
                                 ClassificationError('wrong-top1', 'val-error-top1')])
            ],
            model=Model(),
            max_epoch=150
        )
    else:
        return TrainConfig(
            dataflow=data_train,
            callbacks=[
                ModelSaver(max_to_keep=10),
                HumanHyperParamSetter('learning_rate'),
                ScheduledHyperParamSetter('learning_rate', [(100, 0.001), (200, 0.0001), (250, 0.00001)])
            ],
            model=Model(),
            max_epoch=300
        )


def run_image(model, sess_init, image_dir):
    print('Running image!')
    output_names = ['objdetect_out', 'bndbox_out']
    pred_config = PredictConfig(
        model=model,
        session_init=sess_init,
        input_names=['input'],
        output_names=output_names
    )
    predictor = OfflinePredictor(pred_config)
    images = []
    metas = []
    for file in listdir(image_dir):
        if '.jpg' in file:
            images.append(file)
        if '.xml' in file:
            metas.append(file)
    images.sort()
    metas.sort()
    THRESHOLD = 0
    index = 0
    for image in images:
        meta = xml.etree.ElementTree.parse(image_dir + '/' + metas[index]).getroot()
        true_bndbox = {}
        true_bndbox['xmin'] = 0
        true_bndbox['xmax'] = 0
        true_bndbox['ymin'] = 0
        true_bndbox['ymax'] = 0
        if meta is not None:
            obj = meta.find('object')
            if obj is not None:
                box = obj.find('bndbox')
                if box is not None:
                    true_bndbox['xmin'] = int(box.find('xmin').text)
                    true_bndbox['xmax'] = int(box.find('xmax').text)
                    true_bndbox['ymin'] = int(box.find('ymin').text)
                    true_bndbox['ymax'] = int(box.find('ymax').text)
        index += 1
        im = cv2.imread(image_dir + '/' + image, cv2.IMREAD_COLOR)
        im = cv2.resize(im, (square_size, square_size))
        im = im.reshape((1, square_size, square_size, 3))
        outputs = predictor([im])
        im = cv2.imread(image_dir + '/' + image, cv2.IMREAD_COLOR)
        objdetect = outputs[0]
        bndboxes = outputs[1]
        max_pred = -100
        max_h = -1
        max_w = -1
        for h in range(0, objdetect.shape[1]):
            for w in range(0, objdetect.shape[2]):
                if objdetect[0, h, w] > max_pred:
                    max_pred = objdetect[0, h, w]
                    max_h = h
                    max_w = w
        sum_labels = 0
        bndbox = {}
        bndbox['xmin'] = 0
        bndbox['ymin'] = 0
        bndbox['xmax'] = 0
        bndbox['ymax'] = 0
        for h in range(0, objdetect.shape[1]):
            for w in range(0, objdetect.shape[2]):
                if (objdetect[0, h, w] > THRESHOLD and (h == max_h-1 or h == max_h or h == max_h+1) and (w == max_w-1 or w == max_w or w == max_w+1)) or (h == max_h and w == max_w):
                    sum_labels += 1
                    bndbox['xmin'] += int((bndboxes[0, h, w, 0] + w*down_sample_factor))
                    bndbox['ymin'] += int((bndboxes[0, h, w, 1] + h*down_sample_factor))
                    bndbox['xmax'] += int((bndboxes[0, h, w, 2] + w*down_sample_factor))
                    bndbox['ymax'] += int((bndboxes[0, h, w, 3] + h*down_sample_factor))
                    temp_xmin = int((bndboxes[0, h, w, 0] + w*down_sample_factor)*(IMAGE_WIDTH/square_size))
                    temp_ymin = int((bndboxes[0, h, w, 1] + h*down_sample_factor)*(IMAGE_HEIGHT/square_size))
                    temp_xmax = int((bndboxes[0, h, w, 2] + w*down_sample_factor)*(IMAGE_WIDTH/square_size))
                    temp_ymax = int((bndboxes[0, h, w, 3] + h*down_sample_factor)*(IMAGE_HEIGHT/square_size))
                    cv2.rectangle(im, (temp_xmin, temp_ymin), (temp_xmax, temp_ymax), (255, 0, 0), 1)
        bndbox['xmin'] = int(bndbox['xmin']*(1/sum_labels))
        bndbox['ymin'] = int(bndbox['ymin']*(1/sum_labels))
        bndbox['xmax'] = int(bndbox['xmax']*(1/sum_labels))
        bndbox['ymax'] = int(bndbox['ymax']*(1/sum_labels))
        bndbox['xmin'] = int(bndbox['xmin']*(IMAGE_WIDTH/square_size))
        bndbox['ymin'] = int(bndbox['ymin']*(IMAGE_HEIGHT/square_size))
        bndbox['xmax'] = int(bndbox['xmax']*(IMAGE_WIDTH/square_size))
        bndbox['ymax'] = int(bndbox['ymax']*(IMAGE_HEIGHT/square_size))
        bndbox2 = {}
        bndbox2['xmin'] = int(bndboxes[0, max_h, max_w, 0] + max_w*down_sample_factor)
        bndbox2['ymin'] = int(bndboxes[0, max_h, max_w, 1] + max_h*down_sample_factor)
        bndbox2['xmax'] = int(bndboxes[0, max_h, max_w, 2] + max_w*down_sample_factor)
        bndbox2['ymax'] = int(bndboxes[0, max_h, max_w, 3] + max_h*down_sample_factor)
        bndbox2['xmin'] = int(bndbox2['xmin']*(IMAGE_WIDTH/square_size))
        bndbox2['ymin'] = int(bndbox2['ymin']*(IMAGE_HEIGHT/square_size))
        bndbox2['xmax'] = int(bndbox2['xmax']*(IMAGE_WIDTH/square_size))
        bndbox2['ymax'] = int(bndbox2['ymax']*(IMAGE_HEIGHT/square_size))
        print('----------------------------------------')
        print(str(max_h*14+max_w))
        print('xmin: ' + str(bndbox2['xmin']))
        print('xmax: ' + str(bndbox2['xmax']))
        print('ymin: ' + str(bndbox2['ymin']))
        print('ymax: ' + str(bndbox2['ymax']))
        cv2.rectangle(im,
                      (int(max_w*down_sample_factor*(IMAGE_WIDTH/square_size)), int(max_h*down_sample_factor*(IMAGE_HEIGHT/square_size))),
                      (int((max_w+1)*down_sample_factor*(IMAGE_WIDTH/square_size)), int((max_h+1)*down_sample_factor*(IMAGE_HEIGHT/square_size))),
                      (0, 0, 255), 1)
        cv2.rectangle(im, (true_bndbox['xmin'], true_bndbox['ymin']), (true_bndbox['xmax'], true_bndbox['ymax']), (255, 0, 0), 2)
        cv2.rectangle(im, (bndbox2['xmin'], bndbox2['ymin']), (bndbox2['xmax'], bndbox2['ymax']), (0, 255, 0), 2)
        cv2.imshow('image', im)
        cv2.imwrite('images_log/' + image, im)
        cv2.waitKey(800)


def run_single_image(model, sess_init, image):
    print('Running single image!')
    if MONITOR == 1:
        monitor_names = ['conv_class_out', 'image_out', 'conv1_out', 'pool1_out', 'fire1_out',
                         'pool2_out', 'pool3_out', 'fire5_out', 'fire6_out', 'fire7_out']
    else:
        monitor_names = []
    output_names = ['objdetect_out', 'bndbox_out']
    output_names.extend(monitor_names)
    pred_config = PredictConfig(
        model=model,
        session_init=sess_init,
        input_names=['input'],
        output_names=output_names
    )
    predictor = OfflinePredictor(pred_config)
    if REAL_IMAGE == 1:
        im = cv2.imread(image, cv2.IMREAD_COLOR)
        im = cv2.resize(im, (square_size, square_size))
        cv2.imwrite('test_image.png', im)
        im = im.reshape((1, square_size, square_size, 3))
    else:
        im = np.zeros((1, square_size, square_size, 3))
        k = 0
        for h in range(0, square_size):
            for w in range(0, square_size):
                for c in range(0, 3):
                    # im[0][h][w][c] = 0
                    im[0][h][w][c] = k % 256
                    k += 1
    outputs = predictor([im])
    objdetect = outputs[0]
    bndboxes = outputs[1]
    max_pred = -100
    max_h = -1
    max_w = -1
    for h in range(0, objdetect.shape[1]):
        for w in range(0, objdetect.shape[2]):
            if objdetect[0, h, w] > max_pred:
                max_pred = objdetect[0, h, w]
                max_h = h
                max_w = w
    bndbox2 = {}
    bndbox2['xmin'] = int(bndboxes[0, max_h, max_w, 0] + max_w*down_sample_factor)
    bndbox2['ymin'] = int(bndboxes[0, max_h, max_w, 1] + max_h*down_sample_factor)
    bndbox2['xmax'] = int(bndboxes[0, max_h, max_w, 2] + max_w*down_sample_factor)
    bndbox2['ymax'] = int(bndboxes[0, max_h, max_w, 3] + max_h*down_sample_factor)
    bndbox2['xmin'] = int(bndbox2['xmin']*(640/square_size))
    bndbox2['ymin'] = int(bndbox2['ymin']*(360/square_size))
    bndbox2['xmax'] = int(bndbox2['xmax']*(640/square_size))
    bndbox2['ymax'] = int(bndbox2['ymax']*(360/square_size))
    # im = cv2.imread(image, cv2.IMREAD_COLOR)
    # cv2.rectangle(im, (bndbox2['xmin'], bndbox2['ymin']), (bndbox2['xmax'],bndbox2['ymax']), (0,255,0), 2)
    # cv2.imshow('image', im)
    # cv2.waitKey(2000)
    print('max_h: ' + str(max_h))
    print('max_w: ' + str(max_w))
    print('objdetect: ' + str(objdetect))
    print('bndboxes: ' + str(bndboxes[0, max_h, max_w]))
    index = 2
    for o in monitor_names:
        print(o + ', shape: ' + str(outputs[index].shape))
        if 'image' not in o:
            print(str(outputs[index]))
        if len(outputs[index].shape) == 4:
            file_name = o.split('/')[-1]
            print('Writing file... ' + file_name)
            if not os.path.exists('./log'):
                os.makedirs('./log')
            with open('./log/' + file_name + '.log', 'w') as f:
                for sample in range(0, outputs[index].shape[0]):
                    for h in range(0, outputs[index].shape[1]):
                        for w in range(0, outputs[index].shape[2]):
                            res = ''
                            for c in range(0, outputs[index].shape[3]):
                                if 'image' in file_name:
                                    res = hexFromInt(int(outputs[index][sample, h, w, c]), 8) + '_' + res
                                elif 'noact' in file_name:
                                    temp = (2**FACTOR_SCALE_BITS)*outputs[index][sample, h, w, c]
                                    res = hexFromInt(int(temp), 32) + '_' + res
                                else:
                                    res = hexFromInt(int(outputs[index][sample, h, w, c]), BITA) + '_' + res
                            f.write('0x' + res + '\n')
        index += 1


def dump_weights(meta, model, output):
    fw, fa, fg = get_dorefa(BITW, BITA, BITG)
    with tf.Graph().as_default() as G:
        tf.train.import_meta_graph(meta)
        init = get_model_loader(model)
        sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
        sess.run(tf.global_variables_initializer())
        init.init(sess)
        with sess.as_default():
            if output:
                if output.endswith('npy') or output.endswith('npz'):
                    varmanip.dump_session_params(output)
                else:
                    var = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)
                    var.extend(tf.get_collection(tf.GraphKeys.MODEL_VARIABLES))
                    var_dict = {}
                    for v in var:
                        name = varmanip.get_savename_from_varname(v.name)
                        var_dict[name] = v
                    logger.info("Variables to dump:")
                    logger.info(", ".join(var_dict.keys()))
                    saver = tf.train.Saver(
                        var_list=var_dict,
                        write_version=tf.train.SaverDef.V2)
                    saver.save(sess, output, write_meta_graph=False)
            network_model = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)
            network_model.extend(tf.get_collection(tf.GraphKeys.MODEL_VARIABLES))
            target_frequency = 200000000
            target_FMpS = 300
            non_quantized_layers = ['conv1/Conv2D', 'conv_obj/Conv2D', 'conv_box/Conv2D']
            json_out, layers_list, max_cycles = generateLayers(sess, BITA, BITW, non_quantized_layers, target_frequency, target_FMpS)
            achieved_FMpS = target_frequency/max_cycles
            if DEMO_DATASET == 0:
                generateConfig(layers_list, 'halfsqueezenet-config.h')
                genereateHLSparams(layers_list, network_model, 'halfsqueezenet-params.h', fw)
            else:
                generateConfig(layers_list, 'halfsqueezenet-config_demo.h')
                genereateHLSparams(layers_list, network_model, 'halfsqueezenet-params_demo.h', fw)
            print('|---------------------------------------------------------|')
            print('target_FMpS: ' + str(target_FMpS))
            print('achieved_FMpS: ' + str(achieved_FMpS))


if __name__ == '__main__':
    print('Start')
    parser = argparse.ArgumentParser()
    parser.add_argument('dump2_train1_test0', help='dump(2), train(1) or test(0)')
    parser.add_argument('--model', help='model file')
    parser.add_argument('--meta', help='metagraph file')
    parser.add_argument('--output', help='output for dumping')
    parser.add_argument('--gpu', help='the physical ids of GPUs to use')
    parser.add_argument('--data', help='DAC dataset dir')
    parser.add_argument('--run', help='directory of images to test')
    parser.add_argument('--weights', help='weights file')
    args = parser.parse_args()
    print('Using GPU ' + str(args.gpu))
    if args.gpu:
        os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu
    print(str(args.dump2_train1_test0))
    if args.dump2_train1_test0 == '1':
        if args.data == None:
            print('Provide DAC dataset path with --data')
            sys.exit()
        config = get_config()
        if args.model:
            config.session_init = SaverRestore(args.model)
        SimpleTrainer(config).train()
    elif args.dump2_train1_test0 == '0':
        if args.run == None:
            print('Provide images with --run ')
            sys.exit()
        if args.weights == None:
            print('Provide weights file (.npy) for testing!')
            sys.exit()
        assert args.weights.endswith('.npy')
        run_image(Model(), DictRestore(np.load(args.weights, encoding='latin1').item()), args.run)
    elif args.dump2_train1_test0 == '2':
        if args.meta == None:
            print('Provide meta file (.meta) for dumping')
            sys.exit()
        if args.model == None:
            print('Provide model file (.data-00000-of-00001) for dumping')
            sys.exit()
        dump_weights(args.meta, args.model, args.output)
    elif args.dump2_train1_test0 == '3':
        if args.run == None:
            print('Provide image with --run ')
            sys.exit()
        if args.weights == None:
            print('Provide weights file (.npy) for testing!')
            sys.exit()
        assert args.weights.endswith('.npy')
        run_single_image(Model(), DictRestore(np.load(args.weights, encoding='latin1').item()), args.run)
```
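Reading `DAC_Dataset.__init__` and `get_data`, the expected on-disk format appears to be one folder per class (whose name contains a keyword from the global `classes` list), each holding JPEG frames plus a same-named Pascal-VOC-style XML file with a single `<object><bndbox>` annotation; `get_data()` derives the XML path with `image.replace('jpg', 'xml')`. A hypothetical layout sketch, with folder and file names invented for illustration:

```
dataset_dir/
├── boat1/                 # folder name must contain a key from `classes`
│   ├── 000001.jpg         # frame, resized to square_size x square_size on load
│   ├── 000001.xml         # same basename as the jpg
│   ├── 000002.jpg
│   └── 000002.xml
└── car3/
    ├── 000001.jpg
    └── 000001.xml
```

And the minimal XML the parser would accept, again a sketch inferred from the `meta.find(...)` calls:

```xml
<annotation>
  <object>
    <bndbox>
      <xmin>123</xmin><ymin>45</ymin>
      <xmax>210</xmax><ymax>98</ymax>
    </bndbox>
  </object>
</annotation>
```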
A strange C# client problem: a DataGridView control touched from an asynchronous event
My client has two forms, call them Form1 and Form2. Form1 holds a DataGridView; the operation it needs is to repeatedly load database data into the control, clear it, and load the next round. Form2 listens for network requests and, when one arrives, tells Form1 to start working. The problem: invoking the operation from Form1 itself works fine, but invoking it from Form2 (through a delegate event, and, since listening must continue while calling into Form1, dispatched asynchronously via the thread pool) makes the program fall back to Main and throw a NullReferenceException, yet breakpoint debugging finds no null object. The exception: ![screenshot](https://img-ask.csdn.net/upload/201912/13/1576202647_246492.png) ![screenshot](https://img-ask.csdn.net/upload/201912/13/1576202720_799585.png) The offending code follows. Form1's control operation:

```csharp
public void test()
{
    Monitor.Enter(this);
    try
    {
        for (int i = 0; i < 7; i++)
        {
            Control.CheckForIllegalCrossThreadCalls = false;
            string fn = list[i];
            OpenDB(fn);
            //Thread.Sleep(1000);
        }
    }
    finally
    {
        Monitor.Exit(this);
    }
}

void connectToDB(string fn)
{
    m_dbConnection = new SQLiteConnection("Data Source=" + fn + ";Version=3;");
    //m_dbConnection.SetPassword("abc");
    m_dbConnection.SetPassword("1234LiaoQiu4321");
    m_dbConnection.Open();
}

public void OpenDB(string fn)
{
    try
    {
        string sql;
        SQLiteCommand command;
        connectToDB(fn);
        sql = @"SELECT * FROM [评分细则] ORDER BY [步骤];";
        command = new SQLiteCommand(sql, m_dbConnection);
        SQLiteDataReader reader = command.ExecuteReader();
        dataGridView2.Rows.Clear();
        object[] obj = new object[10];
        while (reader.Read())
        {
            for (int i = 0; i < 10; ++i)
            {
                obj[i] = reader[i];
            }
            dataGridView2.Rows.Add(obj);
        }
        reader.Close();
    }
    catch //(Exception ex)
    {
        MessageBox.Show("Open failed!", "Error", MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
    }
    finally
    {
        m_dbConnection.Close();
    }
}

/// <summary>
/// Event subscription
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
private void 其他窗口运行ToolStripMenuItem_Click(object sender, EventArgs e)
{
    Form2 ab = new Form2();
    ab.getEvent += test;
    ab.Show();
}
```

Form2:

```csharp
public delegate void getHandler();
public event getHandler getEvent;

private void button1_Click(object sender, EventArgs e)
{
    ThreadPool.QueueUserWorkItem(new WaitCallback(test), new object());
}

public void test(object val)
{
    getEvent();
}
```

This Form2 is a demo I wrote to simulate the faulty call; debugging real network callbacks is awkward, so a button stands in for the "message received" event. I had already considered the cross-thread control access problem, hence the line disabling the check. Three things puzzle me: 1. No exception occurs when Form1's own thread makes the call, or when Form2 calls without the asynchronous hop. 2. The first loop iteration never fails, i.e. one batch of data does load; some later iteration blows up, and which one is non-deterministic, but one always does. 3. The error does not surface where the debugger stops but back in Main. I did find one workaround: adding a delay inside the loop, Thread.Sleep(500), stops the error, but I don't understand why.
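Setting `CheckForIllegalCrossThreadCalls = false` only silences the safety check; the `DataGridView` is still being mutated from a thread-pool thread, which corrupts its internal state non-deterministically. That matches all three symptoms, including why a `Sleep` merely makes the race less likely. The standard fix is to marshal every control mutation back onto the UI thread with `Control.Invoke`. A minimal sketch, assuming the SQLite rows are first read into a `List<object[]>` on the worker thread and the rest of the form stays as in the question:

```csharp
// Call this from OpenDB instead of touching dataGridView2 directly;
// it re-dispatches itself onto the UI thread when needed.
private void FillGrid(List<object[]> rows)
{
    if (dataGridView2.InvokeRequired)
    {
        // Re-invoke this same method on the UI thread and return.
        dataGridView2.Invoke(new Action<List<object[]>>(FillGrid), rows);
        return;
    }
    dataGridView2.Rows.Clear();
    foreach (object[] row in rows)
    {
        dataGridView2.Rows.Add(row);
    }
}
```

With this in place, the `Control.CheckForIllegalCrossThreadCalls = false;` line (and the `Thread.Sleep` workaround) can be deleted.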
Tomcat starts in MyEclipse and localhost:8080 renders fine, but JSPs return a 404 error
My steps were: 1. Replace the Tomcat 7 files with the 8.0 version ![screenshot](https://img-ask.csdn.net/upload/201911/28/1574906541_978330.jpg)![screenshot](https://img-ask.csdn.net/upload/201911/28/1574906557_550828.jpg)![screenshot](https://img-ask.csdn.net/upload/201911/28/1574906572_663091.jpg) 2. Write the JSP code, then deploy it to Tomcat 7 ![screenshot](https://img-ask.csdn.net/upload/201911/28/1574906587_875678.jpg) ![screenshot](https://img-ask.csdn.net/upload/201911/28/1574906597_957023.jpg)![screenshot](https://img-ask.csdn.net/upload/201911/28/1574909209_698568.jpg) 3. Port 8080 opens, but the JSP returns 404 ![screenshot](https://img-ask.csdn.net/upload/201911/28/1574906609_517173.jpg)![screenshot](https://img-ask.csdn.net/upload/201911/28/1574906618_74955.jpg) Any help would be appreciated (I'm about to cry!!!)
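For what it's worth, when localhost:8080 works but every JSP 404s, the first things to check are that the request URL includes the web project's context path and that the JSP sits under the project's WebRoot rather than WEB-INF. The URL shape to test, with a hypothetical project name:

```
http://localhost:8080/<context-path>/<page>.jsp
e.g. http://localhost:8080/MyWebProject/index.jsp   (for WebRoot/index.jsp)
```

Mixing Tomcat 8 binaries into a MyEclipse "Tomcat 7" server definition can also break deployment silently; configuring the Tomcat 8 home as its own server entry is safer.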
After training a model, the weights don't match at recognition time; how do I fix this?
A model generated by training with ModelTraining:

```python
from imageai.Prediction.Custom import ModelTraining

model_trainer = ModelTraining()
model_trainer.setModelTypeAsResNet()
model_trainer.setDataDirectory("datasets")
# batch_size must evenly divide the number of training classes
model_trainer.trainModel(num_objects=4, num_experiments=10, enhance_data=True,
                         batch_size=2, show_network_summary=True)
```

It fails when used with imageai's detection model:

```python
# from imageai.Detection import ObjectDetection
# import os
# import time
# # timing
# start = time.time()
# execution_path = os.getcwd()
#
# detector = ObjectDetection()
# detector.setModelTypeAsRetinaNet()
#
# # load the trained file
# detector.setModelPath(os.path.join(execution_path, "model_weights.h5"))
# detector.loadModel('fastest')
#
# # save the detection result as a new image
# detections = detector.detectObjectsFromImage(input_image=os.path.join(execution_path, "./img/one.jpg"),
#                                              output_image_path=os.path.join(execution_path, "./img/image3new.jpg"))
#
# # stop timing
# end = time.time()
#
# for eachObject in detections:
#     print(eachObject["name"] + " : " + eachObject["percentage_probability"])
#     print("--------------------------------")
#
# print("\ncost time:", end - start)

#!/usr/bin/env python3
from imageai.Detection import ObjectDetection
import os

execution_path = os.getcwd()

detector = ObjectDetection()
detector.setModelTypeAsRetinaNet()
detector.setModelPath(os.path.join(execution_path, "./models/model_ex-010_acc-0.250000.h5"))
detector.loadModel()
detections = detector.detectObjectsFromImage(input_image=os.path.join(execution_path, "img/one.jpg"),
                                             output_image_path=os.path.join(execution_path, "image3new.jpg"),
                                             minimum_percentage_probability=30)

for eachObject in detections:
    print(eachObject["name"], " : ", eachObject["percentage_probability"])
    print("--------------------------------")
```

Running it raises: ValueError: You are trying to load a weight file containing 107 layers into a model with 116 layers. How should I change this so recognition works?
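The mismatch is architectural: `ModelTraining` produces a ResNet *classification* checkpoint, while `ObjectDetection` with `setModelTypeAsRetinaNet()` builds a RetinaNet *detection* graph, so the layer counts can never line up. Custom-trained prediction models are loaded through ImageAI's prediction API instead. A minimal sketch assuming ImageAI 2.0.x and the `model_class.json` that `trainModel()` writes alongside the checkpoints (the exact paths below are hypothetical):

```python
from imageai.Prediction.Custom import CustomImagePrediction
import os

execution_path = os.getcwd()

predictor = CustomImagePrediction()
predictor.setModelTypeAsResNet()  # must match the architecture used for training
predictor.setModelPath(os.path.join(execution_path, "models/model_ex-010_acc-0.250000.h5"))
# trainModel() writes the class-index mapping next to the models:
predictor.setJsonPath(os.path.join(execution_path, "datasets/json/model_class.json"))
predictor.loadModel(num_objects=4)  # same num_objects as at training time

predictions, probabilities = predictor.predictImage(
    os.path.join(execution_path, "img/one.jpg"), result_count=4)
for name, prob in zip(predictions, probabilities):
    print(name, ":", prob)
```

If actual bounding-box detection is needed, a RetinaNet/YOLO model has to be trained with a detection trainer; a classification checkpoint cannot be loaded into a detector.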