tf.app.run() fails: module 'tensorflow' has no attribute 'app'

I added the following at the end of my code:

```
if __name__ == '__main__':
    tf.app.run()
```

and it fails with:

```
AttributeError: module 'tensorflow' has no attribute 'app'
```

I haven't been able to figure out why. I also tried `pip install app` at the cmd prompt, but that didn't help either. What is causing this?

1 answer

Mate, go check whether there actually is an `app` module under your tensorflow package.

qq_43520245
eat_ replying to Fire_dadada~: `tf.app` refers to the `app` submodule inside the tensorflow package. What you have there is a standalone top-level `app` package, so shouldn't the reference be changed?
3 months ago · reply
Fireda
Fire_dadada~: There is one. The paths are E:\anaconda\Lib\site-packages\app and E:\anaconda\Lib\site-packages\app-0.0.1.dist-info
3 months ago · reply
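The comments point at the root cause: `tf.app` is looked up as an attribute of the `tensorflow` package itself, so installing the unrelated top-level `app` package from PyPI can never make it work. A minimal, TensorFlow-free sketch of that attribute-lookup behaviour (the `tensorflow_like` module here is a stand-in, not real TensorFlow):

```python
import types

# Stand-in for an installed package: attribute access on a module only sees
# names the package itself defines, never unrelated top-level packages.
tf_like = types.ModuleType("tensorflow_like")

print(hasattr(tf_like, "app"))  # False, i.e. tf_like.app raises AttributeError
```

In practice: `pip uninstall app`, and if you are on TensorFlow 2.x (where `tf.app` was removed) call `tf.compat.v1.app.run()` or just call `main()` directly.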
Other related questions
cannot import name etree when running a script via tf.app.run() from the command line
The code works fine when run as a .py file, and also in the Python shell, but running it as a script reports that the module cannot be found. I have checked and I don't have any files or folders with the same name. Python 3.6, lxml 4.3.3. How should I deal with this?
Windows fatal exception: access violation when training my own data with the Tensorflow object detection API
python 3.6, tf 1.14.0. With the Tensorflow object detection API, running the demo on images and on a camera feed both work fine, but training on my own data fails with Windows fatal exception: access violation. I'm using the ssd_mobilenet_v1_coco_2018_01_28 model, with this command:

```
python model_main.py -pipeline_config_path=/pre_model/pipeline.config -model_dir=result -num_train_steps=2000 -alsologtostderr
```

I basically followed a standard online tutorial, and it keeps failing like this. Full output:

```
(py36) D:\pythonpro\TensorFlowLearn\face_tf_model>python model_main.py -pipeline_config_path=/pre_model/pipeline.config -model_dir=result -num_train_steps=2000 -alsologtostderr
WARNING: Logging before flag parsing goes to stderr.
W0622 16:50:30.230578 14180 lazy_loader.py:50] The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
  * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
  * https://github.com/tensorflow/addons
  * https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.
W0622 16:50:30.317274 14180 deprecation_wrapper.py:119] From D:\Anaconda3\libdata\tf_models\research\slim\nets\inception_resnet_v2.py:373: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead.
W0622 16:50:30.355400 14180 deprecation_wrapper.py:119] From D:\Anaconda3\libdata\tf_models\research\slim\nets\mobilenet\mobilenet.py:397: The name tf.nn.avg_pool is deprecated. Please use tf.nn.avg_pool2d instead.
W0622 16:50:30.388313 14180 deprecation_wrapper.py:119] From model_main.py:109: The name tf.app.run is deprecated. Please use tf.compat.v1.app.run instead.
W0622 16:50:30.397290 14180 deprecation_wrapper.py:119] From D:\Anaconda3\envs\py36\lib\site-packages\object_detection-0.1-py3.6.egg\object_detection\utils\config_util.py:98: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.
Windows fatal exception: access violation

Current thread 0x00003764 (most recent call first):
  File "D:\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 84 in _preread_check
  File "D:\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 122 in read
  File "D:\Anaconda3\envs\py36\lib\site-packages\object_detection-0.1-py3.6.egg\object_detection\utils\config_util.py", line 99 in get_configs_from_pipeline_file
  File "D:\Anaconda3\envs\py36\lib\site-packages\object_detection-0.1-py3.6.egg\object_detection\model_lib.py", line 606 in create_estimator_and_inputs
  File "model_main.py", line 71 in main
  File "D:\Anaconda3\envs\py36\lib\site-packages\absl\app.py", line 251 in _run_main
  File "D:\Anaconda3\envs\py36\lib\site-packages\absl\app.py", line 300 in run
  File "D:\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\platform\app.py", line 40 in run
  File "model_main.py", line 109 in <module>

(py36) D:\pythonpro\TensorFlowLearn\face_tf_model>
```

Any pointers would be appreciated.
Deep learning: Tensorflow DCGAN errors out when training on images

```
Traceback (most recent call last):
  File "main.py", line 147, in <module>
    tf.app.run()
  File "/home/Natalie2/anaconda3/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "main.py", line 70, in main
    flags_dict = {k:FLAGS[k].value for k in FLAGS}
TypeError: '_FlagValues' object is not iterable
```
How do I resume training from a checkpoint in the tensorflow cifar10 tutorial? I want each run to continue from the previous result.
cifar10_train.py:

```
FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_string('train_dir', 'D:/tmp/cifar10_trainn',
                           """Directory where to write event logs """
                           """and checkpoint.""")
tf.app.flags.DEFINE_integer('max_steps', 100000,
                            """Number of batches to run.""")
tf.app.flags.DEFINE_boolean('log_device_placement', False,
                            """Whether to log device placement.""")
tf.app.flags.DEFINE_integer('log_frequency', 10,
                            """How often to log results to the console.""")


def train():
  """Train CIFAR-10 for a number of steps."""
  with tf.Graph().as_default():
    global_step = tf.train.get_or_create_global_step()

    # Get images and labels for CIFAR-10.
    # Force input pipeline to CPU:0 to avoid operations sometimes ending up on
    # GPU and resulting in a slow down.
    with tf.device('/cpu:0'):
      images, labels = cifar10.distorted_inputs()

    # Build a Graph that computes the logits predictions from the
    # inference model.
    logits = cifar10.inference(images)

    # Calculate loss.
    loss = cifar10.loss(logits, labels)

    # Build a Graph that trains the model with one batch of examples and
    # updates the model parameters.
    train_op = cifar10.train(loss, global_step)

    class _LoggerHook(tf.train.SessionRunHook):
      """Logs loss and runtime."""

      def begin(self):
        self._step = -1
        self._start_time = time.time()

      def before_run(self, run_context):
        self._step += 1
        return tf.train.SessionRunArgs(loss)  # Asks for loss value.

      def after_run(self, run_context, run_values):
        if self._step % FLAGS.log_frequency == 0:
          current_time = time.time()
          duration = current_time - self._start_time
          self._start_time = current_time

          loss_value = run_values.results
          examples_per_sec = FLAGS.log_frequency * FLAGS.batch_size / duration
          sec_per_batch = float(duration / FLAGS.log_frequency)

          format_str = ('%s: step %d, loss = %.2f (%.1f examples/sec; %.3f '
                        'sec/batch)')
          print(format_str % (datetime.now(), self._step, loss_value,
                              examples_per_sec, sec_per_batch))

    saver = tf.train.Saver()
    with tf.train.MonitoredTrainingSession(
        checkpoint_dir=FLAGS.train_dir,
        hooks=[tf.train.StopAtStepHook(last_step=FLAGS.max_steps),
               tf.train.NanTensorHook(loss),
               _LoggerHook()],
        config=tf.ConfigProto(
            log_device_placement=FLAGS.log_device_placement)) as mon_sess:
      while not mon_sess.should_stop():
        mon_sess.run(train_op)


def main(argv=None):  # pylint: disable=unused-argument
  cifar10.maybe_download_and_extract()
  if tf.gfile.Exists(FLAGS.train_dir):
    tf.gfile.DeleteRecursively(FLAGS.train_dir)
  tf.gfile.MakeDirs(FLAGS.train_dir)
  train()


if __name__ == '__main__':
  tf.app.run()
```
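`tf.train.MonitoredTrainingSession(checkpoint_dir=FLAGS.train_dir, ...)` already restores from the latest checkpoint in `train_dir`; the reason training always starts from scratch here is that `main()` deletes `train_dir` before calling `train()`. A hedged sketch of the change (the `resume` flag and helper name are hypothetical, not part of the tutorial):

```python
import os
import shutil

def prepare_train_dir(train_dir, resume=True):
    """Only wipe train_dir when a fresh run is requested, so that
    MonitoredTrainingSession can restore the latest checkpoint."""
    if not resume and os.path.exists(train_dir):
        shutil.rmtree(train_dir)  # stands in for tf.gfile.DeleteRecursively
    os.makedirs(train_dir, exist_ok=True)  # stands in for tf.gfile.MakeDirs
    return train_dir
```

In other words, drop (or guard) the `tf.gfile.DeleteRecursively(FLAGS.train_dir)` call in `main()` and the session will pick up from the last saved checkpoint.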
On Linux, Python's import of pdb fails: cmd has no Cmd attribute
Hoping someone can help.

```
Python 3.6.7 |Anaconda, Inc.| (default, Oct 23 2018, 19:16:44)
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/miniconda/envs/py36/lib/python3.6/site-packages/tensorflow/__init__.py", line 28, in <module>
    from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
  File "/miniconda/envs/py36/lib/python3.6/site-packages/tensorflow/python/__init__.py", line 63, in <module>
    from tensorflow.python.framework.framework_lib import *  # pylint: disable=redefined-builtin
  File "/miniconda/envs/py36/lib/python3.6/site-packages/tensorflow/python/framework/framework_lib.py", line 25, in <module>
    from tensorflow.python.framework.ops import Graph
  File "/miniconda/envs/py36/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 54, in <module>
    from tensorflow.python.platform import app
  File "/miniconda/envs/py36/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 23, in <module>
    from absl.app import run as _run
  File "/miniconda/envs/py36/lib/python3.6/site-packages/absl/app.py", line 35, in <module>
    import pdb
  File "/miniconda/envs/py36/lib/python3.6/pdb.py", line 136, in <module>
    class Pdb(bdb.Bdb, cmd.Cmd):
AttributeError: module 'cmd' has no attribute 'Cmd'
```
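The traceback shows the standard library `pdb` failing on `cmd.Cmd`, which almost always means a local file named `cmd.py` (or a `cmd/` package) on `sys.path` is shadowing the standard library `cmd` module. A quick check, assuming nothing in the current directory shadows it:

```python
import cmd

# If cmd.__file__ does not point into the standard library, a local cmd.py
# on sys.path is shadowing it; rename or delete that file.
print(cmd.__file__)
print(hasattr(cmd, "Cmd"))  # True once the real stdlib module is imported
```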
Problem when using create_pascal_tf_record.py!
This is the error I get when running create_pascal_tf_record.py:

```
D:\tensorflow\models\research\object_detection>python dataset_tools\create_pascal_tf_record.py --label_map=D:\tensorflow\pedestrain_train\data\label_map.pbtxt --data_dir=D:\pedestrain_data --year=VOC2012 --set=train --output_path=D:\pascal_train.record
Traceback (most recent call last):
  File "dataset_tools\create_pascal_tf_record.py", line 185, in <module>
    tf.app.run()
  File "C:\anaconda\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run
    _sys.exit(main(argv))
  File "dataset_tools\create_pascal_tf_record.py", line 167, in main
    examples_list = dataset_util.read_examples_list(examples_path)
  File "D:\ssd-detection\models-master\research\object_detection\utils\dataset_util.py", line 59, in read_examples_list
    lines = fid.readlines()
  File "C:\anaconda\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 188, in readlines
    self._preread_check()
  File "C:\anaconda\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 85, in _preread_check
    compat.as_bytes(self.__name), 1024 * 512, status)
  File "C:\anaconda\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 519, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.NotFoundError: NewRandomAccessFile failed to Create/Open: D:\pedestrain_data\VOC2012\ImageSets\Main\aeroplane_train.txt : \u03f5\u0373\udcd5\u04b2\udcbb\udcb5\udcbd\u05b8\udcb6\udca8\udcb5\udcc4\udcce\u013c\udcfe\udca1\udca3 ; No such file or directory
```

But my Main folder contains pedestrain_train.txt and pedestrain_val.txt, so why does it go looking for aeroplane_train.txt?
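As far as I know, the stock `create_pascal_tf_record.py` builds the split-file path with a hardcoded `aeroplane_` prefix (for the real VOC data any per-class list works, since each one names every image), which is why it looks for `aeroplane_train.txt`. A sketch of the path it constructs; the `prefix` parameter is my addition for illustration, not a real flag, so for a custom dataset you would edit that prefix in the script itself:

```python
import os

def examples_path(data_dir, year, split, prefix="aeroplane"):
    # Mirrors the path the script opens:
    #   <data_dir>/<year>/ImageSets/Main/<prefix>_<split>.txt
    return os.path.join(data_dir, year, "ImageSets", "Main",
                        "%s_%s.txt" % (prefix, split))

print(examples_path("D:/pedestrain_data", "VOC2012", "train"))
print(examples_path("D:/pedestrain_data", "VOC2012", "train", prefix="pedestrain"))
```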
TensorFlow autoencoder: placeholder error

```
import numpy as np
import tensorflow as tf


def xavier_init(fan_in, fan_out, constant=1):
    low = -constant * np.sqrt(6.0 / (fan_in + fan_out))
    high = constant * np.sqrt(6.0 / (fan_in + fan_out))
    return tf.random_uniform((fan_in, fan_out), minval=low, maxval=high,
                             dtype=tf.float32)


class AdditiveGaussionNoiseAutoencoder(object):
    def __init__(self, n_input, n_hidden, transfer_function=tf.nn.relu,
                 optimizer=tf.train.AdamOptimizer(), scale=0.1):
        self.n_input = n_input
        self.n_hidden = n_hidden
        self.transfer = transfer_function
        self.scale = tf.placeholder(tf.float32)
        self.training_scale = scale
        network_weights = self._initialize_weights()
        self.weights = network_weights

        self.x = tf.placeholder(tf.float32, [None, self.n_input])
        self.hidden = self.transfer(tf.add(tf.matmul(
            self.x + scale * tf.random_normal((n_input,)),
            self.weights['w1']), self.weights['b1']))
        self.reconstruction = tf.add(tf.matmul(self.hidden, self.weights['w2']),
                                     self.weights['b2'])

        self.cost = tf.sqrt(tf.reduce_mean(tf.pow(tf.subtract(
            self.reconstruction, self.x), 2.0)))
        self.optimizer = optimizer.minimize(self.cost)

        init = tf.global_variables_initializer()
        self.sess = tf.Session()
        self.sess.run(init)

    def _initialize_weights(self):
        all_weights = dict()
        all_weights['w1'] = tf.Variable(xavier_init(self.n_input, self.n_hidden))
        all_weights['b1'] = tf.Variable(tf.zeros([self.n_hidden], dtype=tf.float32))
        all_weights['w2'] = tf.Variable(tf.zeros([self.n_hidden, self.n_input], dtype=tf.float32))
        all_weights['b2'] = tf.Variable(tf.zeros([self.n_input], dtype=tf.float32))
        return all_weights

    def partial_fit(self, X):
        cost, opt = self.sess.run((self.cost, self.optimizer),
                                  feed_dict={self.x: X, self.scale: self.training_scale})
        return cost

    def calc_total_cost(self, X):
        return self.sess.run(self.cost,
                             feed_dict={self.x: X, self.scale: self.training_scale})

    def transform(self, X):
        return self.sess.run(self.hidden,
                             feed_dict={self.x: X, self.scale: self.training_scale})

    def generate(self, hidden=None):
        if hidden is None:
            hidden = np.random.normal(size=self.weights['b1'])
        return self.sess.run(self.reconstruction,
                             feed_dict={self.hidden: hidden})

    def reconstruct(self, X):
        return self.sess.run(self.reconstruction,
                             feed_dict={self.x: X, self.scale: self.training_scale})

    def getweights(self):
        return self.sess.run(self.weights['w1'])

    def getbiases(self):
        return self.sess.run(self.weights['b1'])
```

```
import numpy as np
import tensorflow as tf
from DSAE import AdditiveGaussionNoiseAutoencoder
import xlrd
import sklearn.preprocessing as prep

# Data loading; could be converted to csv files for easier handling, see ConvertData
train_input = "/Users/Patrick/Desktop/traffic_data/train_500010092_input.xls"
train_output = "/Users/Patrick/Desktop/traffic_data/train_500010092_output.xls"
test_input = "/Users/Patrick/Desktop/traffic_data/test_500010092_input.xls"
test_output = "/Users/Patrick/Desktop/traffic_data/test_500010092_output.xls"
book_train_input = xlrd.open_workbook(train_input, encoding_override='utf-8')
book_train_output = xlrd.open_workbook(train_output, encoding_override='utf-8')
book_test_input = xlrd.open_workbook(test_input, encoding_override='utf-8')
book_test_output = xlrd.open_workbook(test_output, encoding_override='utf-8')
sheet_train_input = book_train_input.sheet_by_index(0)
sheet_train_output = book_train_output.sheet_by_index(0)
sheet_test_input = book_test_input.sheet_by_index(0)
sheet_test_output = book_test_output.sheet_by_index(0)
data_train_input = np.asarray([sheet_train_input.row_values(i)
                               for i in range(2, sheet_train_input.nrows)])
data_train_output = np.asarray([sheet_train_output.row_values(i)
                                for i in range(2, sheet_train_output.ncols)])
data_test_input = np.asarray([sheet_test_input.row_values(i)
                              for i in range(2, sheet_test_input.nrows)])
data_test_output = np.asarray([sheet_test_output.row_values(i)
                               for i in range(2, sheet_test_output.ncols)])


def standard_scale(X_train, X_test):
    preprocessor = prep.StandardScaler().fit(X_train)
    X_train = preprocessor.transform(X_train)
    X_test = preprocessor.transform(X_test)
    return X_train, X_test


X_train, X_test = standard_scale(data_train_input, data_test_input)


def get_block_form_data(data, batch_size, k):
    start_index = k * batch_size
    return data[start_index:(start_index + batch_size)]


training_epochs = 20
batch_size = 288
n_samples = sheet_test_output.nrows
display_step = 1
stack_size = 3
hidden_size = [10, 8, 10]
sdae = []
for i in range(stack_size):
    if i == 0:
        ae = AdditiveGaussionNoiseAutoencoder(
            n_input=12, n_hidden=hidden_size[i],
            transfer_function=tf.nn.relu,
            optimizer=tf.train.AdamOptimizer(learning_rate=0.01),
            scale=0.01)
        ae._initialize_weights()
        sdae.append(ae)
    else:
        ae = AdditiveGaussionNoiseAutoencoder(
            n_input=hidden_size[i - 1], n_hidden=hidden_size[i],
            transfer_function=tf.nn.relu,
            optimizer=tf.train.AdamOptimizer(learning_rate=0.01),
            scale=0.01)
        ae._initialize_weights()
        sdae.append(ae)

W = []
b = []
hidden_feacture = []
X_train = np.array([0])
for j in range(stack_size):
    if j == 0:
        X_train = data_train_input
        X_test = data_test_input
    else:
        X_train_pre = X_train
        X_train = sdae[j - 1].transform(X_train_pre)
        print(X_train.shape)
        hidden_feacture.append(X_train)
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(n_samples / batch_size)
        for i in range(total_batch):
            batch_xs = get_block_form_data(X_train, batch_size, i)
            cost = sdae[j].partial_fit(batch_xs)
            avg_cost += cost / n_samples * batch_size
        if epoch % display_step == 0:
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost))
    weight = sdae[j].getweights()
    W.append(weight)
    print(np.shape(W))
    b.append(sdae[j].getbiases())
    print(np.shape(b))
```

It then fails with:

```
File "/Applications/PyCharm.app/Contents/helpers/pydev/pydev_run_in_console.py", line 53, in run_file
  pydev_imports.execfile(file, globals, locals)  # execute the script
File "/Applications/PyCharm.app/Contents/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
  exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/Users/Patrick/PycharmProjects/DSAE-SVM/DLmain.py", line 80, in <module>
  X_train = sdae[j-1].transform(X_train_pre)
File "/Users/Patrick/PycharmProjects/DSAE-SVM/DSAE.py", line 70, in transform
  feed_dict={self.x: X, self.scale: self.training_scale})
File "/Users/Patrick/anaconda3/envs/tensorflow/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 905, in run
  run_metadata_ptr)
File "/Users/Patrick/anaconda3/envs/tensorflow/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 1113, in _run
  str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (18143, 3) for Tensor 'Placeholder_1:0', which has shape '(?, 12)'
PyDev console: starting.
Python 3.4.5 |Continuum Analytics, Inc.| (default, Jul 2 2016, 17:47:57)
[GCC 4.2.1 Compatible Apple LLVM 4.2 (clang-425.0.28)] on darwin
```

I really don't know how to change the placeholder's shape. Could someone explain?
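The error message itself gives the answer: the first autoencoder's `x` placeholder was built as `(?, 12)` (hardcoded `n_input=12`) while the training data actually has 3 columns, so each layer's `n_input` must match the width of the data it is fed. A NumPy-only walk-through of the shapes through the stack (hidden sizes `[10, 8, 10]` as above, but with the first width taken from the data rather than a constant 12):

```python
import numpy as np

n_samples, n_features = 18143, 3      # the data shape reported by the error
hidden_size = [10, 8, 10]

X = np.zeros((n_samples, n_features))
sizes = [X.shape[1]] + hidden_size    # derive n_input from the data itself
for n_in, n_hid in zip(sizes[:-1], sizes[1:]):
    W = np.zeros((n_in, n_hid))
    X = X @ W                         # each layer consumes the previous layer's output
print(X.shape)                        # (18143, 10)
```

So build the first `AdditiveGaussionNoiseAutoencoder` with `n_input=data_train_input.shape[1]` instead of `n_input=12`.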
Error when running the MNIST example bundled with tensorflow
I downloaded the package from GitHub and ran fully_connected_feed.py without changing any code:

```
/tensorflow-master/tensorflow/examples/tutorials/mnist$ python fully_connected_feed.py
/usr/local/lib/python2.7/dist-packages/sklearn/cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
  "This module will be removed in 0.20.", DeprecationWarning)
Traceback (most recent call last):
  File "fully_connected_feed.py", line 277, in <module>
    tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
TypeError: run() got an unexpected keyword argument 'argv'
```
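This error suggests the installed TensorFlow predates the `argv` parameter of `tf.app.run()`, while the example code from master expects a newer release; upgrading TensorFlow is the direct fix. Alternatively, since the script already parses its flags with argparse, one workaround is to bypass `tf.app.run` entirely; a sketch (the flag name comes from the example script, the sample argv list is made up):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--data_dir', type=str,
                    default='/tmp/tensorflow/mnist/input_data')
FLAGS, unparsed = parser.parse_known_args(['--data_dir', '/data/mnist',
                                           '--fake_data'])

print(FLAGS.data_dir)  # /data/mnist
print(unparsed)        # ['--fake_data'], left over for sys.argv-style passthrough
# then call main([sys.argv[0]] + unparsed) directly instead of tf.app.run(...)
```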
PyCharm run finishes immediately with "Process finished with exit code 0"

```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import csv
import os.path
import time

import numpy as np
import tensorflow as tf

import gpr
import load_dataset
import nngp

tf.logging.set_verbosity(tf.logging.INFO)

flags = tf.app.flags
FLAGS = flags.FLAGS

flags.DEFINE_string('hparams', '',
                    'Comma separated list of name=value hyperparameter pairs to'
                    'override the default setting.')
flags.DEFINE_string('experiment_dir', '/tmp/nngp',
                    'Directory to put the experiment results.')
flags.DEFINE_string('grid_path', 'pythonplace/nngp/grid_data',
                    'Directory to put or find the training data.')
flags.DEFINE_integer('num_train', 1000, 'Number of training data.')
flags.DEFINE_integer('num_eval', 1000,
                     'Number of evaluation data. Use 10_000 for full eval')
flags.DEFINE_integer('seed', 1234, 'Random number seed for data shuffling')
flags.DEFINE_boolean('save_kernel', False, 'Save Kernel do disk')
flags.DEFINE_string('dataset', 'mnist', 'Which dataset to use ["mnist"]')
flags.DEFINE_boolean('use_fixed_point_norm', False,
                     'Normalize input variance to fixed point variance')
flags.DEFINE_integer('n_gauss', 501,
                     'Number of gaussian integration grid. Choose odd integer.')
flags.DEFINE_integer('n_var', 501, 'Number of variance grid points.')
flags.DEFINE_integer('n_corr', 500, 'Number of correlation grid points.')
flags.DEFINE_integer('max_var', 100, 'Max value for variance grid.')
flags.DEFINE_integer('max_gauss', 10, 'Range for gaussian integration.')


def set_default_hparams():
  return tf.contrib.training.HParams(
      nonlinearity='tanh', weight_var=1.3, bias_var=0.2, depth=2)


def do_eval(sess, model, x_data, y_data, save_pred=False):
  """Run evaluation."""
  gp_prediction, stability_eps = model.predict(x_data, sess)
  pred_1 = np.argmax(gp_prediction, axis=1)
  accuracy = np.sum(pred_1 == np.argmax(y_data, axis=1)) / float(len(y_data))
  mse = np.mean(np.mean((gp_prediction - y_data)**2, axis=1))
  pred_norm = np.mean(np.linalg.norm(gp_prediction, axis=1))
  tf.logging.info('Accuracy: %.4f' % accuracy)
  tf.logging.info('MSE: %.8f' % mse)
  if save_pred:
    with tf.gfile.Open(
        os.path.join(FLAGS.experiment_dir, 'gp_prediction_stats.npy'), 'w') as f:
      np.save(f, gp_prediction)
  return accuracy, mse, pred_norm, stability_eps


def run_nngp_eval(hparams, run_dir):
  """Runs experiments."""
  tf.gfile.MakeDirs(run_dir)
  # Write hparams to experiment directory.
  with tf.gfile.GFile(run_dir + '/hparams', mode='w') as f:
    f.write(hparams.to_proto().SerializeToString())

  tf.logging.info('Starting job.')
  tf.logging.info('Hyperparameters')
  tf.logging.info('---------------------')
  tf.logging.info(hparams)
  tf.logging.info('---------------------')
  tf.logging.info('Loading data')

  # Get the sets of images and labels for training, validation, and
  # test on dataset.
  if FLAGS.dataset == 'mnist':
    (train_image, train_label, valid_image, valid_label, test_image,
     test_label) = load_dataset.load_mnist(
         num_train=FLAGS.num_train,
         mean_subtraction=True,
         random_roated_labels=False)
  else:
    raise NotImplementedError

  tf.logging.info('Building Model')

  if hparams.nonlinearity == 'tanh':
    nonlin_fn = tf.tanh
  elif hparams.nonlinearity == 'relu':
    nonlin_fn = tf.nn.relu
  else:
    raise NotImplementedError

  with tf.Session() as sess:
    # Construct NNGP kernel
    nngp_kernel = nngp.NNGPKernel(
        depth=hparams.depth,
        weight_var=hparams.weight_var,
        bias_var=hparams.bias_var,
        nonlin_fn=nonlin_fn,
        grid_path=FLAGS.grid_path,
        n_gauss=FLAGS.n_gauss,
        n_var=FLAGS.n_var,
        n_corr=FLAGS.n_corr,
        max_gauss=FLAGS.max_gauss,
        max_var=FLAGS.max_var,
        use_fixed_point_norm=FLAGS.use_fixed_point_norm)
    input("hello")
    # Construct Gaussian Process Regression model
    model = gpr.GaussianProcessRegression(
        train_image, train_label, kern=nngp_kernel)

    start_time = time.time()
    tf.logging.info('Training')

    # For large number of training points, we do not evaluate on full set to
    # save on training evaluation time.
    if FLAGS.num_train <= 5000:
      acc_train, mse_train, norm_train, final_eps = do_eval(
          sess, model, train_image[:FLAGS.num_eval],
          train_label[:FLAGS.num_eval])
      tf.logging.info('Evaluation of training set (%d examples) took '
                      '%.3f secs' % (
                          min(FLAGS.num_train, FLAGS.num_eval),
                          time.time() - start_time))
    else:
      acc_train, mse_train, norm_train, final_eps = do_eval(
          sess, model, train_image[:1000], train_label[:1000])
      tf.logging.info('Evaluation of training set (%d examples) took '
                      '%.3f secs' % (1000, time.time() - start_time))

    start_time = time.time()
    tf.logging.info('Validation')
    acc_valid, mse_valid, norm_valid, _ = do_eval(
        sess, model, valid_image[:FLAGS.num_eval],
        valid_label[:FLAGS.num_eval])
    tf.logging.info('Evaluation of valid set (%d examples) took %.3f secs' % (
        FLAGS.num_eval, time.time() - start_time))

    start_time = time.time()
    tf.logging.info('Test')
    acc_test, mse_test, norm_test, _ = do_eval(
        sess, model, test_image[:FLAGS.num_eval],
        test_label[:FLAGS.num_eval], save_pred=False)
    tf.logging.info('Evaluation of test set (%d examples) took %.3f secs' % (
        FLAGS.num_eval, time.time() - start_time))

  metrics = {
      'train_acc': float(acc_train),
      'train_mse': float(mse_train),
      'train_norm': float(norm_train),
      'valid_acc': float(acc_valid),
      'valid_mse': float(mse_valid),
      'valid_norm': float(norm_valid),
      'test_acc': float(acc_test),
      'test_mse': float(mse_test),
      'test_norm': float(norm_test),
      'stability_eps': float(final_eps),
  }

  record_results = [
      FLAGS.num_train, hparams.nonlinearity, hparams.weight_var,
      hparams.bias_var, hparams.depth, acc_train, acc_valid, acc_test,
      mse_train, mse_valid, mse_test, final_eps
  ]
  if nngp_kernel.use_fixed_point_norm:
    metrics['var_fixed_point'] = float(nngp_kernel.var_fixed_point_np[0])
    record_results.append(nngp_kernel.var_fixed_point_np[0])

  # Store data
  result_file = os.path.join(run_dir, 'results.csv')
  with tf.gfile.Open(result_file, 'a') as f:
    filewriter = csv.writer(f)
    filewriter.writerow(record_results)

  return metrics


if __name__ == '__main__':
  # tf.app.run(main)
  hparams = set_default_hparams().parse(FLAGS.hparams)
  print("hparams:", hparams)
  x = FLAGS.experiment_dir
  print(x)
  run_nngp_eval(hparams, x)
```
ValueError: None values not supported.
```
Traceback (most recent call last):
  File "document_summarizer_training_testing.py", line 296, in <module>
    tf.app.run()
  File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 48,
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "document_summarizer_training_testing.py", line 291, in main
    train()
  File "document_summarizer_training_testing.py", line 102, in train
    model = MY_Model(sess, len(vocab_dict)-2)
  File "/home/lyliu/Refresh-master-self-attention/my_model.py", line 70, in __init__
    self.train_op_policynet_expreward = model_docsum.train_neg_expectedreward(self.rewardweighted_cross_entropy_loss_multi
  File "/home/lyliu/Refresh-master-self-attention/model_docsum.py", line 835, in train_neg_expectedreward
    grads_and_vars_capped_norm = [(tf.clip_by_norm(grad, 5.0), var) for grad, var in grads_and_vars]
  File "/home/lyliu/Refresh-master-self-attention/model_docsum.py", line 835, in <listcomp>
    grads_and_vars_capped_norm = [(tf.clip_by_norm(grad, 5.0), var) for grad, var in grads_and_vars]
  File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/ops/clip_ops.py", line 107,
    t = ops.convert_to_tensor(t, name="t")
  File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 676
    as_ref=False)
  File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 741
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py",
    return constant(v, dtype=dtype, name=name)
  File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py",
    tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
  File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py",
    raise ValueError("None values not supported.")
ValueError: None values not supported.
```

Using the GPU build of TensorFlow, version 1.2.0. I'm hoping to find either a fix or the cause of this error.
Training mask rcnn with TensorFlow always fails at the training command and I'm stuck, please help
Training mask rcnn with TensorFlow always fails when I run the training command. The command is:

```
python model_main.py --model_dir=C:/Users/zoyiJiang/Desktop/mask_rcnn_test-master/training --pipeline_config_path=C:/Users/zoyiJiang/Desktop/mask_rcnn_test-master/training/mask_rcnn_inception_v2_coco.config
```

The error output is:

```
WARNING:tensorflow:Forced number of epochs for all eval validations to be 1.
WARNING:tensorflow:Expected number of evaluation epochs is 1, but instead encountered `eval_on_train_input_config.num_epochs` = 0. Overwriting `num_epochs` to 1.
WARNING:tensorflow:Estimator's model_fn (<function create_model_fn.<locals>.model_fn at 0x000001C1EA335C80>) includes params argument, but params are not passed to Estimator.
WARNING:tensorflow:num_readers has been reduced to 1 to match input file shards.
Traceback (most recent call last):
  File "model_main.py", line 109, in <module>
    tf.app.run()
  File "E:\Python3.6\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run
    _sys.exit(main(argv))
  File "model_main.py", line 105, in main
    tf.estimator.train_and_evaluate(estimator, train_spec, eval_specs[0])
  File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\training.py", line 439, in train_and_evaluate
    executor.run()
  File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\training.py", line 518, in run
    self.run_local()
  File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\training.py", line 650, in run_local
    hooks=train_hooks)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\estimator.py", line 363, in train
    loss = self._train_model(input_fn, hooks, saving_listeners)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\estimator.py", line 843, in _train_model
    return self._train_model_default(input_fn, hooks, saving_listeners)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\estimator.py", line 853, in _train_model_default
    input_fn, model_fn_lib.ModeKeys.TRAIN))
  File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\estimator.py", line 691, in _get_features_and_labels_from_input_fn
    result = self._call_input_fn(input_fn, mode)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\estimator.py", line 798, in _call_input_fn
    return input_fn(**kwargs)
  File "D:\Tensorflow\tf\models\research\object_detection\inputs.py", line 525, in _train_input_fn
    batch_size=params['batch_size'] if params else train_config.batch_size)
  File "D:\Tensorflow\tf\models\research\object_detection\builders\dataset_builder.py", line 149, in build
    dataset = data_map_fn(process_fn, num_parallel_calls=num_parallel_calls)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 853, in map
    return ParallelMapDataset(self, map_func, num_parallel_calls)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 1870, in __init__
    super(ParallelMapDataset, self).__init__(input_dataset, map_func)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 1839, in __init__
    self._map_func.add_to_graph(ops.get_default_graph())
  File "E:\Python3.6\lib\site-packages\tensorflow\python\framework\function.py", line 484, in add_to_graph
    self._create_definition_if_needed()
  File "E:\Python3.6\lib\site-packages\tensorflow\python\framework\function.py", line 319, in _create_definition_if_needed
    self._create_definition_if_needed_impl()
  File "E:\Python3.6\lib\site-packages\tensorflow\python\framework\function.py", line 336, in _create_definition_if_needed_impl
    outputs = self._func(*inputs)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 1804, in tf_map_func
    ret = map_func(nested_args)
  File "D:\Tensorflow\tf\models\research\object_detection\builders\dataset_builder.py", line 130, in process_fn
    processed_tensors = transform_input_data_fn(processed_tensors)
  File "D:\Tensorflow\tf\models\research\object_detection\inputs.py", line 515, in transform_and_pad_input_data_fn
    tensor_dict=transform_data_fn(tensor_dict),
  File "D:\Tensorflow\tf\models\research\object_detection\inputs.py", line 129, in transform_input_data
    tf.expand_dims(tf.to_float(image), axis=0))
  File "D:\Tensorflow\tf\models\research\object_detection\meta_architectures\faster_rcnn_meta_arch.py", line 543, in preprocess
    parallel_iterations=self._parallel_iterations)
  File "D:\Tensorflow\tf\models\research\object_detection\utils\shape_utils.py", line 237, in static_or_dynamic_map_fn
    outputs = [fn(arg) for arg in tf.unstack(elems)]
  File "D:\Tensorflow\tf\models\research\object_detection\utils\shape_utils.py", line 237, in <listcomp>
    outputs = [fn(arg) for arg in tf.unstack(elems)]
  File "D:\Tensorflow\tf\models\research\object_detection\core\preprocessor.py", line 2264, in resize_to_range
    lambda: _resize_portrait_image(image))
  File "E:\Python3.6\lib\site-packages\tensorflow\python\util\deprecation.py", line 432, in new_func
    return func(*args, **kwargs)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2063, in cond
    orig_res_t, res_t = context_t.BuildCondBranch(true_fn)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 1913, in BuildCondBranch
    original_result = fn()
  File "D:\Tensorflow\tf\models\research\object_detection\core\preprocessor.py", line 2263, in <lambda>
    lambda: _resize_landscape_image(image),
  File "D:\Tensorflow\tf\models\research\object_detection\core\preprocessor.py", line 2245, in _resize_landscape_image
    align_corners=align_corners, preserve_aspect_ratio=True)
TypeError: resize_images() got an unexpected keyword argument 'preserve_aspect_ratio'
```

Going by the last line, resize_images() was passed an argument it does not accept. I'm on TensorFlow 1.8 with python 3.6, using the latest TensorFlow models-master download.
我的mnist运行报错,请问是那出现问题了?
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse  # parses the training/eval command-line arguments
import sys

from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf

FLAGS = None


def main(_):
  # Import data
  mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)

  # Create the model
  x = tf.placeholder(tf.float32, [None, 784])  # placeholder: a formal input, fed a concrete value at run time
  W = tf.Variable(tf.zeros([784, 10]))  # tf.zeros: every entry initialized to 0
  b = tf.Variable(tf.zeros([10]))
  y = tf.matmul(x, W) + b  # unnormalized score for each class

  # Define loss and optimizer
  y_ = tf.placeholder(tf.float32, [None, 10])

  # The raw formulation of cross-entropy,
  #
  #   tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(tf.nn.softmax(y)),
  #                                 reduction_indices=[1]))
  #
  # can be numerically unstable.
  #
  # So here we use tf.nn.softmax_cross_entropy_with_logits on the raw
  # outputs of 'y', and then average across the batch.
  cross_entropy = tf.reduce_mean(
      tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
  train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

  sess = tf.InteractiveSession()
  tf.global_variables_initializer().run()

  # Train
  for _ in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

  # Test trained model
  correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
  accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
  print(sess.run(accuracy, feed_dict={x: mnist.test.images,
                                      y_: mnist.test.labels}))


if __name__ == '__main__':
  parser = argparse.ArgumentParser()
  parser.add_argument('--data_dir', type=str,
                      default='/tmp/tensorflow/mnist/input_data',
                      help='Directory for storing input data')
  FLAGS, unparsed = parser.parse_known_args()
  tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
```

The error output follows (WinError 10060 is a connection timeout):

```
TimeoutError                              Traceback (most recent call last)
~\Anaconda3\envs\tensorflow\lib\urllib\request.py in do_open(self, http_class, req, **http_conn_args)
   1317                 h.request(req.get_method(), req.selector, req.data, headers,
-> 1318                           encode_chunked=req.has_header('Transfer-encoding'))
   1319             except OSError as err: # timeout error

[... intermediate http.client and ssl handshake frames elided ...]

TimeoutError: [WinError 10060] 由于连接方在一段时间后没有正确答复或连接的主机没有反应,连接尝试失败。

During handling of the above exception, another exception occurred:

URLError                                  Traceback (most recent call last)
<ipython-input-1-eaf9732201f9> in <module>()
     57                       help='Directory for storing input data')
     58     FLAGS, unparsed = parser.parse_known_args()
---> 59     tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)

~\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\platform\app.py in run(main, argv)
     46   # Call the main function, passing through any arguments
     47   # to the final program.
---> 48   _sys.exit(main(_sys.argv[:1] + flags_passthrough))

<ipython-input-1-eaf9732201f9> in main(_)
     15 def main(_):
     16   # Import data
---> 17   mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)

~\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py in read_data_sets(train_dir, fake_data, one_hot, dtype, reshape, validation_size, seed)
    239     local_file = base.maybe_download(TRAIN_LABELS, train_dir,
--> 240                                      SOURCE_URL + TRAIN_LABELS)

~\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\base.py in urlretrieve_with_retry(url, filename)
    189   def urlretrieve_with_retry(url, filename=None):
--> 190     return urllib.request.urlretrieve(url, filename)

[... urllib.request open/retry frames elided ...]

URLError: <urlopen error [WinError 10060] 由于连接方在一段时间后没有正确答复或连接的主机没有反应,连接尝试失败。>
```
ValueError: No data files found in satellite/data\satellite_train_*.tfrecord
While following the book's "build your own image-recognition model" project I hit this error. It says the data files cannot be found, even though the file paths and the data itself look fine:

```
D:\Anaconda\anaconda\envs\tensorflow\python.exe D:/PyCharm/PycharmProjects/chapter_3/slim/train_image_classifier.py --train_dir=satellite/train_dir --dataset_name=satellite --dataset_split_name=train --dataset_dir=satellite/data --model_name=inception_v3 --checkpoint_path=satellite/pretrained/inception_v3.ckpt --checkpoint_exclude_scopes=InceptionV3/Logits,InceptionV3/AuxLogits --trainable_scopes=InceptionV3/Logits,InceptionV3/AuxLogits --max_number_of_steps=100000 --batch_size=32 --learning_rate=0.001 --learning_rate_decay_type=fixed --save_interval_secs=300 --save_summaries_secs=2 --log_every_n_steps=10 --optimizer=rmsprop --weight_decay=0.00004
WARNING:tensorflow:From D:/PyCharm/PycharmProjects/chapter_3/slim/train_image_classifier.py:397: create_global_step (from tensorflow.contrib.framework.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Please switch to tf.train.create_global_step
Traceback (most recent call last):
  File "D:/PyCharm/PycharmProjects/chapter_3/slim/train_image_classifier.py", line 572, in <module>
    tf.app.run()
  File "D:\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\platform\app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "D:/PyCharm/PycharmProjects/chapter_3/slim/train_image_classifier.py", line 430, in main
    common_queue_min=10 * FLAGS.batch_size)
  File "D:\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\contrib\slim\python\slim\data\dataset_data_provider.py", line 94, in __init__
    scope=scope)
  File "D:\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\contrib\slim\python\slim\data\parallel_reader.py", line 238, in parallel_read
    data_files = get_data_files(data_sources)
  File "D:\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\contrib\slim\python\slim\data\parallel_reader.py", line 311, in get_data_files
    raise ValueError('No data files found in %s' % (data_sources,))
ValueError: No data files found in satellite/data\satellite_train_*.tfrecord
```
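The wildcard slim expands here is `satellite/data\satellite_train_*.tfrecord`; if the glob matches nothing, training aborts with exactly this ValueError. The mixed `/` and `\` separators are normally harmless on Windows, so an empty match usually means the `.tfrecord` filenames in `satellite/data` do not actually fit the pattern (wrong prefix, wrong split name, or files left in a subdirectory). A quick check using the same expansion slim performs (the helper name is mine):

```python
import glob
import os

def matching_tfrecords(dataset_dir, dataset_name='satellite', split_name='train'):
    """Expand the same wildcard slim's get_data_files() uses and return
    the matching shard files. An empty list reproduces the
    'No data files found' failure before a long training run starts."""
    pattern = os.path.join(dataset_dir,
                           '%s_%s_*.tfrecord' % (dataset_name, split_name))
    return sorted(glob.glob(pattern))
```

Run it with `dataset_dir='satellite/data'` from the same working directory as the training command; if it returns `[]`, compare the printed pattern against the real filenames.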
Running a Python batch file fails with a missing "keys" attribute error. I found the source line but cannot work out the cause. Any help appreciated.
When training on my own dataset I use retrain.py, launched from a batch file. The batch file is:

```
python D:\python\Anaconda\envs\tensorflow\tensorflow-master\tensorflow\examples\image_retraining\retrain.py ^
--bottleneck_dir bottleneck ^
--how_many_training_steps 200 ^
--model_dir D:\python\Anaconda\envs\tensorflow\inception_model ^
--output_graph output_graph.pb ^
--output_labels output_labels.txt ^
--image_dir data/train/
pause
```

Running it reports:

```
File "D:\python\Anaconda\envs\tensorflow\tensorflow-master\tensorflow\examples\image_retraining\retrain.py", line 1313, in <module>
  tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
File "D:\python\Anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run
  _sys.exit(main(argv))
File "D:\python\Anaconda\envs\tensorflow\tensorflow-master\tensorflow\examples\image_retraining\retrain.py", line 982, in main
  class_count = len(image_lists.keys())
AttributeError: 'NoneType' object has no attribute 'keys'
```

The message says line 982 is calling .keys() on None, but I don't know how to fix it. retrain.py is too long to paste, so I can only post the link.
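In retrain.py, `image_lists` comes from `create_image_lists(FLAGS.image_dir, ...)`, which produces nothing useful when the image directory does not exist or has no per-class subfolders containing images. Since the batch file passes the relative path `data/train/`, the most likely cause is that the script runs from a working directory where that path does not resolve. A rough pre-check mirroring what `create_image_lists` expects (the helper name and extension list are my own, not from retrain.py):

```python
import os

def usable_classes(image_dir, extensions=('.jpg', '.jpeg', '.png', '.gif', '.bmp')):
    """Return the class sub-directories under image_dir that contain at
    least one image file. An empty result (or a missing image_dir)
    matches the conditions under which retrain.py ends up calling
    .keys() on None."""
    if not os.path.isdir(image_dir):
        return []
    classes = []
    for name in sorted(os.listdir(image_dir)):
        sub = os.path.join(image_dir, name)
        if os.path.isdir(sub) and any(
                f.lower().endswith(extensions) for f in os.listdir(sub)):
            classes.append(name)
    return classes
```

Run it with the same working directory the batch file uses; if it prints `[]`, switch `--image_dir` to an absolute path (e.g. `D:\...\data\train`).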
A problem using TensorFlow's model_main.py
Straight to the output:

```
D:\tensorflow\models\research\object_detection>python model_main.py --pipeline_config_path=E:\python_demo\pedestrian_demo\pedestrian_train\models\pipeline.config --model_dir=E:\python_demo\pedestrian_demo\pedestrian_train\models\train --num_train_steps=5000 --sample_1_of_n_eval_examples=1 --alsologstderr
Traceback (most recent call last):
  File "model_main.py", line 109, in <module>
    tf.app.run()
  File "C:\anaconda\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run
    _sys.exit(main(argv))
  File "model_main.py", line 71, in main
    FLAGS.sample_1_of_n_eval_on_train_examples))
  File "D:\ssd-detection\models-master\research\object_detection\model_lib.py", line 589, in create_estimator_and_inputs
    pipeline_config_path, config_override=config_override)
  File "D:\ssd-detection\models-master\research\object_detection\utils\config_util.py", line 98, in get_configs_from_pipeline_file
    text_format.Merge(proto_str, pipeline_config)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 574, in Merge
    descriptor_pool=descriptor_pool)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 631, in MergeLines
    return parser.MergeLines(lines, message)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 654, in MergeLines
    self._ParseOrMerge(lines, message)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 676, in _ParseOrMerge
    self._MergeField(tokenizer, message)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 801, in _MergeField
    merger(tokenizer, message, field)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 875, in _MergeMessageField
    self._MergeField(tokenizer, sub_message)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 801, in _MergeField
    merger(tokenizer, message, field)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 875, in _MergeMessageField
    self._MergeField(tokenizer, sub_message)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 801, in _MergeField
    merger(tokenizer, message, field)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 875, in _MergeMessageField
    self._MergeField(tokenizer, sub_message)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 768, in _MergeField
    (message_descriptor.full_name, name))
google.protobuf.text_format.ParseError: 35:7 : Message type "object_detection.protos.SsdFeatureExtractor" has no field named "batch_norm_trainable".
```

How do I fix this error? Any guidance appreciated.
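The parse error points at line 35, column 7 of pipeline.config: the config sets `batch_norm_trainable`, but the `SsdFeatureExtractor` message in the object_detection protos you are running no longer defines that field (it was removed in later releases of the API). The usual fix is simply to delete that line from pipeline.config. A throwaway helper (my own, not part of the API) that strips such removed fields from a config text:

```python
def strip_removed_fields(config_text, removed=('batch_norm_trainable',)):
    """Drop any line that sets a proto field deleted from newer
    object_detection releases, leaving the rest of pipeline.config
    untouched. Back up the file before overwriting it."""
    kept = [line for line in config_text.splitlines()
            if not any(line.strip().startswith(field) for field in removed)]
    return '\n'.join(kept) + '\n'
```

Read pipeline.config, pass its text through this, write it back, then rerun model_main.py; if the parser then complains about another unknown field, add that name to `removed` as well.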
How do I read one camera frame with OpenCV in PyQt5 and feed it into the network?
我想通过pyqt5制作一个UI界面封装google object detection api的示例代码,源代码中是识别单张图片,我想通过摄像头输入一帧的图像然后进行识别显示。整个程序如下: ``` # coding:utf-8 ''' V3.0A版本,尝试实现摄像头识别 ''' import numpy as np import cv2 import os import os.path import six.moves.urllib as urllib import sys import tarfile import tensorflow as tf import zipfile import pylab from distutils.version import StrictVersion from collections import defaultdict from io import StringIO from matplotlib import pyplot as plt from PIL import Image from PyQt5 import QtCore, QtGui, QtWidgets from PyQt5.QtWidgets import * from PyQt5.QtCore import * from PyQt5.QtGui import * class UiForm(): openfile_name_pb = '' openfile_name_pbtxt = '' openpic_name = '' num_class = 0 def setupUi(self, Form): Form.setObjectName("Form") Form.resize(600, 690) Form.setMinimumSize(QtCore.QSize(600, 690)) Form.setMaximumSize(QtCore.QSize(600, 690)) self.frame = QtWidgets.QFrame(Form) self.frame.setGeometry(QtCore.QRect(20, 20, 550, 100)) self.frame.setFrameShape(QtWidgets.QFrame.StyledPanel) self.frame.setFrameShadow(QtWidgets.QFrame.Raised) self.frame.setObjectName("frame") self.horizontalLayout_2 = QtWidgets.QHBoxLayout(self.frame) self.horizontalLayout_2.setObjectName("horizontalLayout_2") # 加载模型文件按钮 self.btn_add_file = QtWidgets.QPushButton(self.frame) self.btn_add_file.setObjectName("btn_add_file") self.horizontalLayout_2.addWidget(self.btn_add_file) # 加载pbtxt文件按钮 self.btn_add_pbtxt = QtWidgets.QPushButton(self.frame) self.btn_add_pbtxt.setObjectName("btn_add_pbtxt") self.horizontalLayout_2.addWidget(self.btn_add_pbtxt) # 输入检测类别数目按钮 self.btn_enter = QtWidgets.QPushButton(self.frame) self.btn_enter.setObjectName("btn_enter") self.horizontalLayout_2.addWidget(self.btn_enter) # 打开摄像头 self.btn_opencam = QtWidgets.QPushButton(self.frame) self.btn_opencam.setObjectName("btn_objdec") self.horizontalLayout_2.addWidget(self.btn_opencam) # 开始识别按钮 self.btn_objdec = QtWidgets.QPushButton(self.frame) self.btn_objdec.setObjectName("btn_objdec") 
self.horizontalLayout_2.addWidget(self.btn_objdec) # 退出按钮 self.btn_exit = QtWidgets.QPushButton(self.frame) self.btn_exit.setObjectName("btn_exit") self.horizontalLayout_2.addWidget(self.btn_exit) # 显示识别后的画面 self.lab_rawimg_show = QtWidgets.QLabel(Form) self.lab_rawimg_show.setGeometry(QtCore.QRect(50, 140, 500, 500)) self.lab_rawimg_show.setMinimumSize(QtCore.QSize(500, 500)) self.lab_rawimg_show.setMaximumSize(QtCore.QSize(500, 500)) self.lab_rawimg_show.setObjectName("lab_rawimg_show") self.lab_rawimg_show.setStyleSheet(("border:2px solid red")) self.retranslateUi(Form) # 这里将按钮和定义的动作相连,通过click信号连接openfile槽? self.btn_add_file.clicked.connect(self.openpb) # 用于打开pbtxt文件 self.btn_add_pbtxt.clicked.connect(self.openpbtxt) # 用于用户输入类别数 self.btn_enter.clicked.connect(self.enter_num_cls) # 打开摄像头 self.btn_opencam.clicked.connect(self.opencam) # 开始识别 # ~ self.btn_objdec.clicked.connect(self.object_detection) # 这里是将btn_exit按钮和Form窗口相连,点击按钮发送关闭窗口命令 self.btn_exit.clicked.connect(Form.close) QtCore.QMetaObject.connectSlotsByName(Form) def retranslateUi(self, Form): _translate = QtCore.QCoreApplication.translate Form.setWindowTitle(_translate("Form", "目标检测")) self.btn_add_file.setText(_translate("Form", "加载模型文件")) self.btn_add_pbtxt.setText(_translate("Form", "加载pbtxt文件")) self.btn_enter.setText(_translate("From", "指定识别类别数")) self.btn_opencam.setText(_translate("Form", "打开摄像头")) self.btn_objdec.setText(_translate("From", "开始识别")) self.btn_exit.setText(_translate("Form", "退出")) self.lab_rawimg_show.setText(_translate("Form", "识别效果")) def openpb(self): global openfile_name_pb openfile_name_pb, _ = QFileDialog.getOpenFileName(self.btn_add_file,'选择pb文件','/home/kanghao/','pb_files(*.pb)') print('加载模型文件地址为:' + str(openfile_name_pb)) def openpbtxt(self): global openfile_name_pbtxt openfile_name_pbtxt, _ = QFileDialog.getOpenFileName(self.btn_add_pbtxt,'选择pbtxt文件','/home/kanghao/','pbtxt_files(*.pbtxt)') print('加载标签文件地址为:' + str(openfile_name_pbtxt)) def opencam(self): self.camcapture 
= cv2.VideoCapture(0) self.timer = QtCore.QTimer() self.timer.start() self.timer.setInterval(100) # 0.1s刷新一次 self.timer.timeout.connect(self.camshow) def camshow(self): global camimg _ , camimg = self.camcapture.read() print(_) camimg = cv2.resize(camimg, (512, 512)) camimg = cv2.cvtColor(camimg, cv2.COLOR_BGR2RGB) print(type(camimg)) #strcamimg = camimg.tostring() showImage = QtGui.QImage(camimg.data, camimg.shape[1], camimg.shape[0], QtGui.QImage.Format_RGB888) self.lab_rawimg_show.setPixmap(QtGui.QPixmap.fromImage(showImage)) def enter_num_cls(self): global num_class num_class, okPressed = QInputDialog.getInt(self.btn_enter,'指定训练类别数','你的目标有多少类?',1,1,28,1) if okPressed: print('识别目标总类为:' + str(num_class)) def img2pixmap(self, image): Y, X = image.shape[:2] self._bgra = np.zeros((Y, X, 4), dtype=np.uint8, order='C') self._bgra[..., 0] = image[..., 2] self._bgra[..., 1] = image[..., 1] self._bgra[..., 2] = image[..., 0] qimage = QtGui.QImage(self._bgra.data, X, Y, QtGui.QImage.Format_RGB32) pixmap = QtGui.QPixmap.fromImage(qimage) return pixmap def object_detection(self): sys.path.append("..") from object_detection.utils import ops as utils_ops if StrictVersion(tf.__version__) < StrictVersion('1.9.0'): raise ImportError('Please upgrade your TensorFlow installation to v1.9.* or later!') from utils import label_map_util from utils import visualization_utils as vis_util # Path to frozen detection graph. This is the actual model that is used for the object detection. PATH_TO_FROZEN_GRAPH = openfile_name_pb # List of the strings that is used to add correct label for each box. 
PATH_TO_LABELS = openfile_name_pbtxt NUM_CLASSES = num_class detection_graph = tf.Graph() with detection_graph.as_default(): od_graph_def = tf.GraphDef() with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid: serialized_graph = fid.read() od_graph_def.ParseFromString(serialized_graph) tf.import_graph_def(od_graph_def, name='') category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True) def load_image_into_numpy_array(image): (im_width, im_height) = image.size return np.array(image.getdata()).reshape( (im_height, im_width, 3)).astype(np.uint8) # For the sake of simplicity we will use only 2 images: # image1.jpg # image2.jpg # If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS. TEST_IMAGE_PATHS = camimg print(TEST_IMAGE_PATHS) # Size, in inches, of the output images. IMAGE_SIZE = (12, 8) def run_inference_for_single_image(image, graph): with graph.as_default(): with tf.Session() as sess: # Get handles to input and output tensors ops = tf.get_default_graph().get_operations() all_tensor_names = {output.name for op in ops for output in op.outputs} tensor_dict = {} for key in [ 'num_detections', 'detection_boxes', 'detection_scores', 'detection_classes', 'detection_masks' ]: tensor_name = key + ':0' if tensor_name in all_tensor_names: tensor_dict[key] = tf.get_default_graph().get_tensor_by_name( tensor_name) if 'detection_masks' in tensor_dict: # The following processing is only for single image detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0]) detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0]) # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size. 
real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32) detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1]) detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1]) detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks( detection_masks, detection_boxes, image.shape[0], image.shape[1]) detection_masks_reframed = tf.cast( tf.greater(detection_masks_reframed, 0.5), tf.uint8) # Follow the convention by adding back the batch dimension tensor_dict['detection_masks'] = tf.expand_dims( detection_masks_reframed, 0) image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0') # Run inference output_dict = sess.run(tensor_dict, feed_dict={image_tensor: np.expand_dims(image, 0)}) # all outputs are float32 numpy arrays, so convert types as appropriate output_dict['num_detections'] = int(output_dict['num_detections'][0]) output_dict['detection_classes'] = output_dict[ 'detection_classes'][0].astype(np.uint8) output_dict['detection_boxes'] = output_dict['detection_boxes'][0] output_dict['detection_scores'] = output_dict['detection_scores'][0] if 'detection_masks' in output_dict: output_dict['detection_masks'] = output_dict['detection_masks'][0] return output_dict #image = Image.open(TEST_IMAGE_PATHS) # the array based representation of the image will be used later in order to prepare the # result image with boxes and labels on it. image_np = load_image_into_numpy_array(TEST_IMAGE_PATHS) # Expand dimensions since the model expects images to have shape: [1, None, None, 3] image_np_expanded = np.expand_dims(image_np, axis=0) # Actual detection. output_dict = run_inference_for_single_image(image_np, detection_graph) # Visualization of the results of a detection. 
        vis_util.visualize_boxes_and_labels_on_image_array(
            image_np,
            output_dict['detection_boxes'],
            output_dict['detection_classes'],
            output_dict['detection_scores'],
            category_index,
            instance_masks=output_dict.get('detection_masks'),
            use_normalized_coordinates=True,
            line_thickness=8)
        plt.figure(figsize=IMAGE_SIZE)
        plt.imshow(image_np)
        #plt.savefig(str(TEST_IMAGE_PATHS)+".jpg")

## launches the UI
if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    Window = QtWidgets.QWidget()
    # ui is an instance created from the UiForm() class
    ui = UiForm()
    ui.setupUi(Window)
    Window.show()
    sys.exit(app.exec_())
```
But running it gives:

![图片说明](https://img-ask.csdn.net/upload/201811/30/1543567054_511116.png)

Please help.
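Part of the problem is visible in `object_detection()`: `TEST_IMAGE_PATHS = camimg` is already a numpy array from `cv2.VideoCapture`, yet it is passed to `load_image_into_numpy_array`, which expects a PIL `Image` (it calls `.size` and `.getdata()`). A camera frame needs no PIL round-trip at all. A minimal sketch, assuming an RGB `uint8` frame like the one `camshow` produces after `cv2.cvtColor`:

```python
import numpy as np

def frame_to_detector_input(frame_rgb):
    """An OpenCV frame is already an HxWx3 ndarray, so skip
    load_image_into_numpy_array entirely: force uint8 and add the
    batch dimension the image_tensor:0 placeholder expects."""
    arr = np.asarray(frame_rgb, dtype=np.uint8)
    return np.expand_dims(arr, axis=0)  # shape (1, H, W, 3)
```

Inside `run_inference_for_single_image` this replaces `np.expand_dims(image, 0)` for the camera path, i.e. `sess.run(tensor_dict, feed_dict={image_tensor: frame_to_detector_input(camimg)})`.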
Use Ajax to pull data from MySQL dynamically and display it on the front-end page
代码如下: 前端html: ``` <!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title></title> </head> <!--<script type="text/javascript" src="jquery.js"></script>--> <script type="text/javascript" src="http://echarts.baidu.com/gallery/vendors/echarts/echarts.min.js"></script> <script src="http://apps.bdimg.com/libs/jquery/2.1.4/jquery.min.js"></script> <body> <div id="main" style="width: 600px;height:400px;"></div> </body> </html> <script> var app = { xvalue: [], yvalue: [], z:[], }; // 发送ajax请求,从后台获取json数据 $(document).ready(function () { getData(); console.log(app.value1); console.log(app.timepoint) console.log(app.predictvalue1) }); function getData() { $.ajax({ url: '/test', data: {}, type: 'POST', async: false, dataType: 'json', success: function (data) { app.value1 = data.value1; app.predictvalue1=data.predictvalue1; value1 = app.value1; predictvalue1=app.predictvalue1; function trueData(i) { now = new Date(+now + oneDay); value = value1[i]; return { name: now.toString(), value: [ [now.getFullYear(), now.getMonth() + 1, now.getDate()].join('/'), Math.round(value) ] } } function predictData(i) { now1 = new Date(+now1 + oneDay); predictvalue = predictvalue1[i]; return { name: now1.toString(), value: [ [now1.getFullYear(), now1.getMonth() + 1, now1.getDate()].join('/'), Math.round(predictvalue) ] } } var data = []; var predictdata=[]; var now = +new Date(1997, 9, 3); var now1 = +new Date(1997, 9, 4); var oneDay = 24 * 3600 * 1000; for (var i = 0; i < value1.length; i++) { data.push(trueData(i)); } for (var i = 0; i < predictvalue1.length; i++) { predictdata.push(predictData(i)); } // 基于准备好的dom,初始化echarts实例 var myChart = echarts.init(document.getElementById('main')); option = { title: { text: '动态数据 + 时间坐标轴' }, tooltip: { trigger: 'axis', formatter: function (params) { params = params[0]; var date = new Date(params.name); return date.getDate() + '/' + (date.getMonth() + 1) + '/' + date.getFullYear() + ' : ' + params.value[1]; }, axisPointer: { animation: false } }, xAxis: 
{ type: 'time', splitLine: { show: false } }, yAxis: { type: 'value', boundaryGap: [0, '100%'], splitLine: { show: false } }, series: [{ name: '真实数据', type: 'line', showSymbol: false, hoverAnimation: false, data: [], markLine: { itemStyle: { normal: { borderWidth: 1, lineStyle: { type: "dash", color: 'red', width: 2 }, show: true, color: '#4c5336' } }, data: [{ yAxis: 900 }] } }, { name: '预测数据', type: 'line', showSymbol: false, hoverAnimation: false, data: [], markLine: { itemStyle: { normal: { borderWidth: 1, lineStyle: { type: "dash", color: 'blue', width: 2 }, show: true, color: '#4c5336' } }, data: [{ yAxis: 900 }] } }] }; // 使用刚指定的配置项和数据显示图表。 myChart.setOption(option); setInterval(function () { for (var i = 0; i < 1; i++) { data.shift(); data.push(trueData(i)); } for (var i = 0; i < 1; i++) { predictdata.shift(); predictdata.push(predictData(i)); } myChart.setOption({ series: [{ data: data }, { data: predictdata }] }); }, 1000); } }) } </script> </body> </html> ``` 后端py,用的是flask框架: ``` import MySQLdb from flask import Flask, render_template, url_for import pymysql import pandas as pd import numpy as np from pandas import read_csv import matplotlib.pyplot as plt from sklearn.preprocessing import MinMaxScaler from sklearn.metrics import mean_squared_error from keras.models import Sequential from keras.layers import LSTM, Dense, Activation,Dropout import json import operator from functools import reduce import math import tensorflow as tf from keras import initializers import time # 生成Flask实例 app = Flask(__name__) @app.route("/") def hello(): return render_template('new_file.html') # /test路由 接收前端的Ajax请求 @app.route('/test', methods=['POST']) def my_echart(): # 连接数据库 conn = MySQLdb.connect(host='127.0.0.1', port=3306, user='root', passwd='123456', db='test', charset='utf8') cur = conn.cursor() sql = 'SELECT timepoint,value1 from timeseries' cur.execute(sql) u = cur.fetchall() timepoint = [] value1 = [] for data in u: value1.append(data[1]) timepoint.append(data[0]) 
        print(value1)
    # convert to JSON
    jsonData = {}
    jsonData['value1'] = value1
    jsonData['timepoint'] = timepoint
    # json.dumps() turns the dict into a str; writing a dict straight into
    # the response would fail, hence the conversion here
    j = json.dumps(jsonData)
    cur.close()
    conn.close()
    # return the JSON to the browser (handy for inspecting the output)
    return (j)

if __name__ == '__main__':
    app.run(debug=True, port='5000')
```
The returned data is read from MySQL. Now I want the Ajax side to poll the database periodically for the next data point, push it to the front end, and refresh the page to show it. How should I change the code? The table looks like this:

![图片说明](https://img-ask.csdn.net/upload/201905/24/1558685991_221903.jpg)
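One workable shape (a sketch; the endpoint name `/latest` and the polling interval are my choices, not from the original code): keep `/test` for the initial full load, add a lightweight endpoint that returns only the newest row, and call it from the page's existing `setInterval` via `$.ajax`, pushing the result into `data` before `myChart.setOption`. The serialization side, separated from Flask and MySQL so it is easy to test:

```python
import json

def latest_point_json(rows):
    """Serialize only the newest (timepoint, value1) row from a
    'SELECT timepoint, value1 FROM timeseries' result set, so a polling
    endpoint can answer each tick without re-sending the whole table."""
    timepoint, value1 = rows[-1]
    return json.dumps({'timepoint': timepoint, 'value1': value1})
```

In the Flask app this would back something like `@app.route('/latest')` running `SELECT timepoint, value1 FROM timeseries ORDER BY timepoint DESC LIMIT 1`, and the front end's timer callback replaces the hard-coded `trueData(i)` value with the fetched one.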