Using Spring 3.x + Hibernate 4.x in a project throws the exception: No Session found for current thread

org.hibernate.HibernateException: No Session found for current thread
at org.springframework.orm.hibernate4.SpringSessionContext.currentSession(SpringSessionContext.java:97)
at org.hibernate.internal.SessionFactoryImpl.getCurrentSession(SessionFactoryImpl.java:941)
at com.isoft.dao.impl.HibernateDaoImpl.getSession(HibernateDaoImpl.java:51)
at com.isoft.dao.impl.HibernateDaoImpl.save(HibernateDaoImpl.java:77)
at com.isoft.service.impl.BaseServiceImpl.save(BaseServiceImpl.java:42)
at com.isoft.test.MessageInvoiceServiceImplTest.testAdd(MessageInvoiceServiceImplTest.java:53)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at org.springframework.test.context.junit4.statements.RunBeforeTestMethodCallbacks.evaluate(RunBeforeTestMethodCallbacks.java:74)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
at org.springframework.test.context.junit4.statements.RunAfterTestMethodCallbacks.evaluate(RunAfterTestMethodCallbacks.java:83)
at org.springframework.test.context.junit4.statements.SpringRepeat.evaluate(SpringRepeat.java:72)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:231)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:88)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61)
at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:71)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:174)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:38)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)

1 answer

Try adding this to hibernate.cfg.xml:

```
<property name="current_session_context_class">thread</property>
```
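For what it's worth (not part of the original answer): the stack trace goes through SpringSessionContext, i.e. the session is Spring-managed, and this error typically means `getCurrentSession()` was called outside any transaction. An alternative to switching to `thread`-bound sessions is to run the test inside a Spring-managed transaction; a minimal sketch, assuming the context XML defines a `transactionManager` bean (the config location, service type and entity constructor below are assumptions, not taken from the original post):

```java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = "classpath:applicationContext.xml") // assumed config location
@Transactional // wraps each test method in a transaction, so getCurrentSession() finds a session
public class MessageInvoiceServiceImplTest {

    @Autowired
    private MessageInvoiceService messageInvoiceService; // hypothetical service type

    @Test
    public void testAdd() {
        // save() now runs inside the test's transaction
        messageInvoiceService.save(new MessageInvoice()); // hypothetical entity
    }
}
```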

Other related recommendations
Tensorflow helloWorld test shows "The session graph is empty"
I'm getting started with TensorFlow. On Win10 I set up the environment (Anaconda + PyCharm + TensorFlow) and tried the simple hello-world program, but it reports "The session graph is empty". What is the problem? Please help! The original code was:

```
import tensorflow as tf
a = tf.constant("Hello,world!")
sess = tf.Session()
print(sess.run(a))
sess.close()
```

This fails with **module 'tensorflow' has no attribute 'Session'**. Following advice online that only TensorFlow 1.x has it, I changed the code to:

```
import tensorflow as tf
a = tf.constant("Hello,world!")
sess = tf.compat.v1.Session()
print(sess.run(a))
sess.close()
```

This still errors, now with "The Session graph is empty":

```
Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Traceback (most recent call last):
  File "C:/Users/Dell/PycharmProjects/Helloworld/helloWorldTen.py", line 5, in <module>
    print(sess.run(a))
  File "E:\Anaconda3\lib\site-packages\tensorflow_core\python\client\session.py", line 956, in run
    run_metadata_ptr)
  File "E:\Anaconda3\lib\site-packages\tensorflow_core\python\client\session.py", line 1105, in _run
    raise RuntimeError('The Session graph is empty. Add operations to the '
RuntimeError: The Session graph is empty. Add operations to the graph before calling run().
```
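For context (not from the original thread): in TF 2.x eager execution is on by default, so `tf.constant` produces an eager tensor that never lands in the graph that the compat-v1 session executes, hence "The Session graph is empty". A minimal sketch of the usual workaround:

```python
import tensorflow as tf

# Disable TF2 eager mode so that ops are recorded into the default graph,
# which the compat.v1 Session can then run.
tf.compat.v1.disable_eager_execution()

a = tf.constant("Hello,world!")
with tf.compat.v1.Session() as sess:
    print(sess.run(a))
```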
Neural network model tests incorrectly after loading
After training a neural network model with the TensorFlow framework, reloading it for testing gives very low accuracy. The screenshot showed my loading method; could someone check whether the loading is the problem? I first rebuilt the model by hand (the code below omits the weights, biases and network construction):

```
# Build the model
pred = alex_net(x, weights, biases, keep_prob)

# Define the loss function and the optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=pred))  # softmax combined with cross-entropy
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Evaluation ops
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# 3. Train and evaluate the model
# Initialize the variables
init = tf.global_variables_initializer()
saver = tf.train.Saver()

with tf.Session() as sess:
    # Initialize the variables
    sess.run(init)
    saver.restore(sess, tf.train.latest_checkpoint(model_dir))
    pred_test = sess.run(pred, {x: test_x, keep_prob: 1.0})
    result = sess.run(tf.argmax(pred_test, 1))
```
Running tensorflow outputs correct results but also prints other messages; I don't understand why
The code is very short; I'm still learning:

```
import tensorflow as tf

a = [[1, 0, 3], [8, -3, 6], [5, 1, 7]]

with tf.Session() as sess:
    print(sess.run(tf.argmax(a, 1)))
    print(sess.run(tf.argmax(a, 0)))
```

The output is below. I don't know why these messages appear; any pointers appreciated.

```
E:\anaconda\python.exe D:/pycharm文件/try.py
2020-01-09 22:18:54.342546: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
WARNING:tensorflow:From D:/pycharm文件/try.py:5: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
2020-01-09 22:18:56.348961: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2020-01-09 22:18:57.534582: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: name: GeForce GTX 1660 Ti with Max-Q Design major: 7 minor: 5 memoryClockRate(GHz): 1.335 pciBusID: 0000:01:00.0
2020-01-09 22:18:57.535147: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
2020-01-09 22:18:57.539354: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll
2020-01-09 22:18:57.543360: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_100.dll
2020-01-09 22:18:57.545647: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_100.dll
2020-01-09 22:18:57.550333: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_100.dll
2020-01-09 22:18:57.554235: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_100.dll
2020-01-09 22:18:57.556270: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudnn64_7.dll'; dlerror: cudnn64_7.dll not found
2020-01-09 22:18:57.556681: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1641] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform. Skipping registering GPU devices...
2020-01-09 22:18:57.558041: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2020-01-09 22:18:57.560801: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-01-09 22:18:57.561152: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]
[2 0 2]
[1 2 2]

Process finished with exit code 0
```
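For reference (not from the original thread): the lines tagged `I` are informational GPU-runtime logs and the `W` lines are warnings, not errors; the only actionable one is the missing cudnn64_7.dll, which makes TensorFlow skip GPU registration and fall back to CPU. A minimal sketch for silencing the C++ info/warning logs, assuming you just want a quieter console:

```python
import os

# 0 = all logs, 1 = hide INFO, 2 = hide INFO and WARNING, 3 = errors only.
# Must be set before tensorflow is imported.
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf
```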
Testing right after training vs. testing a reloaded model in tensorflow gives different results, one good and one much worse. Why?
After training the model in TensorFlow, testing right away in the same session gives good results, but reloading the saved model and testing again gives much worse results, and I don't understand why. Note: the test data is identical. Model results: training set loss 0.384, acc 0.931; validation set loss 0.212, acc 0.968; test set inside the same session right after training: acc 0.96; test after reloading the saved model: acc 0.29.

```
def create_model(hps):
    global_step = tf.Variable(tf.zeros([], tf.float64), name='global_step', trainable=False)
    scale = 1.0 / math.sqrt(hps.num_embedding_size + hps.num_lstm_nodes[-1]) / 3.0
    print(type(scale))
    gru_init = tf.random_normal_initializer(-scale, scale)
    with tf.variable_scope('Bi_GRU_nn', initializer=gru_init):
        for i in range(hps.num_lstm_layers):
            cell_bw = tf.contrib.rnn.GRUCell(hps.num_lstm_nodes[i], activation=tf.nn.relu, name='cell-bw')
            cell_bw = tf.contrib.rnn.DropoutWrapper(cell_bw, output_keep_prob=dropout_keep_prob)
            cell_fw = tf.contrib.rnn.GRUCell(hps.num_lstm_nodes[i], activation=tf.nn.relu, name='cell-fw')
            cell_fw = tf.contrib.rnn.DropoutWrapper(cell_fw, output_keep_prob=dropout_keep_prob)
        rnn_outputs, _ = tf.nn.bidirectional_dynamic_rnn(cell_bw, cell_fw, inputs, dtype=tf.float32)
        embeddedWords = tf.concat(rnn_outputs, 2)
        finalOutput = embeddedWords[:, -1, :]
        outputSize = hps.num_lstm_nodes[-1] * 2  # bidirectional: fw and bw outputs are concatenated, hence * 2
        last = tf.reshape(finalOutput, [-1, outputSize])  # reshape to the fully-connected layer's input size
        last = tf.layers.batch_normalization(last, training=is_training)

    fc_init = tf.uniform_unit_scaling_initializer(factor=1.0)
    with tf.variable_scope('fc', initializer=fc_init):
        fc1 = tf.layers.dense(last, hps.num_fc_nodes, name='fc1')
        fc1_batch_normalization = tf.layers.batch_normalization(fc1, training=is_training)
        fc_activation = tf.nn.relu(fc1_batch_normalization)
        logits = tf.layers.dense(fc_activation, hps.num_classes, name='fc2')

    with tf.name_scope('metrics'):
        softmax_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=tf.argmax(outputs, 1))
        loss = tf.reduce_mean(softmax_loss)
        # [0, 1, 5, 4, 2] -> argmax: 2, because position 2 holds the largest value
        y_pred = tf.argmax(tf.nn.softmax(logits), 1, output_type=tf.int64, name='y_pred')
        # compute accuracy: how many predictions are correct
        correct_pred = tf.equal(tf.argmax(outputs, 1), y_pred)
        # tf.cast converts the values to tf.float32
        accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

    with tf.name_scope('train_op'):
        tvar = tf.trainable_variables()
        for var in tvar:
            print('variable name: %s' % (var.name))
        grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvar), hps.clip_lstm_grads)
        optimizer = tf.train.AdamOptimizer(hps.learning_rate)
        train_op = optimizer.apply_gradients(zip(grads, tvar), global_step)

    # return ((inputs, outputs, is_training), (loss, accuracy, y_pred), (train_op, global_step))
    return ((inputs, outputs), (loss, accuracy, y_pred), (train_op, global_step))


placeholders, metrics, others = create_model(hps)
content, labels = placeholders
loss, accuracy, y_pred = metrics
train_op, global_step = others


def val_steps(sess, x_batch, y_batch, writer=None):
    loss_val, accuracy_val = sess.run(
        [loss, accuracy],
        feed_dict={inputs: x_batch, outputs: y_batch,
                   is_training: hps.val_is_training, dropout_keep_prob: 1.0})
    return loss_val, accuracy_val


loss_summary = tf.summary.scalar('loss', loss)
accuracy_summary = tf.summary.scalar('accuracy', accuracy)
# collect all the summaries
merged_summary = tf.summary.merge_all()
# summary used for the test set
merged_summary_test = tf.summary.merge([loss_summary, accuracy_summary])

LOG_DIR = '.'
run_label = 'run_Bi-GRU_Dropout_tensorboard'
run_dir = os.path.join(LOG_DIR, run_label)
if not os.path.exists(run_dir):
    os.makedirs(run_dir)
train_log_dir = os.path.join(run_dir, timestamp, 'train')
test_los_dir = os.path.join(run_dir, timestamp, 'test')
if not os.path.exists(train_log_dir):
    os.makedirs(train_log_dir)
if not os.path.join(test_los_dir):
    os.makedirs(test_los_dir)

# the saver handle lets us save training snapshots into the folder
saver = tf.train.Saver(tf.global_variables(), max_to_keep=5)

# training code
init_op = tf.global_variables_initializer()
train_keep_prob_value = 0.2
test_keep_prob_value = 1.0
# computing the summary at every step would be slow, so store one every 100 steps
output_summary_every_steps = 100
num_train_steps = 1000
# how often to save the model
output_model_every_steps = 500
# test-set evaluation interval
test_model_all_steps = 4000
i = 0

session_conf = tf.ConfigProto(
    gpu_options=tf.GPUOptions(allow_growth=True),
    allow_soft_placement=True,
    log_device_placement=False)

with tf.Session(config=session_conf) as sess:
    sess.run(init_op)
    # write loss/accuracy to file during training; pass sess.graph if you
    # want the computation graph shown in tensorboard
    train_writer = tf.summary.FileWriter(train_log_dir, sess.graph)
    # likewise save the test results for tensorboard, without the graph
    test_writer = tf.summary.FileWriter(test_los_dir)
    batches = batch_iter(list(zip(x_train, y_train)), hps.batch_size, hps.num_epochs)
    for batch in batches:
        train_x, train_y = zip(*batch)
        eval_ops = [loss, accuracy, train_op, global_step]
        should_out_summary = ((i + 1) % output_summary_every_steps == 0)
        if should_out_summary:
            eval_ops.append(merged_summary)
        # feed the three placeholders and run the graph for
        # loss, accuracy, train_op, global_step
        outputs_train = sess.run(
            eval_ops,
            feed_dict={inputs: train_x, outputs: train_y,
                       dropout_keep_prob: train_keep_prob_value,
                       is_training: hps.train_is_training})
        loss_train, accuracy_train = outputs_train[0:2]
        if should_out_summary:
            # when should_out_summary is True the summary op was appended
            # last, so its result is the last element of outputs_train
            train_summary_str = outputs_train[-1]
            # write it to the train folder; steps count from 0, hence i + 1
            train_writer.add_summary(train_summary_str, i + 1)
            test_summary_str = sess.run(
                [merged_summary_test],
                feed_dict={inputs: x_dev, outputs: y_dev,
                           dropout_keep_prob: 1.0,
                           is_training: hps.val_is_training})[0]
            test_writer.add_summary(test_summary_str, i + 1)
        current_step = tf.train.global_step(sess, global_step)
        if (i + 1) % 100 == 0:
            print("Step: %5d, loss: %3.3f, accuracy: %3.3f" % (i + 1, loss_train, accuracy_train))
        # validate every 500 batches
        if (i + 1) % 500 == 0:
            loss_eval, accuracy_eval = val_steps(sess, x_dev, y_dev)
            print("Step: %5d, val_loss: %3.3f, val_accuracy: %3.3f" % (i + 1, loss_eval, accuracy_eval))
        if (i + 1) % output_model_every_steps == 0:
            path = saver.save(sess, os.path.join(out_dir, 'ckp-%05d' % (i + 1)))
            print("Saved model checkpoint to {}\n".format(path))
            print('model saved to ckp-%05d' % (i + 1))
        if (i + 1) % test_model_all_steps == 0:
            # test_loss, test_acc, all_predictions = sess.run([loss, accuracy, y_pred], feed_dict={inputs: x_test, outputs: y_test, dropout_keep_prob: 1.0})
            test_loss, test_acc, all_predictions = sess.run(
                [loss, accuracy, y_pred],
                feed_dict={inputs: x_test, outputs: y_test,
                           is_training: hps.val_is_training, dropout_keep_prob: 1.0})
            print("test_loss: %3.3f, test_acc: %3.3d" % (test_loss, test_acc))
            batches = batch_iter(list(x_test), 128, 1, shuffle=False)
            # Collect the predictions here
            all_predictions = []
            for x_test_batch in batches:
                batch_predictions = sess.run(y_pred, {inputs: x_test_batch,
                                                      is_training: hps.val_is_training,
                                                      dropout_keep_prob: 1.0})
                all_predictions = np.concatenate([all_predictions, batch_predictions])
            correct_predictions = float(sum(all_predictions == y.flatten()))
            print("Total number of test examples: {}".format(len(y_test)))
            print("Accuracy: {:g}".format(correct_predictions / float(len(y_test))))
            test_y = y_test.argmax(axis=1)
            # build the confusion matrix
            conf_mat = confusion_matrix(test_y, all_predictions)
            fig, ax = plt.subplots(figsize=(4, 2))
            sns.heatmap(conf_mat, annot=True, fmt='d',
                        xticklabels=cat_id_df.category_id.values,
                        yticklabels=cat_id_df.category_id.values)
            font_set = FontProperties(fname=r"/usr/share/fonts/truetype/wqy/wqy-microhei.ttc", size=15)
            plt.ylabel(u'实际结果', fontsize=18, fontproperties=font_set)
            plt.xlabel(u'预测结果', fontsize=18, fontproperties=font_set)
            plt.savefig('./test.png')
            print('accuracy %s' % accuracy_score(all_predictions, test_y))
            print(classification_report(test_y, all_predictions, target_names=cat_id_df['category_name'].values))
            print(classification_report(test_y, all_predictions))
        i += 1
```

That is the model code. Could someone please take a look and explain why this happens?
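One classic cause of "good inside the training session, bad after a restore" with this kind of code (a hedged guess, not a confirmed diagnosis): the model uses `tf.layers.batch_normalization`, whose moving mean/variance are only updated by the ops collected in `tf.GraphKeys.UPDATE_OPS`, and the `train_op` above is built without depending on them. If they never run, the statistics stored in the checkpoint stay at their initial values and inference after a restore suffers. A sketch of the usual fix inside `create_model`:

```python
# Sketch: make the optimizer step depend on the batch-norm update ops so
# the moving mean/variance are actually written during training and end up
# in the saved checkpoint.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = optimizer.apply_gradients(zip(grads, tvar), global_step)
```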
Hibernate beginner problem, please help~
I'm running a Hibernate example, but it reports an error and as a beginner I don't know how to fix it. Could someone take a look? Here is the code.

(1) NewsManager.java

```
package hibernate;

import org.hibernate.*;
import org.hibernate.cfg.*;
import org.hibernate.service.*;
import org.hibernate.boot.registry.*;

public class NewsManager {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration().configure();
        ServiceRegistry serviceRegistry = new StandardServiceRegistryBuilder()
                .applySettings(conf.getProperties()).build();
        SessionFactory sf = conf.buildSessionFactory(serviceRegistry);
        Session sess = sf.openSession();
        Transaction tx = sess.beginTransaction();
        News n = new News();
        n.setTitle("Java");
        n.setContent("Java");
        sess.save(n);
        tx.commit();
        sess.close();
        sf.close();
    }
}
```

(2) News.java

```
package hibernate;

import javax.persistence.*;

@Entity
@Table(name="news_info")
public class News {
    @Id
    @GeneratedValue(strategy=GenerationType.IDENTITY)
    private Integer id;
    private String title;
    private String content;

    public void setId(Integer id) { this.id = id; }
    public Integer getId() { return this.id; }
    public void setTitle(String title) { this.title = title; }
    public String getTitle() { return this.title; }
    public void setContent(String content) { this.content = content; }
    public String getContent() { return this.content; }
}
```

(3) hibernate.cfg.xml (the tags wouldn't paste properly in the original post)

```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-configuration PUBLIC
        "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
        "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
    <session-factory>
        <property name="hibernate.connection.driver_class">org.gjt.mm.mysql.Driver</property>
        <property name="hibernate.connection.password">root</property>
        <property name="hibernate.connection.url">jdbc:mysql://localhost:3306/hibernate</property>
        <property name="hibernate.connection.username">root</property>
        <property name="hibernate.dialect">org.hibernate.dialect.MySQL5InnoDBDialect</property>
        <mapping class="hibernate.News"/>
    </session-factory>
</hibernate-configuration>
```

(4) Error message

```
Exception in thread "main" org.hibernate.MappingException: Unknown entity: hibernate.News
    at org.hibernate.internal.SessionFactoryImpl.getEntityPersister(SessionFactoryImpl.java:781)
    at org.hibernate.internal.SessionImpl.getEntityPersister(SessionImpl.java:1520)
    at org.hibernate.event.internal.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:100)
    at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.saveWithGeneratedOrRequestedId(DefaultSaveOrUpdateEventListener.java:192)
    at org.hibernate.event.internal.DefaultSaveEventListener.saveWithGeneratedOrRequestedId(DefaultSaveEventListener.java:38)
    at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.entityIsTransient(DefaultSaveOrUpdateEventListener.java:177)
    at org.hibernate.event.internal.DefaultSaveEventListener.performSaveOrUpdate(DefaultSaveEventListener.java:32)
    at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.onSaveOrUpdate(DefaultSaveOrUpdateEventListener.java:73)
    at org.hibernate.internal.SessionImpl.fireSave(SessionImpl.java:679)
    at org.hibernate.internal.SessionImpl.save(SessionImpl.java:671)
    at org.hibernate.internal.SessionImpl.save(SessionImpl.java:666)
    at hibernate.NewsManager.main(NewsManager.java:24)
```
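A note that may help (hedged; this exact symptom is widely reported with Hibernate 4.3): building the SessionFactory through `StandardServiceRegistryBuilder().applySettings(conf.getProperties())` can lose the `<mapping class="hibernate.News"/>` registration, producing exactly this "Unknown entity" error. A commonly suggested sketch is to let the Configuration build the factory itself:

```java
// Sketch: build the SessionFactory directly from the Configuration so the
// annotated class registered in hibernate.cfg.xml is kept.
// buildSessionFactory() is deprecated in Hibernate 4.3 but still functional.
Configuration conf = new Configuration().configure();
SessionFactory sf = conf.buildSessionFactory();
```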
Unknown entity exception when using Hibernate on its own
First, the post in question: ![图片说明](https://img-ask.csdn.net/upload/201606/04/1465019517_304198.png)

News.java

```
package dong.domain;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name="news_inf")
public class News {
    @Id
    @GeneratedValue(strategy=GenerationType.IDENTITY)
    private Integer id;
    private String title;
    private String content;

    public void setId(Integer id) { this.id = id; }
    public Integer getId() { return this.id; }
    public void setTitle(String title) { this.title = title; }
    public String getTitle() { return this.title; }
    public void setContent(String content) { this.content = content; }
    public String getContent() { return this.content; }
}
```

hibernate.cfg.xml

```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-configuration PUBLIC
        "-//Hibernate//Hibernate Configuration DTD 3.0//EN"
        "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
    <session-factory>
        <property name="connection.driver_class">com.mysql.jdbc.Driver</property>
        <property name="connection.url">jdbc:mysql://localhost/hibernate</property>
        <property name="connection.username">root</property>
        <property name="connection.password">dongzhong1990</property>
        <property name="hibernate.c3p0.max_size">20</property>
        <property name="hibernate.c3p0.min_size">1</property>
        <property name="hibernate.c3p0.timeout">5000</property>
        <property name="hibernate.c3p0.max_statements">100</property>
        <property name="hibernate.c3p0.idle_test_period">3000</property>
        <property name="hibernate.c3p0.acquire_increment">2</property>
        <property name="hibernate.c3p0.validate">true</property>
        <property name="dialect">org.hibernate.dialect.MySQL5InnoDBDialect</property>
        <property name="hbm2ddl.auto">update</property>
        <property name="show_sql">true</property>
        <property name="hibernate.format_sql">true</property>
        <!-- List all persistent classes here -->
        <mapping class="dong.domain.News" />
    </session-factory>
</hibernate-configuration>
```

NewsManager.java

```
package dong;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.boot.registry.StandardServiceRegistryBuilder;
import org.hibernate.cfg.Configuration;
import org.hibernate.service.ServiceRegistry;

import dong.domain.News;

public class NewsManager {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration().configure();
        ServiceRegistry serviceRegistry = new StandardServiceRegistryBuilder()
                .applySettings(conf.getProperties()).build();
        SessionFactory sf = conf.buildSessionFactory(serviceRegistry);
        Session sess = sf.openSession();
        Transaction tx = sess.beginTransaction();
        News news = new News();
        news.setTitle("aaa");
        news.setContent("欧耶欧耶欧耶");
        sess.save(news);
        System.out.println("aaa");
        tx.commit();
        sess.close();
        sf.close();
    }
}
```

Running this throws the Unknown entity exception. None of the fixes I found online worked and I don't know what to do. Please help!
During SSH integration, a plain Hibernate test always fails with an initialization error
**_News.java_**

```
package mc.bean;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name="news_inf")
public class News {
    // identifier property of the message class
    @Id
    @GeneratedValue(strategy=GenerationType.IDENTITY)
    private Integer id;
    // message title
    private String title;
    // message content
    private String content;
    // setters and getters omitted
}
```

**_Main method_**

```
package mc.bean;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.boot.registry.StandardServiceRegistryBuilder;
import org.hibernate.cfg.Configuration;
import org.hibernate.service.ServiceRegistry;

public class MainTest {
    public static void main(String[] args) {
        // Instantiate Configuration without loading any configuration file
        Configuration conf = new Configuration()
            // register the persistent class via addAnnotatedClass()
            .addAnnotatedClass(mc.bean.News.class)
            // set the Hibernate connection properties via setProperty()
            .setProperty("hibernate.connection.driver_class", "com.mysql.jdbc.Driver")
            .setProperty("hibernate.connection.url", "jdbc:mysql://localhost:3306/mctest")
            .setProperty("hibernate.connection.username", "root")
            .setProperty("hibernate.connection.password", "123457")
            .setProperty("hibernate.c3p0.max_size", "20")
            .setProperty("hibernate.c3p0.min_size", "1")
            .setProperty("hibernate.c3p0.timeout", "5000")
            .setProperty("hibernate.c3p0.max_statements", "100")
            .setProperty("hibernate.c3p0.idle_test_period", "3000")
            .setProperty("hibernate.c3p0.acquire_increment", "2")
            .setProperty("hibernate.c3p0.validate", "true")
            .setProperty("hibernate.dialect", "org.hibernate.dialect.MySQLDialect")
            .setProperty("hibernate.hbm2ddl.auto", "update");
        // build the SessionFactory from the Configuration instance
        ServiceRegistry serviceRegistry = new StandardServiceRegistryBuilder()
            .applySettings(conf.getProperties()).build();
        SessionFactory sf = conf.buildSessionFactory(serviceRegistry);
        // open a Session
        Session sess = sf.openSession();
        // begin a transaction
        Transaction tx = sess.beginTransaction();
        // create a message instance
        News n = new News();
        // set the message title and content
        n.setTitle("00000");
        n.setContent("0000000");
        // save the message
        sess.save(n);
        // commit the transaction
        tx.commit();
        System.out.println("Operation succeeded!");
        // close the Session
        sess.close();
    }
}
```

**_Exception_**

```
十一月 12, 2016 3:11:45 下午 org.hibernate.annotations.common.reflection.java.JavaReflectionManager <clinit>
INFO: HCANN000001: Hibernate Commons Annotations {4.0.4.Final}
十一月 12, 2016 3:11:45 下午 org.hibernate.Version logVersion
INFO: HHH000412: Hibernate Core {4.3.5.Final}
Exception in thread "main" java.lang.ExceptionInInitializerError
    at org.hibernate.cfg.Configuration.reset(Configuration.java:324)
    at org.hibernate.cfg.Configuration.<init>(Configuration.java:289)
    at org.hibernate.cfg.Configuration.<init>(Configuration.java:293)
    at mc.bean.MainTest.main(MainTest.java:15)
Caused by: java.lang.NullPointerException
    at org.hibernate.internal.util.ConfigHelper.getResourceAsStream(ConfigHelper.java:170)
    at org.hibernate.cfg.Environment.<clinit>(Environment.java:221)
    ... 4 more
```
hibernate logging exception
I've been learning Hibernate recently. It looks fine when reading the book, but actually doing it is so hard! I'm using hibernate 3.2, eclipse 3.3 (no MyEclipse plug-in) and MySQL 6.0.

(1) The main() function:

```
package org.first;

import org.hibernate.*;
import org.hibernate.cfg.*;

public abstract class FirstM {
    public static void main(String[] args) {
        Configuration conf = new Configuration().configure();
        SessionFactory sf = conf.buildSessionFactory();
        Session sess = sf.openSession();
        Transaction tx = sess.beginTransaction();
        FirstBean fb = new FirstBean();
        fb.setUserId(12);
        fb.setUserName("thmei123");
        sess.save(fb);
        tx.commit();
        sess.close();
    }
}
```

(2) The persistent class, very simple: a primary key userId and one property userName:

```
public class FirstBean {
    private int userId;
    private String userName;

    public void setUserId(int userId) { this.userId = userId; }
    public int getUserid() { return this.userId; }
    public void setUserName(String userName) { this.userName = userName; }
    public String getUserName() { return this.userName; }
}
```

(3) The mapping file and the configuration file:

1.
```
<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC
        "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
        "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
<hibernate-mapping>
    <class name="FirstBean" table="student">
        <id name="userId" unsaved-value="null">
            <generator class="identity"/>
        </id>
        <property name="userName"/>
    </class>
</hibernate-mapping>
```

2.
```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-configuration PUBLIC
        "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
        "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
    <session-factory>
        <property name="hibernate.connection.driver_class">org.gjt.mm.mysql.Driver</property>
        <property name="hibernate.connection.password">thmei123</property>
        <property name="hibernate.connection.url">jdbc:mysql://localhost/user</property>
        <property name="hibernate.connection.username">root</property>
        <property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
        <property name="hibernate.c3p0.max_size">20</property>
        <property name="hibernate.c3p0.min_size">1</property>
        <property name="hibernate.c3p0.timeout">5000</property>
        <property name="hibernate.c3p0.max_statements">100</property>
        <property name="hibernate.c3p0.idle_test_period">3000</property>
        <property name="hibernate.c3p0.acquire_increment">2</property>
        <property name="hibernate.c3p0.validate">true</property>
        <mapping resource="hib.hbm.xml">
    </session-factory>
</hibernate-configuration>
```

Running main() reports:

```
Exception in thread "main" java.lang.NoClassDefFoundError: org/dom4j/DocumentException
    at org.first.FirstM.main(FirstM.java:12)
Caused by: java.lang.ClassNotFoundException: org.dom4j.DocumentException
    at java.net.URLClassLoader$1.run(Unknown Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
```

It pointed me at log4j-1.2.11.jar and commons-logging-1.0.4.jar; I copied both into src, but it still doesn't work. Online I read that main() supposedly can't use the log files. Please help. Also, could someone share a complete example each for a web project and a plain Java project? Fellow sufferers!

**Follow-up:** Thanks for the material and suggestions. Besides copying log4j-1.2.11.jar and commons-logging-1.0.4.jar, I also copied a log4j.property file from the hibernate distribution into src, but now I get the following errors:

```
22:11:54,599 INFO Environment:514 - Hibernate 3.2.6
22:11:54,619 INFO Environment:547 - hibernate.properties not found
22:11:54,629 INFO Environment:681 - Bytecode provider name : cglib
22:11:54,639 INFO Environment:598 - using JDK 1.4 java.sql.Timestamp handling
22:11:54,850 INFO Configuration:1432 - configuring from resource: /hibernate.cfg.xml
22:11:54,850 INFO Configuration:1409 - Configuration resource: /hibernate.cfg.xml
22:11:55,501 INFO Configuration:559 - Reading mappings from resource : hib.hbm.xml
22:11:56,212 INFO HbmBinder:300 - Mapping class: FirstBean -> student
org.hibernate.InvalidMappingException: Could not parse mapping document from resource hib.hbm.xml
    at org.hibernate.cfg.Configuration.addResource(Configuration.java:575)
    at org.hibernate.cfg.Configuration.parseMappingElement(Configuration.java:1593)
```

Two questions:

1. `22:11:54,619 INFO Environment:547 - hibernate.properties not found`: why is hibernate.properties not found? I did place one there. I didn't write it myself (I wouldn't know how); I copied it from the hibernate distribution. Suggestions welcome, thanks.

2. `org.hibernate.InvalidMappingException: Could not parse mapping document from resource hib.hbm.xml`: this effectively says the hib.hbm.xml mapping file can't be found or parsed. Strangely, no matter how I change the path in `<mapping resource="hib.hbm.xml">`, it doesn't help. Please help! Still learning; any pointers appreciated, thanks.
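Two small things stand out in the posted files (hedged observations, not a confirmed diagnosis). First, "hibernate.properties not found" is only an INFO message and is harmless when hibernate.cfg.xml is used. Second, the `<mapping resource="hib.hbm.xml">` element in hibernate.cfg.xml is never closed (it should be `<mapping resource="hib.hbm.xml"/>`), and the bean exposes `getUserid()` while the mapping declares the property `userId`, so Hibernate's reflection cannot find a matching accessor pair. A sketch of the accessor fix:

```java
// Sketch: the accessor pair must match the mapped property name "userId";
// Hibernate looks up getUserId()/setUserId() by reflection.
public int getUserId() {
    return this.userId;
}
```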
Running tensorflow raises tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed
Running tensorflow raises tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed. Searching suggests the GPU is already occupied. The problem starts here:

```
2019-10-17 09:28:49.495166: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6382 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
(60000, 28, 28) (60000, 10)
2019-10-17 09:28:51.275415: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cublas64_100.dll'; dlerror: cublas64_100.dll not found
```

![图片说明](https://img-ask.csdn.net/upload/201910/17/1571277238_292620.png)

The error shown at the end: ![图片说明](https://img-ask.csdn.net/upload/201910/17/1571277311_655722.png)

I tried the fixes found online, for example adding:

```
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
```

but then I get: ![图片说明](https://img-ask.csdn.net/upload/201910/17/1571277460_72752.png)

Now I don't know what to try. I'm a beginner attempting simple digit recognition, following the tutorial step by step, but my versions probably differ from the tutorial's; I just installed tensorflow 2.0 and the following: ![图片说明](https://img-ask.csdn.net/upload/201910/17/1571277627_439100.png)

Could this be a version problem? Are there other things I can try? A simple addition test program runs fine, and while the digit-recognition script runs, I saw GPU utilization peak at only 0.2%. The complete digit-recognition code:

```
import os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, optimizers, datasets

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

#gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.2)
#sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))

(x, y), (x_val, y_val) = datasets.mnist.load_data()
x = tf.convert_to_tensor(x, dtype=tf.float32) / 255.
y = tf.convert_to_tensor(y, dtype=tf.int32)
y = tf.one_hot(y, depth=10)
print(x.shape, y.shape)
train_dataset = tf.data.Dataset.from_tensor_slices((x, y))
train_dataset = train_dataset.batch(200)

model = keras.Sequential([
    layers.Dense(512, activation='relu'),
    layers.Dense(256, activation='relu'),
    layers.Dense(10)])

optimizer = optimizers.SGD(learning_rate=0.001)


def train_epoch(epoch):
    # Step4. loop
    for step, (x, y) in enumerate(train_dataset):
        with tf.GradientTape() as tape:
            # [b, 28, 28] => [b, 784]
            x = tf.reshape(x, (-1, 28 * 28))
            # Step1. compute output  [b, 784] => [b, 10]
            out = model(x)
            # Step2. compute loss
            loss = tf.reduce_sum(tf.square(out - y)) / x.shape[0]
        # Step3. optimize and update w1, w2, w3, b1, b2, b3
        grads = tape.gradient(loss, model.trainable_variables)
        # w' = w - lr * grad
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        if step % 100 == 0:
            print(epoch, step, 'loss:', loss.numpy())


def train():
    for epoch in range(30):
        train_epoch(epoch)


if __name__ == '__main__':
    train()
```

Hoping someone can offer suggestions or a solution. Many thanks!
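Two version notes that may help (hedged; based on TF 2.0 generalities, not on reproducing this error): `tf.GPUOptions`, `tf.ConfigProto` and `tf.Session` are TF 1.x APIs (found under `tf.compat.v1` in 2.0), and creating such a session has no effect on the eager Keras code below it; and "Blas GEMM launch failed" often means GPU memory was already fully reserved by another process. The TF 2.x way to stop TensorFlow from grabbing all GPU memory up front:

```python
import tensorflow as tf

# Let TensorFlow allocate GPU memory on demand instead of reserving it all;
# this must run before any op touches the GPU.
for gpu in tf.config.experimental.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
```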
fashion_mnist recognition accuracy
What accuracy do people usually get on fashion_mnist? I see many people report around 92%, but my network reached 94%, so I'd like to hear what others who have done it actually get.

```
# These are my results
x_shape: (60000, 28, 28)
y_shape: (60000,)
epoches:  0 val_acc:  0.4991 train_acc 0.50481665
epoches:  1 val_acc:  0.6765 train_acc 0.66735
epoches:  2 val_acc:  0.755 train_acc 0.7474
epoches:  3 val_acc:  0.7846 train_acc 0.77915
epoches:  4 val_acc:  0.798 train_acc 0.7936
epoches:  5 val_acc:  0.8082 train_acc 0.80365
epoches:  6 val_acc:  0.8146 train_acc 0.8107
epoches:  7 val_acc:  0.8872 train_acc 0.8872333
epoches:  8 val_acc:  0.896 train_acc 0.89348334
epoches:  9 val_acc:  0.9007 train_acc 0.8986
epoches:  10 val_acc:  0.9055 train_acc 0.90243334
epoches:  11 val_acc:  0.909 train_acc 0.9058833
epoches:  12 val_acc:  0.9112 train_acc 0.90868336
epoches:  13 val_acc:  0.9126 train_acc 0.91108334
epoches:  14 val_acc:  0.9151 train_acc 0.9139
epoches:  15 val_acc:  0.9172 train_acc 0.91595
epoches:  16 val_acc:  0.9191 train_acc 0.91798335
epoches:  17 val_acc:  0.9204 train_acc 0.91975
epoches:  18 val_acc:  0.9217 train_acc 0.9220333
epoches:  19 val_acc:  0.9252 train_acc 0.9234667
epoches:  20 val_acc:  0.9259 train_acc 0.92515
epoches:  21 val_acc:  0.9281 train_acc 0.9266667
epoches:  22 val_acc:  0.9289 train_acc 0.92826664
epoches:  23 val_acc:  0.9301 train_acc 0.93005
epoches:  24 val_acc:  0.9315 train_acc 0.93126667
epoches:  25 val_acc:  0.9322 train_acc 0.9328
epoches:  26 val_acc:  0.9331 train_acc 0.9339667
epoches:  27 val_acc:  0.9342 train_acc 0.93523335
epoches:  28 val_acc:  0.9353 train_acc 0.93665
epoches:  29 val_acc:  0.9365 train_acc 0.9379333
epoches:  30 val_acc:  0.9369 train_acc 0.93885
epoches:  31 val_acc:  0.9387 train_acc 0.9399
epoches:  32 val_acc:  0.9395 train_acc 0.9409
epoches:  33 val_acc:  0.94 train_acc 0.9417667
epoches:  34 val_acc:  0.9403 train_acc 0.94271666
epoches:  35 val_acc:  0.9409 train_acc 0.9435167
epoches:  36 val_acc:  0.9418 train_acc 0.94443333
epoches:  37 val_acc:  0.942 train_acc 0.94515
epoches:  38 val_acc:  0.9432 train_acc 0.9460667
epoches:  39 val_acc:  0.9443 train_acc 0.9468833
epoches:  40 val_acc:  0.9445 train_acc 0.94741666
epoches:  41 val_acc:  0.9462 train_acc 0.9482
epoches:  42 val_acc:  0.947 train_acc 0.94893336
epoches:  43 val_acc:  0.9472 train_acc 0.94946665
epoches:  44 val_acc:  0.948 train_acc 0.95028335
epoches:  45 val_acc:  0.9486 train_acc 0.95095
epoches:  46 val_acc:  0.9488 train_acc 0.9515833
epoches:  47 val_acc:  0.9492 train_acc 0.95213336
epoches:  48 val_acc:  0.9495 train_acc 0.9529833
epoches:  49 val_acc:  0.9498 train_acc 0.9537
val_acc:  0.9498
```

```
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt


def to_onehot(y, num):
    lables = np.zeros([num, len(y)])
    for i in range(len(y)):
        lables[y[i], i] = 1
    return lables.T


# Preprocess the data
mnist = keras.datasets.fashion_mnist
(train_images, train_lables), (test_images, test_lables) = mnist.load_data()
print('x_shape:', train_images.shape)  # (60000)
print('y_shape:', train_lables.shape)
X_train = train_images.reshape((-1, train_images.shape[1] * train_images.shape[1])) / 255.0
#X_train = tf.reshape(X_train, [-1, X_train.shape[1] * X_train.shape[2]])
Y_train = to_onehot(train_lables, 10)
X_test = test_images.reshape((-1, test_images.shape[1] * test_images.shape[1])) / 255.0
Y_test = to_onehot(test_lables, 10)

# Neural network with two hidden layers
input_nodes = 784
output_nodes = 10
layer1_nodes = 100
layer2_nodes = 50
batch_size = 100
learning_rate_base = 0.8
learning_rate_decay = 0.99
regularization_rate = 0.0000001
epochs = 50
mad = 0.99
learning_rate = 0.005

# def inference(input_tensor, avg_class, w1, b1, w2, b2):
#     if avg_class == None:
#         layer1 = tf.nn.relu(tf.matmul(input_tensor, w1) + b1)
#         return tf.nn.softmax(tf.matmul(layer1, w2) + b2)
#     else:
#         layer1 = tf.nn.relu(tf.matmul(input_tensor, avg_class.average(w1)) + avg_class.average(b1))
#         return tf.matual(layer1, avg_class.average(w2)) + avg_class.average(b2)


def train(mnist):
    X = tf.placeholder(tf.float32, [None, input_nodes], name="input_x")
    Y = tf.placeholder(tf.float32, [None, output_nodes], name="y_true")
    w1 = tf.Variable(tf.truncated_normal([input_nodes, layer1_nodes], stddev=0.1))
    b1 = tf.Variable(tf.constant(0.1, shape=[layer1_nodes]))
    w2 = tf.Variable(tf.truncated_normal([layer1_nodes, layer2_nodes], stddev=0.1))
    b2 = tf.Variable(tf.constant(0.1, shape=[layer2_nodes]))
    w3 = tf.Variable(tf.truncated_normal([layer2_nodes, output_nodes], stddev=0.1))
    b3 = tf.Variable(tf.constant(0.1, shape=[output_nodes]))
    layer1 = tf.nn.relu(tf.matmul(X, w1) + b1)
    A2 = tf.nn.relu(tf.matmul(layer1, w2) + b2)
    A3 = tf.nn.relu(tf.matmul(A2, w3) + b3)
    y_hat = tf.nn.softmax(A3)
    # y_hat = inference(X, None, w1, b1, w2, b2)
    # global_step = tf.Variable(0, trainable=False)
    # variable_averages = tf.train.ExponentialMovingAverage(mad, global_step)
    # varible_average_op = variable_averages.apply(tf.trainable_variables())
    # y = inference(x, variable_averages, w1, b1, w2, b2)
    cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=A3, labels=Y))
    regularizer = tf.contrib.layers.l2_regularizer(regularization_rate)
    regularization = regularizer(w1) + regularizer(w2) + regularizer(w3)
    loss = cross_entropy + regularization * regularization_rate
    # learning_rate = tf.train.exponential_decay(learning_rate_base, global_step, epchos, learning_rate_decay)
    # train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
    # with tf.control_dependencies([train_step, varible_average_op]):
    #     train_op = tf.no_op(name="train")
    correct_prediction = tf.equal(tf.argmax(y_hat, 1), tf.argmax(Y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    total_loss = []
    val_acc = []
    total_train_acc = []
    x_Xsis = []
    with tf.Session() as sess:
        tf.global_variables_initializer().run()
        for i in range(epochs):
            # x, y = next_batch(X_train, Y_train, batch_size)
            batchs = int(X_train.shape[0] / batch_size + 1)
            loss_e = 0.
            for j in range(batchs):
                batch_x = X_train[j * batch_size:min(X_train.shape[0], j * (batch_size + 1)), :]
                batch_y = Y_train[j * batch_size:min(X_train.shape[0], j * (batch_size + 1)), :]
                sess.run(train_step, feed_dict={X: batch_x, Y: batch_y})
                loss_e += sess.run(loss, feed_dict={X: batch_x, Y: batch_y})
                # train_step.run(feed_dict={X: x, Y: y})
            validate_acc = sess.run(accuracy, feed_dict={X: X_test, Y: Y_test})
            train_acc = sess.run(accuracy, feed_dict={X: X_train, Y: Y_train})
            print("epoches: ", i, "val_acc: ", validate_acc, "train_acc", train_acc)
            total_loss.append(loss_e / batch_size)
            val_acc.append(validate_acc)
            total_train_acc.append(train_acc)
            x_Xsis.append(i)
        validate_acc = sess.run(accuracy, feed_dict={X: X_test, Y: Y_test})
        print("val_acc: ", validate_acc)
    return (x_Xsis, total_loss, total_train_acc, val_acc)


result = train((X_train, Y_train, X_test, Y_test))


def plot_acc(total_train_acc, val_acc, x):
    plt.figure()
    plt.plot(x, total_train_acc, '--', color="red", label="train_acc")
    plt.plot(x, val_acc, color="green", label="val_acc")
    plt.xlabel("Epoches")
    plt.ylabel("acc")
    plt.legend()
    plt.show()
```
Question about Hibernate cache usage
Problem description: while testing Hibernate's second-level cache, I deleted all the second-level-cache settings from ehcache.xml and hibernate.cfg.xml and only configured `<cache usage="read-write" />` in the hbm.xml, yet the query result still appears to be served from a cache, because the SQL is printed only once.

```
package com.vavi.test;

import org.hibernate.Session;

import com.vavi.dao.HibernateSessionFactory;
import com.vavi.pojo.Tuser;

public class Test {
    public static void main(String[] args) {
        Session sess = HibernateSessionFactory.getCurrentSession();
        Tuser user_load = (Tuser) sess.load(com.vavi.pojo.Tuser.class, new Long(1));
        System.out.println(user_load.getName());
        sess.close();

        sess = HibernateSessionFactory.getCurrentSession();
        Tuser user_get = (Tuser) sess.get(com.vavi.pojo.Tuser.class, new Long(1));
        System.out.println(user_get.getName());
        sess.close();
    }
}
```

HibernateSessionFactory was generated by MyEclipse.

**Follow-up:**

```
package com.vavi.test;

/**
 * Conclusion: the Session-level cache is only valid within the current
 * session's lifetime; once the session is closed, the next query still
 * has to hit the database.
 */
import org.hibernate.Session;

import com.vavi.dao.HibernateSessionFactory;
import com.vavi.pojo.Tuser;

public class Test_SessionClose {
    public static void main(String[] args) {
        Session sess = HibernateSessionFactory.getCurrentSession();
        Tuser user_load = (Tuser) sess.load(com.vavi.pojo.Tuser.class, new Long(1));
        System.out.println(user_load.getName());
        sess.close();

        sess = HibernateSessionFactory.getCurrentSession();
        Tuser user_get = (Tuser) sess.get(com.vavi.pojo.Tuser.class, new Long(1));
        System.out.println(user_get.getName());
        sess.close();
    }
}
```

This code is identical to the one above; the only difference is that `<cache usage="read-write" />` is NOT added to Tuser.hbm.xml. This time it prints:

```
Hibernate: select tuser0_.ID as ID0_, tuser0_.NAME as NAME0_0_, tuser0_.AGE as AGE0_0_ from GHJ.TUSER tuser0_ where tuser0_.ID=?
ghj
Hibernate: select tuser0_.ID as ID0_, tuser0_.NAME as NAME0_0_, tuser0_.AGE as AGE0_0_ from GHJ.TUSER tuser0_ where tuser0_.ID=?
ghj
```

which shows the session cache did expire. Following your suggestion I printed `System.out.println("session:" + sess);` and both programs (with or without `<cache usage="read-write" />`) print `session:org.hibernate.impl.SessionImpl(<closed>)`, which doesn't reveal anything. So the key question is what exactly `<cache usage="read-write" />` does; it doesn't seem to be the first-level cache, the second-level cache or the query cache. Thanks in advance for your answers ^^

**Follow-up:** Thanks gotothework for the reply. I know those strategies, but the point is that after adding `<cache usage="read-write" />` the result comes back without hitting the database (see the details above), and I believe that result is not held in the first-level cache, the second-level cache or the query cache. If it isn't, how does Hibernate manage that cached data? Or is my understanding wrong?

**Follow-up:** To gotothework: "Before sess.close() is called the first-level cache stays valid, so the object should come from the first-level cache. A query first checks the first-level cache, then the second-level cache, and only then the database." I understand that; see my second follow-up. The question remains: with everything else unconfigured (no second-level cache, no query cache), merely adding `<cache usage="read-write" />` to Tuser.hbm.xml makes the second query return a result without touching the database.

**Follow-up:** To gotothework: in that situation, which cache is Hibernate using? The first-level cache? But the same program without `<cache usage="read-write" />` in Tuser.hbm.xml still hits the database twice. So this `<cache usage="read-write" />` line behaves very strangely: does adding it somehow preserve the first-level cache data? I tried openSession() as you suggested, together with sess.contains(user_load) and hashCode():

```
package com.vavi.dao;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

//import com.vavi.dao.HibernateSessionFactory;
import com.vavi.pojo.Tuser;

public class SFTest {
    public static void main(String[] args) {
        SessionFactory sf = new Configuration().configure().buildSessionFactory();
        Session sess = sf.openSession();
        System.out.println(sess.hashCode());
        Tuser user_load = (Tuser) sess.load(com.vavi.pojo.Tuser.class, new Long(1));
        System.out.println("sess.contains user_load? " + sess.contains(user_load));
        System.out.println(user_load.getName());
        sess.close();
        System.out.println("sess.contains user_load? " + sess.contains(user_load));

        sess = sf.openSession();
        System.out.println(sess.hashCode());
        Tuser user_get = (Tuser) sess.get(com.vavi.pojo.Tuser.class, new Long(1));
        System.out.println("sess.contains user_load? " + sess.contains(user_load));
        System.out.println("sess.contains user_get? " + sess.contains(user_get));
        System.out.println(user_get.getName());
        sess.close();
        System.out.println("sess.contains user_get? " + sess.contains(user_get));
    }
}
```

The output is:

```
31966667
sess.contains user_load? true
Hibernate: select tuser0_.ID as ID0_, tuser0_.NAME as NAME0_0_, tuser0_.AGE as AGE0_0_ from GHJ.TUSER tuser0_ where tuser0_.ID=?
ghj
sess.contains user_load? false
22375698
sess.contains user_load? false
sess.contains user_get? true
ghj
sess.contains user_get? false
```

So this is odd: after the session is closed, sess.contains(user_load) returns false, yet the second lookup still succeeds without querying the database.

**Follow-up:** Haha, solved. I forgot that Hibernate enables the second-level cache by default and uses Cache provider org.hibernate.cache.EhCacheProvider. What I observed above really was the second-level cache at work:

```
[main] (SettingsFactory.java:209) - Second-level cache: enabled
INFO [main] (SettingsFactory.java:213) - Query cache: disabled
INFO [main] (SettingsFactory.java:321) - Cache provider: org.hibernate.cache.EhCacheProvider
INFO [main] (SettingsFactory.java:228) - Optimize cache for minimal puts: disabled
INFO [main] (SettingsFactory.java:237) - Structured second-level cache entries: disabled
```

Summary:

1. To use the second-level cache, it is enough to add `<cache usage="read-write" />` to Tuser.hbm.xml; everything else falls back to the default configuration file shipped in the Hibernate jars. For advanced use you need your own configuration file.
2. When puzzled by Hibernate behaviour, add `<property name="generate_statistics">true</property>` and turn on debug-level logging.
3. `Session sess = HibernateSessionFactory.getCurrentSession();` still returns a new session instance, and the first-level cache (session cache) dies with session.close(); it has nothing to do with this effect (correct me if my understanding is wrong).
4. There are no inexplicable problems, as my boss likes to announce. Haha, this is my Mid-Autumn Festival present ^^
Training cifar10 in Tensorflow fails with name 'train_data' is not defined
Training cifar10 with a neural network:

```
init = tf.global_variables_initializer()
batch_size = 20
train_steps = 1000

with tf.Session() as sess:
    for i in range(train_steps):
        batch_data, batch_labels = train_data.next_batch(batch_size)
        loss_val, acc_val, _ = sess.run(
            [loss, accuracy, train_op],
            feed_dict={x: batch_data, y: batch_labels})
        if i % 500 == 0:
            print('[Train] Step:%d,loss:%4.5f,acc:%4.5f' % (i, loss_val, acc_val))
```

It fails with name 'train_data' is not defined and I don't know how to fix it.
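The NameError simply means `train_data` was never created before the loop; in the CIFAR-10 tutorials this snippet resembles, it is usually a small wrapper object around pre-loaded arrays that exposes `next_batch`. A minimal sketch of such a wrapper (the class name and the loading step are assumptions, not part of the original code):

```python
import numpy as np

class CifarData:
    """Minimal batching wrapper; data/labels are pre-loaded numpy arrays."""
    def __init__(self, data, labels):
        self._data = data
        self._labels = labels
        self._indicator = 0

    def next_batch(self, batch_size):
        end = self._indicator + batch_size
        if end > len(self._data):  # wrap around and reshuffle
            p = np.random.permutation(len(self._data))
            self._data, self._labels = self._data[p], self._labels[p]
            self._indicator, end = 0, batch_size
        batch = (self._data[self._indicator:end], self._labels[self._indicator:end])
        self._indicator = end
        return batch

# train_data = CifarData(train_images, train_labels)  # arrays loaded elsewhere
```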
tensorflow seeding: why do I get the same sequence even though I did not set a seed?
```
import tensorflow as tf

# Repeatedly running this block with the same graph will generate the same
# sequences of 'a' and 'b'.
a = tf.random_uniform([1])
b = tf.random_normal([1])

print("Session 3")
with tf.Session() as sess3:
    print(sess3.run(a))  # generates 'A1'
    print(sess3.run(a))  # generates 'A2'
    print(sess3.run(b))  # generates 'B1'
    print(sess3.run(b))  # generates 'B2'

print("Session 4")
with tf.Session() as sess4:
    print(sess4.run(a))  # generates 'A3'
    print(sess4.run(a))  # generates 'A4'
    print(sess4.run(b))  # generates 'B1'
    print(sess4.run(b))  # generates 'B2'
```

Output:

```
Session 3
[0.02795756]
[0.7401881]
[1.0417315]
[-1.4342024]
Session 4
[0.02795756]
[0.7401881]
[1.0417315]
[-1.4342024]
```
Loading a trained PB model of a resnet network hits the following error. How can I fix it? Please help
```
2019-11-27 02:18:29 UTC [MainThread ] - /home/mind/app.py[line:121] - INFO: args: Namespace(model_name='serve', model_path='/home/mind/model/1', service_file='/home/mind/model/1/customize_service.py', tf_server_name='127.0.0.1')
2019-11-27 02:18:36.823910: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
Using TensorFlow backend.
[2019-11-27 02:18:37 +0000] [68] [ERROR] Exception in worker process
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
    worker.init_process()
  File "/usr/local/lib/python3.6/site-packages/gunicorn/workers/base.py", line 129, in init_process
    self.load_wsgi()
  File "/usr/local/lib/python3.6/site-packages/gunicorn/workers/base.py", line 138, in load_wsgi
    self.wsgi = self.app.wsgi()
  File "/usr/local/lib/python3.6/site-packages/gunicorn/app/base.py", line 67, in wsgi
    self.callable = self.load()
  File "/usr/local/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 52, in load
    return self.load_wsgiapp()
  File "/usr/local/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 41, in load_wsgiapp
    return util.import_app(self.app_uri)
  File "/usr/local/lib/python3.6/site-packages/gunicorn/util.py", line 350, in import_app
    __import__(module)
  File "/home/mind/app.py", line 145, in
    model_service = class_defs[0](model_name, model_path)
  File "/home/mind/model/1/customize_service.py", line 39, in __init__
    meta_graph_def = tf.saved_model.loader.load(self.sess, [tag_constants.SERVING], self.model_path)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/saved_model/loader_impl.py", line 219, in load
    saver = tf_saver.import_meta_graph(meta_graph_def_to_load, **saver_kwargs)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1955, in import_meta_graph
    **kwargs)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/meta_graph.py", line 743, in import_scoped_meta_graph
    producer_op_list=producer_op_list)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 432, in new_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 460, in import_graph_def
    _RemoveDefaultAttrs(op_dict, producer_op_list, graph_def)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 227, in _RemoveDefaultAttrs
    op_def = op_dict[node.op]
KeyError: 'DivNoNan'
```
InvalidArgumentError: Input to reshape is a tensor with 152000 values, but the requested shape requires a multiple of 576
It runs with no error message, but also produces no output data. Please help!

```
# -*- coding: utf-8 -*-
"""
Created on Fri Oct  4 10:01:03 2019

@author: xxj
"""
import numpy as np
from sklearn import preprocessing
import tensorflow as tf
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import pandas as pd

# Read the CSV file data.
# Reads data from the CSV file and returns a DataFrame collection.
def zc_func_read_csv():
    zc_var_dataframe = pd.read_csv("highway.csv", sep=",")
    # Shuffle the data set. Data files may be sorted by some order,
    # which would affect how we process the data.
    zc_var_dataframe = zc_var_dataframe.reindex(np.random.permutation(zc_var_dataframe.index))
    return zc_var_dataframe

# Preprocess the feature values
def preprocess_features(highway):
    processed_features = highway[
        ["line1","line2","line3","line4","line5",
         "brige1","brige2","brige3","brige4","brige5",
         "tunnel1","tunnel2","tunnel3","tunnel4","tunnel5",
         "inter1","inter2","inter3","inter4","inter5",
         "econmic1","econmic2","econmic3","econmic4","econmic5"]
    ]
    return processed_features

# Preprocess the labels
highway = zc_func_read_csv()
x = preprocess_features(highway)
outtarget = np.array(pd.read_csv("highway1.csv"))
y = np.array(outtarget[:, [0]])
print('##################################################################')

# Random split
train_x_disorder, test_x_disorder, train_y_disorder, test_y_disorder = train_test_split(
    x, y, train_size=0.8, random_state=33)

# Standardize the data
ss_x = preprocessing.StandardScaler()
train_x_disorder = ss_x.fit_transform(train_x_disorder)
test_x_disorder = ss_x.transform(test_x_disorder)
ss_y = preprocessing.StandardScaler()
train_y_disorder = ss_y.fit_transform(train_y_disorder.reshape(-1, 1))
test_y_disorder = ss_y.transform(test_y_disorder.reshape(-1, 1))

# Weight matrix
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

# Bias
def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

# Convolution (the "thickening" step)
def conv2d(x, W):
    # stride [1, x_movement, y_movement, 1]; x_movement/y_movement are the step sizes
    # Must have strides[0] = strides[3] = 1; padding='SAME' keeps height/width unchanged
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

# Pooling halves height and width
def max_pool_2x2(x):
    # stride [1, x_movement, y_movement, 1]
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

# define placeholder for inputs to network
xs = tf.placeholder(tf.float32, [None, 25])   # raw data dimension: 25
ys = tf.placeholder(tf.float32, [None, 1])    # output dimension: 1
keep_prob = tf.placeholder(tf.float32)        # dropout keep ratio
x_image = tf.reshape(xs, [-1, 5, 5, 1])       # the 25 raw values become a 5x5 two-dimensional image

## conv1 layer ## first convolutional layer
W_conv1 = weight_variable([2, 2, 1, 32])  # patch 2x2, in size 1, out size 32; each pixel becomes 32
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)  # height/width unchanged, depth 32
#h_pool1 = max_pool_2x2(h_conv1)  # would halve height and width

## conv2 layer ## second convolutional layer
W_conv2 = weight_variable([2, 2, 32, 64])  # patch 2x2, in size 32, out size 64
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_conv1, W_conv2) + b_conv2)  # takes the first layer's output

## fc1 layer ## fully connected layer
W_fc1 = weight_variable([3*3*64, 512])  # flatten the 3D feature map into a 512-long vector
b_fc1 = bias_variable([512])
h_pool2_flat = tf.reshape(h_conv2, [-1, 3*3*64])  # flatten the 3D feature map into 1D
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)  # drop a (1 - keep_prob) fraction of the elements

## fc2 layer ## fully connected layer
W_fc2 = weight_variable([512, 1])  # compress the 512-long vector to length 1
b_fc2 = bias_variable([1])         # bias
# final prediction
prediction = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
#prediction = tf.nn.relu(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)

# The gap between prediction and y, computed simply with square(), sum() and mean()
cross_entropy = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
# learning rate 0.01; minimize(loss) reduces the loss
train_step = tf.train.AdamOptimizer(0.01).minimize(cross_entropy)

sess = tf.Session()
# important step
# tf.initialize_all_variables() no longer valid from 2017-03-02 if using tensorflow >= 0.12
sess.run(tf.global_variables_initializer())

# training loop
for i in range(100):
    sess.run(train_step, feed_dict={xs: train_x_disorder, ys: train_y_disorder, keep_prob: 0.7})
    print(i, 'loss =', sess.run(cross_entropy, feed_dict={xs: train_x_disorder, ys: train_y_disorder, keep_prob: 1.0}))  # print the loss value

# visualization
prediction_value = sess.run(prediction, feed_dict={xs: test_x_disorder, ys: test_y_disorder, keep_prob: 1.0})

### plotting ###
fig = plt.figure(figsize=(20, 3))  # dpi sets the figure resolution in pixels per inch (default 80)
axes = fig.add_subplot(1, 1, 1)
line1, = axes.plot(range(len(prediction_value)), prediction_value, 'b--', label='cnn', linewidth=2)
#line2, = axes.plot(range(len(gbr_pridict)), gbr_pridict, 'r--', label='tuned parameters')
line3, = axes.plot(range(len(test_y_disorder)), test_y_disorder, 'g', label='actual')
axes.grid()
fig.tight_layout()
#plt.legend(handles=[line1, line2, line3])
plt.legend(handles=[line1, line3])
plt.title('Convolutional neural network')
plt.show()
```
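The error in the title is arithmetic: with both pooling layers unused and padding='SAME', h_conv2 has shape [N, 5, 5, 64], i.e. 5*5*64 = 1600 values per sample (the 152000-value tensor is a batch of 95 such samples), while the reshape asks for multiples of 3*3*64 = 576. A sketch of the matching fix:

```python
# Sketch: h_conv2 is [N, 5, 5, 64] because no pooling is applied and
# padding='SAME' keeps the 5x5 spatial size, so flatten to 5*5*64 = 1600.
W_fc1 = weight_variable([5 * 5 * 64, 512])
b_fc1 = bias_variable([512])
h_pool2_flat = tf.reshape(h_conv2, [-1, 5 * 5 * 64])
```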
tensorflow: loading a trained model for prediction gives different results for the same image????
I've recently been running DeepLab v1. The training program runs fine, but when I use the trained model for inference, the same image produces different predictions on different runs. Does anyone know what's going on? I load the model with saver.restore(). The code is as follows:
```
def main():
    """Create the model and start the inference process."""
    args = get_arguments()

    # Prepare image.
    img = tf.image.decode_jpeg(tf.read_file(args.img_path), channels=3)
    # Convert RGB to BGR.
    img_r, img_g, img_b = tf.split(value=img, num_or_size_splits=3, axis=2)
    img = tf.cast(tf.concat(axis=2, values=[img_b, img_g, img_r]), dtype=tf.float32)
    # Extract mean.
    img -= IMG_MEAN

    # Create network.
    net = DeepLabLFOVModel()

    # Which variables to load.
    trainable = tf.trainable_variables()

    # Predictions.
    pred = net.preds(tf.expand_dims(img, dim=0))

    # Set up TF session and initialize variables.
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    sess = tf.Session(config=config)
    # init = tf.global_variables_initializer()
    sess.run(tf.global_variables_initializer())

    # Load weights.
    saver = tf.train.Saver(var_list=trainable)
    load(saver, sess, args.model_weights)

    # Perform inference.
    preds = sess.run([pred])
    print(preds)

    if not os.path.exists(args.save_dir):
        os.makedirs(args.save_dir)
    msk = decode_labels(np.array(preds)[0, 0, :, :, 0])
    im = Image.fromarray(msk)
    im.save(args.save_dir + 'mask1.png')
    print('The output file has been saved to {}'.format(args.save_dir + 'mask.png'))


if __name__ == '__main__':
    main()
```
where `load` is:
```
def load(saver, sess, ckpt_path):
    '''Load trained weights.

    Args:
      saver: TensorFlow saver object.
      sess: TensorFlow session.
      ckpt_path: path to checkpoint file with parameters.
    '''
    ckpt = tf.train.get_checkpoint_state(ckpt_path)
    if ckpt and ckpt.model_checkpoint_path:
        saver.restore(sess, ckpt.model_checkpoint_path)
        print("Restored model parameters from {}".format(ckpt_path))
```
The DeepLabLFOVModel class is as follows:
```
class DeepLabLFOVModel(object):
    """DeepLab-LargeFOV model with atrous convolution and bilinear upsampling.

    This class implements a multi-layer convolutional neural network for
    semantic image segmentation task. This is the same as the model described
    in this paper: https://arxiv.org/abs/1412.7062 - please look there for
    details.
    """

    def __init__(self, weights_path=None):
        """Create the model.

        Args:
          weights_path: the path to the cpkt file with dictionary of weights
                        from .caffemodel.
        """
        self.variables = self._create_variables(weights_path)

    def _create_variables(self, weights_path):
        """Create all variables used by the network.

        This allows to share them between multiple calls to the loss function.

        Args:
          weights_path: the path to the ckpt file with dictionary of weights
                        from .caffemodel. If none, initialise all variables
                        randomly.

        Returns:
          A dictionary with all variables.
        """
        var = list()
        index = 0

        if weights_path is not None:
            with open(weights_path, "rb") as f:
                weights = cPickle.load(f)  # Load pre-trained weights.
                for name, shape in net_skeleton:
                    var.append(tf.Variable(weights[name], name=name))
                del weights
        else:
            # Initialise all weights randomly with the Xavier scheme, and
            # all biases to 0's.
            for name, shape in net_skeleton:
                if "/w" in name:  # Weight filter.
                    w = create_variable(name, list(shape))
                    var.append(w)
                else:
                    b = create_bias_variable(name, list(shape))
                    var.append(b)
        return var

    def _create_network(self, input_batch, keep_prob):
        """Construct DeepLab-LargeFOV network.

        Args:
          input_batch: batch of pre-processed images.
          keep_prob: probability of keeping neurons intact.

        Returns:
          A downsampled segmentation mask.
        """
        current = input_batch

        v_idx = 0  # Index variable.
        # Last block is the classification layer.
        for b_idx in xrange(len(dilations) - 1):
            for l_idx, dilation in enumerate(dilations[b_idx]):
                w = self.variables[v_idx * 2]
                b = self.variables[v_idx * 2 + 1]
                if dilation == 1:
                    conv = tf.nn.conv2d(current, w, strides=[1, 1, 1, 1],
                                        padding='SAME')
                else:
                    conv = tf.nn.atrous_conv2d(current, w, dilation,
                                               padding='SAME')
                current = tf.nn.relu(tf.nn.bias_add(conv, b))
                v_idx += 1

            # Optional pooling and dropout after each block.
            if b_idx < 3:
                current = tf.nn.max_pool(current,
                                         ksize=[1, ks, ks, 1],
                                         strides=[1, 2, 2, 1],
                                         padding='SAME')
            elif b_idx == 3:
                current = tf.nn.max_pool(current,
                                         ksize=[1, ks, ks, 1],
                                         strides=[1, 1, 1, 1],
                                         padding='SAME')
            elif b_idx == 4:
                current = tf.nn.max_pool(current,
                                         ksize=[1, ks, ks, 1],
                                         strides=[1, 1, 1, 1],
                                         padding='SAME')
                current = tf.nn.avg_pool(current,
                                         ksize=[1, ks, ks, 1],
                                         strides=[1, 1, 1, 1],
                                         padding='SAME')
            elif b_idx <= 6:
                current = tf.nn.dropout(current, keep_prob=keep_prob)

        # Classification layer; no ReLU.
        # w = self.variables[v_idx * 2]
        w = create_variable(name='w', shape=[1, 1, 1024, n_classes])
        # b = self.variables[v_idx * 2 + 1]
        b = create_bias_variable(name='b', shape=[n_classes])
        conv = tf.nn.conv2d(current, w, strides=[1, 1, 1, 1], padding='SAME')
        current = tf.nn.bias_add(conv, b)

        return current

    def prepare_label(self, input_batch, new_size):
        """Resize masks and perform one-hot encoding.

        Args:
          input_batch: input tensor of shape [batch_size H W 1].
          new_size: a tensor with new height and width.

        Returns:
          Outputs a tensor of shape [batch_size h w 18] with last dimension
          comprised of 0's and 1's only.
        """
        with tf.name_scope('label_encode'):
            # As labels are integer numbers, need to use NN interp.
            input_batch = tf.image.resize_nearest_neighbor(input_batch, new_size)
            # Reducing the channel dimension.
            input_batch = tf.squeeze(input_batch, squeeze_dims=[3])
            input_batch = tf.one_hot(input_batch, depth=n_classes)
        return input_batch

    def preds(self, input_batch):
        """Create the network and run inference on the input batch.

        Args:
          input_batch: batch of pre-processed images.

        Returns:
          Argmax over the predictions of the network of the same shape
          as the input.
        """
        raw_output = self._create_network(tf.cast(input_batch, tf.float32),
                                          keep_prob=tf.constant(1.0))
        raw_output = tf.image.resize_bilinear(raw_output,
                                              tf.shape(input_batch)[1:3, ])
        raw_output = tf.argmax(raw_output, dimension=3)
        raw_output = tf.expand_dims(raw_output, dim=3)  # Create 4D-tensor.
        return tf.cast(raw_output, tf.uint8)

    def loss(self, img_batch, label_batch):
        """Create the network, run inference on the input batch and compute loss.

        Args:
          input_batch: batch of pre-processed images.

        Returns:
          Pixel-wise softmax loss.
        """
        raw_output = self._create_network(tf.cast(img_batch, tf.float32),
                                          keep_prob=tf.constant(0.5))
        prediction = tf.reshape(raw_output, [-1, n_classes])

        # Need to resize labels and convert using one-hot encoding.
        label_batch = self.prepare_label(label_batch,
                                         tf.stack(raw_output.get_shape()[1:3]))
        gt = tf.reshape(label_batch, [-1, n_classes])

        # Pixel-wise softmax loss.
        loss = tf.nn.softmax_cross_entropy_with_logits(logits=prediction,
                                                       labels=gt)
        reduced_loss = tf.reduce_mean(loss)

        return reduced_loss
```
Loading the model should be fine in principle, so why do the results differ?
Input images:
![图片说明](https://img-ask.csdn.net/upload/201911/15/1573810836_83106.jpg)
![图片说明](https://img-ask.csdn.net/upload/201911/15/1573810850_924663.png)
Predicted results:
![图片说明](https://img-ask.csdn.net/upload/201911/15/1573810884_985680.png)
![图片说明](https://img-ask.csdn.net/upload/201911/15/1573810904_577649.png)
The two runs disagree with each other, and neither matches the results computed when the model was saved. I'm using this code from GitHub: https://github.com/minar09/DeepLab-LFOV-TensorFlow
This is urgent; does anyone know what's wrong?
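One plausible cause worth ruling out, visible in the posted code itself: in `main()`, `trainable = tf.trainable_variables()` is captured *before* `net.preds(...)` builds the graph, and `_create_network()` then creates brand-new classification-layer variables (`create_variable(name='w', ...)`, `create_bias_variable(name='b', ...)`). Those variables are therefore missing from the Saver's `var_list`, never restored, and re-randomized by `tf.global_variables_initializer()` on every run, which would produce exactly this symptom. A diagnostic sketch to check which variables the checkpoint actually covers (`ckpt_dir` is a hypothetical name standing in for whatever is passed as `args.model_weights`):

```python
# A diagnostic sketch, assuming a TF 1.x graph has already been built the
# way main() builds it, and ckpt_dir points at the checkpoint directory.
import tensorflow as tf

ckpt = tf.train.get_checkpoint_state(ckpt_dir)
reader = tf.train.NewCheckpointReader(ckpt.model_checkpoint_path)
saved_names = set(reader.get_variable_to_shape_map().keys())

graph_names = {v.op.name for v in tf.global_variables()}
# anything printed here is re-initialized randomly on every run,
# which makes repeated inference non-deterministic
print("in graph but not in checkpoint:", graph_names - saved_names)
```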
TensorFlow training fails with an Invalid argument error
## 1. Problem
The program fails with: Invalid argument: You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [?,24]
## 2. Code
```
import time
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt

# import dataset
input_Dir = 'E:/data/input_H.csv'
output_Dir = 'E:/data/output_H.csv'
x_data = pd.read_csv(input_Dir, header=None)
y_data = pd.read_csv(output_Dir, header=None)
x_data = x_data.values
y_data = y_data.values
x_data = x_data.astype('float32')
y_data = y_data.astype('float32')
print("DATASET READY")

#
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x_data, y_data, test_size=0.2, random_state=1)
row, column = x_train.shape
row = float(row)

# define structure of neural network
n_hidden_1 = 250
n_hidden_2 = 128
n_input = 250
n_classes = 24

# initialize parameters
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])
keep_prob = tf.placeholder(tf.float32)
stddev = 0.1
weights = {
    'w1': tf.Variable(tf.random_normal([n_input, n_hidden_1], stddev=stddev)),
    'w2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2], stddev=stddev)),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes], stddev=stddev))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1], stddev=stddev)),
    'b2': tf.Variable(tf.random_normal([n_hidden_2], stddev=stddev)),
    'out': tf.Variable(tf.random_normal([n_classes], stddev=stddev))
}
print("NETWORK READY")

# forward propagation
def multilayer_perceptron(_X, _weights, _biases):
    layer_1 = tf.nn.leaky_relu(tf.add(tf.matmul(_X, _weights['w1']), _biases['b1']))
    layer_2 = tf.nn.leaky_relu(tf.add(tf.matmul(layer_1, _weights['w2']), _biases['b2']))
    return (tf.add(tf.matmul(layer_2, _weights['out']), _biases['out']))

#
pred = multilayer_perceptron(x, weights, biases)
cost = tf.reduce_mean(tf.square(y - pred))
optm = tf.train.GradientDescentOptimizer(learning_rate=0.03).minimize(cost)
init = tf.global_variables_initializer()
print("FUNCTIONS READY")

n_epochs = 100000
batch_size = 512
n_batches = np.int(np.ceil(row / batch_size))

def fetch_batch(epoch, batch_index, batch_size):
    # fetch a random mini-batch
    np.random.seed(epoch * n_batches + batch_index)
    indices = np.random.randint(row, size=batch_size)
    return x_train[indices], y_train[indices]

iter = 10000
sess = tf.Session()
sess.run(tf.global_variables_initializer())
feeds_test = {x: x_test, y: y_test, keep_prob: 1}
for epoch in range(n_epochs):  # total number of epochs
    for batch_index in range(n_batches):
        x_batch, y_batch = fetch_batch(epoch, batch_index, batch_size)
        feeds_train = {x: x_batch, y: y_batch, keep_prob: 1}
        sess.run(optm, feed_dict=feeds_train)
    print("EPOCH %d HAS FINISHED" % (epoch))
    print("COST %d :" % (epoch))
    print(sess.run(cost), feed_dict=feeds_train)
    print("\n")
sess.close()
print("FINISHED")
```
## 3. Error message
Traceback (most recent call last):
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1356, in _do_call
return fn(*args)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1341, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1429, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [?,24]
[[{{node Placeholder_1}}]]
[[Mean/_7]]
(1) Invalid argument: You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [?,24]
[[{{node Placeholder_1}}]]
0 successful operations. 0 derived errors ignored.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\IPython\core\interactiveshell.py", line 3296, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-762bc58e4306>", line 1, in <module>
runfile('C:/Users/Administrator/Desktop/main/demo3.py', wdir='C:/Users/Administrator/Desktop/main')
File "E:\Program Files\PyCharm 2019.1.3\helpers\pydev\_pydev_bundle\pydev_umd.py", line 197, in runfile
Where exactly is the problem?
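Judging from the code as posted, the error is consistent with one specific line: in the epoch loop, `feed_dict=feeds_train` is passed to `print()` rather than to `sess.run()`, so `cost` is evaluated with no feeds at all and the `y` placeholder (shape `[?, 24]`, which matches `Placeholder_1` in the message) goes unfed. A minimal sketch of the corrected call:

```python
# As posted, feed_dict is an argument to print(), not to sess.run(),
# so cost is evaluated with nothing fed:
#     print(sess.run(cost), feed_dict=feeds_train)
# Moving feed_dict inside sess.run() feeds x and y when cost is evaluated:
print("COST %d :" % epoch, sess.run(cost, feed_dict=feeds_train))
```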
Experts, your time to shine: what does each function and the methods inside it in the code below mean, and what are they used for? Who can explain in detail? Humbly asking for guidance.
```
package com.sysgrrj.module.ZheJiuSheZhi.dao;

import com.sysgrrj.module.ZheJiuSheZhi.valueobject.ZheJiuSheZhi;
import org.hibernate.*;
import org.springframework.orm.hibernate3.support.HibernateDaoSupport;

import java.util.List;

public class ZheJiuSheZhiDao extends HibernateDaoSupport {

    // insert or update one record; returns the entity's id
    public int save(ZheJiuSheZhi obj) {
        Session sess = this.getSessionFactory().openSession();
        Transaction tran = sess.beginTransaction();
        try {
            sess.saveOrUpdate(obj);   // INSERT if the object is new, UPDATE otherwise
            tran.commit();
        } catch (HibernateException e) {
            tran.rollback();          // undo the transaction on failure
        } finally {
            sess.close();             // always release the session
        }
        return obj.getId();
    }

    // delete one record
    public void delete(ZheJiuSheZhi obj) {
        Session sess = this.getSessionFactory().openSession();
        Transaction tran = sess.beginTransaction();
        try {
            sess.delete(obj);
            tran.commit();
        } catch (HibernateException e) {
            tran.rollback();
        } finally {
            sess.close();
        }
    }

    // delete several records by a comma-separated id list, e.g. "1,2,3"
    public void deleteByIds(String ids) {
        Session sess = this.getSessionFactory().openSession();
        Transaction tran = sess.beginTransaction();
        try {
            String hql = "Delete from ZheJiuSheZhi where id in (" + ids + ")";
            Query query = sess.createQuery(hql);
            query.executeUpdate();    // bulk HQL delete; no entities are loaded
            tran.commit();
        } catch (Exception e) {
            e.printStackTrace();
            tran.rollback();
        } finally {
            sess.close();
        }
    }

    // load one record by primary key (null if not found)
    public ZheJiuSheZhi get(int id) {
        Session sess = this.getSessionFactory().openSession();
        try {
            return (ZheJiuSheZhi) sess.get(ZheJiuSheZhi.class, id);
        } finally {
            sess.close();
        }
    }

    // run "From ZheJiuSheZhi <where> order by id desc" and return the result list
    private List<ZheJiuSheZhi> findAll(String where) {
        Session sess = this.getSessionFactory().openSession();
        try {
            Query query = sess.createQuery(" From ZheJiuSheZhi " + where + " order by id desc ");
            return query.list();
        } finally {
            sess.close();
        }
    }

    // all records for the given shengChanXianID
    public List<ZheJiuSheZhi> getList(int shengChanXianID) {
        return this.findAll(" where shengChanXianID=" + shengChanXianID);
    }
}
```
TensorFlow RNN LSTM code doesn't run correctly?
The error is ValueError: None values not supported, raised at the cross_entropy line. Thanks, everyone.
```
# 7.2 RNN
import tensorflow as tf
# tf.reset_default_graph()
from tensorflow.examples.tutorials.mnist import input_data

# load the dataset
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# input images are 28*28
n_inputs = 28      # one row of 28 pixels is fed per step
max_time = 28      # 28 rows in total
lstm_size = 100    # hidden units
n_classes = 10     # 10 classes
batch_size = 50    # 50 samples per batch
n_batch = mnist.train.num_examples // batch_size  # total number of batches

# None here would mean the first dimension can be any length
x = tf.placeholder(tf.float32, [batch_size, 784])
# correct labels
y = tf.placeholder(tf.float32, [batch_size, 10])

# initialize weights
weights = tf.Variable(tf.truncated_normal([lstm_size, n_classes], stddev=0.1))
# initialize biases
biases = tf.Variable(tf.constant(0.1, shape=[n_classes]))

# define the RNN
def RNN(X, weights, biases):
    # inputs = [batch_size, max_time, n_inputs]
    inputs = tf.reshape(X, [-1, max_time, n_inputs])
    # basic LSTM cell
    lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(lstm_size)
    # final_state[0] is the cell state
    # final_state[1] is the hidden state
    outputs, final_state = tf.nn.dynamic_rnn(lstm_cell, inputs, dtype=tf.float32)
    results = tf.nn.softmax(tf.matmul(final_state[1], weights) + biases)

# compute the RNN output
prediction = RNN(x, weights, biases)
# loss
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=prediction))
# optimize with AdamOptimizer
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
# store the comparison results in a boolean list
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))
# accuracy
accuracy = tf.reduce_mean(tf.cast(correct_precdition, tf.float32))
# initialize
init = tf.global_variable_initializer()

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(6):
        for batch in range(n_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys})
        acc = sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels})
        print('Iter' + str(epoch) + ',Testing Accuracy = ' + str(acc))
```
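Reading the code as posted, the `None` is explained by `RNN()` itself: the function computes `results` but never returns it, so `prediction` is `None` by the time it reaches `softmax_cross_entropy_with_logits_v2`, which is exactly what the ValueError complains about. A minimal sketch of the fix, which also corrects two typos that will surface next (and, as a side note, returns raw logits rather than a softmax output, since this loss applies softmax internally):

```python
def RNN(X, weights, biases):
    inputs = tf.reshape(X, [-1, max_time, n_inputs])
    lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(lstm_size)
    outputs, final_state = tf.nn.dynamic_rnn(lstm_cell, inputs, dtype=tf.float32)
    # the missing statement: without this return, prediction is None
    return tf.matmul(final_state[1], weights) + biases  # raw logits; the loss softmaxes internally

# typos that will bite once the None is fixed:
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))  # was correct_precdition
init = tf.global_variables_initializer()                            # was tf.global_variable_initializer
```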
Found DQN neural-network code online; it runs, but has no model-loading part
The code runs, but it has no code for loading the saved model. I searched online for a while and couldn't find a tutorial, and the loading code I wrote myself doesn't work properly. Here is the original code:
```
import pygame
import random
from pygame.locals import *
import numpy as np
from collections import deque
import tensorflow as tf
import cv2

BLACK = (0, 0, 0)
WHITE = (255, 255, 255)

SCREEN_SIZE = [320, 400]
BAR_SIZE = [50, 5]
BALL_SIZE = [15, 15]

# outputs of the neural network
MOVE_STAY = [1, 0, 0, 0]
MOVE_LEFT = [0, 1, 0, 0]
MOVE_RIGHT = [0, 0, 1, 0]
MOVE_RIGHT1 = [0, 0, 0, 1]

class Game(object):
    def __init__(self):
        pygame.init()
        self.clock = pygame.time.Clock()
        self.screen = pygame.display.set_mode(SCREEN_SIZE)
        pygame.display.set_caption('Simple Game')

        self.ball_pos_x = SCREEN_SIZE[0] // 2 - BALL_SIZE[0] / 2
        self.ball_pos_y = SCREEN_SIZE[1] // 2 - BALL_SIZE[1] / 2

        self.ball_dir_x = -1  # -1 = left 1 = right
        self.ball_dir_y = -1  # -1 = up   1 = down
        self.ball_pos = pygame.Rect(self.ball_pos_x, self.ball_pos_y, BALL_SIZE[0], BALL_SIZE[1])

        self.bar_pos_x = SCREEN_SIZE[0] // 2 - BAR_SIZE[0] // 2
        self.bar_pos = pygame.Rect(self.bar_pos_x, SCREEN_SIZE[1] - BAR_SIZE[1], BAR_SIZE[0], BAR_SIZE[1])

    # action is MOVE_STAY, MOVE_LEFT, MOVE_RIGHT
    # the AI moves the bar; returns the game screen pixels and the reward
    # (pixels -> reward -> reinforce moving the bar toward higher reward)
    def step(self, action):
        if action == MOVE_LEFT:
            self.bar_pos_x = self.bar_pos_x - 2
        elif action == MOVE_RIGHT:
            self.bar_pos_x = self.bar_pos_x + 2
        elif action == MOVE_RIGHT1:
            self.bar_pos_x = self.bar_pos_x + 1
        else:
            pass
        if self.bar_pos_x < 0:
            self.bar_pos_x = 0
        if self.bar_pos_x > SCREEN_SIZE[0] - BAR_SIZE[0]:
            self.bar_pos_x = SCREEN_SIZE[0] - BAR_SIZE[0]

        self.screen.fill(BLACK)
        self.bar_pos.left = self.bar_pos_x
        pygame.draw.rect(self.screen, WHITE, self.bar_pos)

        self.ball_pos.left += self.ball_dir_x * 2
        self.ball_pos.bottom += self.ball_dir_y * 3
        pygame.draw.rect(self.screen, WHITE, self.ball_pos)

        if self.ball_pos.top <= 0 or self.ball_pos.bottom >= (SCREEN_SIZE[1] - BAR_SIZE[1] + 1):
            self.ball_dir_y = self.ball_dir_y * -1
        if self.ball_pos.left <= 0 or self.ball_pos.right >= (SCREEN_SIZE[0]):
            self.ball_dir_x = self.ball_dir_x * -1

        reward = 0
        if self.bar_pos.top <= self.ball_pos.bottom and (self.bar_pos.left < self.ball_pos.right and self.bar_pos.right > self.ball_pos.left):
            reward = 1    # reward for a hit
        elif self.bar_pos.top <= self.ball_pos.bottom and (self.bar_pos.left > self.ball_pos.right or self.bar_pos.right < self.ball_pos.left):
            reward = -1   # penalty for a miss

        # grab the game screen pixels
        screen_image = pygame.surfarray.array3d(pygame.display.get_surface())
        # np.save(r'C:\Users\Administrator\Desktop\game\model\112454.npy', screen_image)
        pygame.display.update()
        # return the screen pixels and the corresponding reward
        return reward, screen_image

# discount factor used when building targets
LEARNING_RATE = 0.99
# epsilon schedule
INITIAL_EPSILON = 1.0
FINAL_EPSILON = 0.05
# observation/exploration step counts
EXPLORE = 500000
OBSERVE = 50000
# replay memory size
REPLAY_MEMORY = 500000

BATCH = 100

output = 4  # number of output neurons, one per action
input_image = tf.placeholder("float", [None, 80, 100, 4])  # game pixels
action = tf.placeholder("float", [None, output])           # actions

# define the CNN; see http://blog.topspeedsnail.com/archives/10451
def convolutional_neural_network(input_image):
    weights = {'w_conv1': tf.Variable(tf.zeros([8, 8, 4, 32])),
               'w_conv2': tf.Variable(tf.zeros([4, 4, 32, 64])),
               'w_conv3': tf.Variable(tf.zeros([3, 3, 64, 64])),
               'w_fc4': tf.Variable(tf.zeros([3456, 784])),
               'w_out': tf.Variable(tf.zeros([784, output]))}

    biases = {'b_conv1': tf.Variable(tf.zeros([32])),
              'b_conv2': tf.Variable(tf.zeros([64])),
              'b_conv3': tf.Variable(tf.zeros([64])),
              'b_fc4': tf.Variable(tf.zeros([784])),
              'b_out': tf.Variable(tf.zeros([output]))}

    conv1 = tf.nn.relu(tf.nn.conv2d(input_image, weights['w_conv1'], strides=[1, 4, 4, 1], padding="VALID") + biases['b_conv1'])
    conv2 = tf.nn.relu(tf.nn.conv2d(conv1, weights['w_conv2'], strides=[1, 2, 2, 1], padding="VALID") + biases['b_conv2'])
    conv3 = tf.nn.relu(tf.nn.conv2d(conv2, weights['w_conv3'], strides=[1, 1, 1, 1], padding="VALID") + biases['b_conv3'])
    conv3_flat = tf.reshape(conv3, [-1, 3456])
    fc4 = tf.nn.relu(tf.matmul(conv3_flat, weights['w_fc4']) + biases['b_fc4'])

    output_layer = tf.matmul(fc4, weights['w_out']) + biases['b_out']
    return output_layer

# intro to deep reinforcement learning:
# https://www.nervanasys.com/demystifying-deep-reinforcement-learning/
# train the network
def train_neural_network(input_image):
    predict_action = convolutional_neural_network(input_image)

    argmax = tf.placeholder("float", [None, output])
    gt = tf.placeholder("float", [None])

    action = tf.reduce_sum(tf.multiply(predict_action, argmax), reduction_indices=1)
    cost = tf.reduce_mean(tf.square(action - gt))
    optimizer = tf.train.AdamOptimizer(1e-6).minimize(cost)

    game = Game()
    D = deque()

    _, image = game.step(MOVE_STAY)
    # convert to grayscale
    image = cv2.cvtColor(cv2.resize(image, (100, 80)), cv2.COLOR_BGR2GRAY)
    # binarize
    ret, image = cv2.threshold(image, 1, 255, cv2.THRESH_BINARY)
    input_image_data = np.stack((image, image, image, image), axis=2)

    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        saver = tf.train.Saver()

        n = 0
        epsilon = INITIAL_EPSILON
        while True:
            action_t = predict_action.eval(feed_dict={input_image: [input_image_data]})[0]

            argmax_t = np.zeros([output], dtype=np.int)
            if (random.random() <= INITIAL_EPSILON):
                maxIndex = random.randrange(output)
            else:
                maxIndex = np.argmax(action_t)
            argmax_t[maxIndex] = 1
            if epsilon > FINAL_EPSILON:
                epsilon -= (INITIAL_EPSILON - FINAL_EPSILON) / EXPLORE

            for event in pygame.event.get():  # macOS needs the event loop, otherwise the window stays white
                if event.type == QUIT:
                    pygame.quit()
                    sys.exit()

            reward, image = game.step(list(argmax_t))

            image = cv2.cvtColor(cv2.resize(image, (100, 80)), cv2.COLOR_BGR2GRAY)
            ret, image = cv2.threshold(image, 1, 255, cv2.THRESH_BINARY)
            image = np.reshape(image, (80, 100, 1))
            input_image_data1 = np.append(image, input_image_data[:, :, 0:3], axis=2)

            D.append((input_image_data, argmax_t, reward, input_image_data1))

            if len(D) > REPLAY_MEMORY:
                D.popleft()

            if n > OBSERVE:
                minibatch = random.sample(D, BATCH)
                input_image_data_batch = [d[0] for d in minibatch]
                argmax_batch = [d[1] for d in minibatch]
                reward_batch = [d[2] for d in minibatch]
                input_image_data1_batch = [d[3] for d in minibatch]

                gt_batch = []
                out_batch = predict_action.eval(feed_dict={input_image: input_image_data1_batch})

                for i in range(0, len(minibatch)):
                    gt_batch.append(reward_batch[i] + LEARNING_RATE * np.max(out_batch[i]))

                optimizer.run(feed_dict={gt: gt_batch, argmax: argmax_batch, input_image: input_image_data_batch})

            input_image_data = input_image_data1
            n = n + 1

            if n % 100 == 0:
                saver.save(sess, 'D:/lolAI/model/game', global_step=n)  # save the model
                print(n, "epsilon:", epsilon, " ", "action:", maxIndex, " ", "reward:", reward)

train_neural_network(input_image)
```
Here is the model-loading code I wrote following a tutorial:
```
import tensorflow as tf

tf.reset_default_graph()
with tf.Session() as sess:
    new_saver = tf.train.import_meta_graph('D:/lolAI/model/game-400.meta')
    new_saver.restore(sess, tf.train.latest_checkpoint('D:/lolAI/model'))
    print(sess.run(tf.initialize_all_variables()))
```
I don't fully understand the code yet; I'd appreciate advice from the experts.