Structured Streaming job reports a missing temporary file after running for a while. What is this temporary file, and what is it for?
Job aborted due to stage failure: Task 1 in stage 9.0 failed 4 times, most recent failure: Lost task 1.3 in stage 9.0 (TID 1018, 34.55.0.164, executor 0): java.lang.IllegalStateException: Error reading delta file /tmp/temporary-01933c45-4657-47d1-a0ab-651476698d08/state/0/1/1.delta of HDFSStateStoreProvider[id = (op=0, part=1), dir = /tmp/temporary-01933c45-4657-47d1-a0ab-651476698d08/state/0/1]: /tmp/temporary-01933c45-4657-47d1-a0ab-651476698d08/state/0/1/1.delta does not exist
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$updateFromDeltaFile(HDFSBackedStateStoreProvider.scala:410)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$$anonfun$org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$loadMap$1$$anonfun$6.apply(HDFSBackedStateStoreProvider.scala:362)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$$anonfun$org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$loadMap$1$$anonfun$6.apply(HDFSBackedStateStoreProvider.scala:359)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$$anonfun$org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$loadMap$1.apply(HDFSBackedStateStoreProvider.scala:359)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$$anonfun$org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$loadMap$1.apply(HDFSBackedStateStoreProvider.scala:358)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$loadMap(HDFSBackedStateStoreProvider.scala:358)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.getStore(HDFSBackedStateStoreProvider.scala:265)
    at org.apache.spark.sql.execution.streaming.state.StateStore$.get(StateStore.scala:200)
    at org.apache.spark.sql.execution.streaming.state.StateStoreRDD.compute(StateStoreRDD.scala:61)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:108)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.FileNotFoundException: File /tmp/temporary-01933c45-4657-47d1-a0ab-651476698d08/state/0/1/1.delta does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:611)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:142)
    at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:346)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:769)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$updateFromDeltaFile(HDFSBackedStateStoreProvider.scala:407)
    ... 21 more

After running for a while, the Structured Streaming job reports that this temporary file does not exist. I would like to know what the file is and what role it plays. Environment: Spark 2.2.0, standalone mode; checkpointLocation is not set in the code. This is my first Spark job, so any pointers would be much appreciated.
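Since the post notes that no checkpointLocation was configured, which is why the files end up under a temporary /tmp/temporary-<uuid> directory, here is a minimal sketch of the same kind of query with an explicit, durable checkpoint location. This is not the original code: the Kafka source, broker address, topic name, HDFS path, and console sink are all illustrative assumptions, and it presumes Spark 2.2.x with the spark-sql-kafka-0-10 package on the classpath.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.streaming.Trigger

    object CheckpointedQuery {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("checkpointed-structured-streaming")
          .getOrCreate()

        // Illustrative source; any streaming source is configured the same way.
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092") // placeholder
          .option("subscribe", "events")                      // placeholder
          .load()

        // A stateful aggregation: this is what makes Spark write
        // state/<operatorId>/<partitionId>/<version>.delta files each micro-batch.
        val counts = events.groupBy("key").count()

        val query = counts.writeStream
          .outputMode("complete")
          .format("console") // console sink used only to keep the sketch short
          // Without this option, Spark falls back to a temporary checkpoint
          // directory such as /tmp/temporary-<uuid>, as seen in the error above.
          .option("checkpointLocation", "hdfs:///checkpoints/my-query") // placeholder path
          .trigger(Trigger.ProcessingTime("30 seconds"))
          .start()

        query.awaitTermination()
      }
    }

The 1.delta file in the stack trace is one of these per-batch state files written by HDFSBackedStateStoreProvider for a stateful operator. With an explicit checkpointLocation, the offsets and state directories live under the chosen path instead of a node-local /tmp location that may be cleaned up by the OS or not shared between machines.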
