Spark Structured Streaming: computing the user growth rate over the previous 30 minutes, every 30 seconds

I want to compute, every 30 seconds, the user growth rate over the previous 30 minutes. Is this possible with Spark Structured Streaming, and if so, how should it be implemented?
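Yes — this maps naturally onto a sliding-window aggregation: a 30-minute window that slides every 30 seconds, driven by a 30-second processing-time trigger. Below is a minimal sketch, not a tested solution: the Kafka source, topic name, the `userId`/`eventTime` columns and the watermark are assumptions, and the growth rate between consecutive windows is left as a downstream comparison of the per-window counts (for example inside `foreachBatch`).

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.streaming.Trigger

object UserGrowthRate {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("UserGrowthRate").getOrCreate()
    import spark.implicits._

    // Assumed input: a stream of user events with a user id and an event time.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092") // placeholder address
      .option("subscribe", "user_events")                  // placeholder topic
      .load()
      .selectExpr("CAST(value AS STRING) AS userId", "timestamp AS eventTime")

    // Sliding window: 30-minute window length, sliding every 30 seconds.
    // Each output row is the approximate number of distinct users in one window.
    val windowedCounts = events
      .withWatermark("eventTime", "31 minutes")
      .groupBy(window($"eventTime", "30 minutes", "30 seconds"))
      .agg(approx_count_distinct("userId").as("users"))

    // Fire a micro-batch every 30 seconds; the growth rate is then the relative
    // change between the counts of consecutive windows, computed downstream.
    val query = windowedCounts.writeStream
      .outputMode("update")
      .format("console")
      .trigger(Trigger.ProcessingTime("30 seconds"))
      .start()

    query.awaitTermination()
  }
}
```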

Other related questions
Structured Streaming reports that a temporary file does not exist after running for a while — what is this temporary file and what is it for?
```
Job aborted due to stage failure: Task 1 in stage 9.0 failed 4 times, most recent failure: Lost task 1.3 in stage 9.0 (TID 1018, 34.55.0.164, executor 0): java.lang.IllegalStateException: Error reading delta file /tmp/temporary-01933c45-4657-47d1-a0ab-651476698d08/state/0/1/1.delta of HDFSStateStoreProvider[id = (op=0, part=1), dir = /tmp/temporary-01933c45-4657-47d1-a0ab-651476698d08/state/0/1]: /tmp/temporary-01933c45-4657-47d1-a0ab-651476698d08/state/0/1/1.delta does not exist
  at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$updateFromDeltaFile(HDFSBackedStateStoreProvider.scala:410)
  at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$$anonfun$org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$loadMap$1$$anonfun$6.apply(HDFSBackedStateStoreProvider.scala:362)
  at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$$anonfun$org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$loadMap$1$$anonfun$6.apply(HDFSBackedStateStoreProvider.scala:359)
  at scala.Option.getOrElse(Option.scala:120)
  at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$$anonfun$org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$loadMap$1.apply(HDFSBackedStateStoreProvider.scala:359)
  at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$$anonfun$org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$loadMap$1.apply(HDFSBackedStateStoreProvider.scala:358)
  at scala.Option.getOrElse(Option.scala:120)
  at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$loadMap(HDFSBackedStateStoreProvider.scala:358)
  at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.getStore(HDFSBackedStateStoreProvider.scala:265)
  at org.apache.spark.sql.execution.streaming.state.StateStore$.get(StateStore.scala:200)
  at org.apache.spark.sql.execution.streaming.state.StateStoreRDD.compute(StateStoreRDD.scala:61)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
  at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
  at org.apache.spark.scheduler.Task.run(Task.scala:108)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.FileNotFoundException: File /tmp/temporary-01933c45-4657-47d1-a0ab-651476698d08/state/0/1/1.delta does not exist
  at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:611)
  at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824)
  at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601)
  at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
  at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:142)
  at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:346)
  at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:769)
  at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$updateFromDeltaFile(HDFSBackedStateStoreProvider.scala:407)
  ... 21 more
```
After the Structured Streaming job has been running for a while it reports that this temporary file does not exist. What is this file and what is it used for? Spark 2.2.0, standalone mode; checkpointLocation is not set in the code. This is my first Spark job — any pointers would be appreciated.
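For context: the `.delta` files under `/tmp/temporary-*/state/...` are written by the state store that backs stateful (windowed/aggregating) streaming queries. When no `checkpointLocation` is configured, Spark 2.2 places offsets and this state in an auto-created temporary directory, which can disappear (for example through /tmp cleanup) while the query is still running. A minimal sketch of setting an explicit, durable checkpoint location — the `rate` source and the HDFS path below are placeholders:

```scala
import org.apache.spark.sql.SparkSession

object CheckpointedStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("CheckpointedStream").getOrCreate()

    // Toy source just to have a stream to run; the relevant part is the option below.
    val stream = spark.readStream.format("rate").option("rowsPerSecond", "1").load()

    stream.writeStream
      .format("console")
      // Without this option Spark keeps offsets and state (the *.delta files)
      // in a temporary directory under /tmp; with it, the files survive and the
      // query can be restarted from where it left off.
      .option("checkpointLocation", "hdfs:///checkpoints/my-query") // placeholder path
      .start()
      .awaitTermination()
  }
}
```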
Structured forests for fast edge detection
I have recently been studying P. Dollár's "Structured forests for fast edge detection". With the MATLAB toolbox he provides I get very good edge-extraction results, but I am stuck on the OpenCV side. The training/creation function Ptr<cv::StructuredEdgeDetection> createStructuredEdgeDetection(String model) takes a parameter model – model file name, and I do not know how to obtain that model file; the MATLAB model cannot be used for it directly. The one model I found online gives poor results. How can the BSD500 data used in MATLAB be applied to training a model for OpenCV?
Computing a winning probability from a matrix of win rates — how can this be implemented in C?
Problem Description A role-playing game (RPG and sometimes roleplaying game) is a game in which players assume the roles of characters in a fictional setting. Players take responsibility for acting out these roles within a narrative, either through literal acting or through a process of structured decision-making or character development. Recently, Josephina is busy playing a RPG named TX3. In this game, M characters are available to by selected by players. In the whole game, Josephina is most interested in the "Challenge Game" part. The Challenge Game is a team play game. A challenger team is made up of three players, and the three characters used by players in the team are required to be different. At the beginning of the Challenge Game, the players can choose any characters combination as the start team. Then, they will fight with N AI teams one after another. There is a special rule in the Challenge Game: once the challenger team beat an AI team, they have a chance to change the current characters combination with the AI team. Anyway, the challenger team can insist on using the current team and ignore the exchange opportunity. Note that the players can only change the characters combination to the latest defeated AI team. The challenger team gets victory only if they beat all the AI teams. Josephina is good at statistics, and she writes a table to record the winning rate between all different character combinations. She wants to know the maximum winning probability if she always chooses best strategy in the game. Can you help her? Input There are multiple test cases. The first line of each test case is an integer M (3 ≤ M ≤ 10), which indicates the number of characters. The following is a matrix T whose size is R × R. R equals to C(M, 3). T(i, j) indicates the winning rate of team i when it is faced with team j. We guarantee that T(i, j) + T(j, i) = 1.0. All winning rates will retain two decimal places. An integer N (1 ≤ N ≤ 10000) is given next, which indicates the number of AI teams. The following line contains N integers which are the IDs (0-based) of the AI teams. The IDs can be duplicated. Output For each test case, please output the maximum winning probability if Josephina uses the best strategy in the game. For each answer, an absolute error not more than 1e-6 is acceptable. Sample Input 4 0.50 0.50 0.20 0.30 0.50 0.50 0.90 0.40 0.80 0.10 0.50 0.60 0.70 0.60 0.40 0.50 3 0 1 2 Sample Output 0.378000
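The core of this problem is a small dynamic program over which team combination the challenger currently holds: before each AI opponent you either keep the current team, or — having just won — take over the defeated AI team. The question asks for C; the recurrence is shown here as a compact sketch (input parsing omitted, `T` is the R×R win-rate matrix and `ai` the list of AI team IDs), and it translates line by line into C arrays and loops.

```scala
// dp(i) = best probability of having beaten all opponents so far
//         while currently holding team combination i.
def maxWinProbability(T: Array[Array[Double]], ai: Seq[Int]): Double = {
  val r = T.length
  var dp = Array.fill(r)(1.0)                          // any starting team is allowed
  for (a <- ai) {
    val keep = Array.tabulate(r)(i => dp(i) * T(i)(a)) // beat a and keep team i
    val switchToA = keep.max                           // beat a, then take over team a
    keep(a) = math.max(keep(a), switchToA)
    dp = keep
  }
  dp.max
}
```

On the sample input this evaluates to 0.378, matching the expected output.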
Hibernate memcached second-level cache entries expire after only 300 seconds (5 minutes) by default — how do I configure the cache time?
Spring configuration:

```
<bean id="sessionFactory" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean" lazy-init="false">
    <property name="dataSource" ref="dataSource" />
    <property name="packagesToScan">
        <list>
            <value>com.test.springmvcsh.*</value>
        </list>
    </property>
    <property name="hibernateProperties">
        <props>
            <prop key="hibernate.dialect">org.hibernate.dialect.MySQLDialect</prop>
            <prop key="hibernate.show_sql">true</prop>
            <!-- *****************memcache start *********************************** -->
            <prop key="hibernate.cache.use_second_level_cache">true</prop>
            <prop key="hibernate.cache.region.factory_class">com.googlecode.hibernate.memcached.MemcachedRegionFactory</prop>
            <prop key="hibernate.memcached.servers">127.0.0.1:11211</prop>
            <prop key="hibernate.cache.region_prefix">memcached2</prop>
            <prop key="hibernate.memcached.operationTimeout">36000</prop>
            <prop key="hibernate.memcached.cacheTimeSeconds">36000</prop>
            <prop key="hibernate.cache.use_structured_entries">true</prop>
            <prop key="hibernate.cache.use_query_cache">true</prop>
        </props>
    </property>
</bean>
```

In testing, memcached only keeps data for 5 minutes; after that, queries go back to the database. How do I set the memcached cache time? The setting `<prop key="hibernate.memcached.cacheTimeSeconds">36000</prop>` does not take effect.
How do I run the sample programs in the modules directory of opencv_contrib-master?
I built OpenCV with CMake and generated the Visual Studio projects, but I cannot run the samples under opencv_contrib-master (1)\opencv_contrib-master\modules\structured_light\samples. How do I get the examples in that directory to run?
Parsing hexadecimal protocol data — how can this be implemented in C?
Problem Description Protocol Buffers are a way of encoding structured data in an efficient yet extensible format. Google uses Protocol Buffers for almost all of its internal RPC protocols and file formats. In this problem, you will use a simplified version of protocol buffers. Message definition: In protocol buffers, we defining a message in a very simple way: message Gao { required int32 a = 1; required int32 b = 2; required string str = 3; repeated int32 arr = 4; optional int32 optionalField = 5; } The Gao message definition specifies five fields (name/value pairs), one for each piece of data that you want to include in this type of message. In this problem, there are at most 15 fields in one message definition. Field definition: Foremost, we need specifying field rules in one of the following: 1. required: message must have exactly one of this field. 2. optional: message can have zero or one of this field (but not more than one). 3. repeated: this field can be repeated any number of times (including zero) in message. The order of the repeated values will be preserved. Follow by the field rules, we definiting field by type and name. we support int32 and string type in this problem. Each field has assigned a unique numbered tag at the end. These tags are used to identify your fields in the message binary format. The smallest tag number is 1, and the largest is 15. Field name is consist of letters([a-zA-Z]) and do not exceed 15 characters. Base 128 Varints: To understand your simple protocol buffer encoding, you first need to understand varints. Varints are a method of serializing integers using one or more bytes. Smaller numbers take a smaller number of bytes. Each byte in a varint, except the last byte, has the most significant bit (msb) set –– this indicates that there are further bytes to come. The lower 7 bits of each byte are used to store the two's complement representation of the number in groups of 7 bits, least significant group first. So, for example, here is the number 1 –– it's a single byte, so the msb is not set: 0000 0001 And here is 300 – this is a bit more complicated: 1010 1100 0000 0010 How do you figure out that this is 300? First you drop the msb from each byte, as this is just there to tell us whether we've reached the end of the number (as you can see, it's set in the first byte as there is more than one byte in the varint): 1010 1100 0000 0010 → 010 1100 000 0010 You reverse the two groups of 7 bits because, as you remember, varints store numbers with the least significant group first. Then you concatenate them to get your final value: 000 0010 010 1100 → 000 0010 ++ 010 1100 → 100101100 → 256 + 32 + 8 + 4 = 300 (++ means byte concatenation) It's more complex to consider signed number, due to simplify this problem, we only need considered unsigned integers in range [0, 10^9]. Message structure: As you know, a protocol buffer message is a series of key-value pairs. The binary version of a message just uses the field's tag number as the key –– the name and declared type for each field can only be determined on the decoding end by referencing the message type's definition. The "key" for each pair in a wire-format message is actually two values – the field tag number from your message definition, plus a wire type that provides just enough information to find the length of the following value. 
The available wire types are as follows: Type Meaning Used for 0 Varint int32 2 Length-delimited string,repeated fields Each key in the streamed message is a varint with the value (field_tag_number << 3) | wire_type –– in other words, the last three bits of the number store the wire type. For example, key of field a in message Gao is encoded as 0000 1000(08), and for field str is 0001 1010(1A). Field a seted as 150 will be encoded as 08 96 01 (Key is 08 and 150 encoded as 96 01). Concatenate all encoded fields we can get the encoded message. Optional Elements: If any of fields are optional, the encoded message may not have a key-value pair with that field, in this situation, just set the value of this field as null. String and Repeated Elements: In this situation, we need specifiy payload size (varint type) first after key, and followed by the elements. (We can regard string type as a array) For example: message Test3 { required string b = 2; repeated int32 c = 3; } Assume one message have the value "testing" for the field b and the values 3, 270, and 86942 for the repeated field c, then, the encoded form would be: 12 // tag (field number 2, wire type 2) 07 // payload size (7 bytes) 74 65 73 74 69 6E 67 // ascii code of "testing" 1A // tag (field number 3, wire type 2) 06 // payload size (6 bytes) 03 // first element (varint 3) 8E 02 // second element (varint 270) 9E A7 05 // third element (varint 86942) Notice, repeated string type in this problem is illegal. If any of repetead fields not appeared in this message, this field can regard as empty array([]). The length of arrays and strings will not exceed 50 and strings only consist of alphabet('a' - 'z' and 'A' - 'Z'). Problem: Now give you a message definition, and serval encoded messages, you task is decode this messages. Input The first line of input contains a number T, indicating the number of test cases. (T≤10). For each case, there will be two integers n (n≤17) and m (m≤100). The next n lines consist of message definition, you can assume that the length of message definition will not exceed 1500. A valid definition will only contains alphabet('a' - 'z' and 'A' - 'Z'), number('0' - '9'), whitespace(' ') and control tokens('=;{}'). And then followed by m querys, each line contain one encoded byte stream in hexadecimal notation, there is one whitespace after each byte. And for there are at most 1500 bytes each message. Notice there will be some empty byte stream in test data, see sample input for more details. All the messages in this problem will not contains unknown tags and have a illegal structure. Output For each case, output Case #i: first. (i is the number of the test case, from 1 to T). Then for each query, output the values in each fields, with the same order in message definition. If any of required fields not appeared in this message, please print Error! instead. Output a blank line after each query. Sample Input 1 2 5 message Test1{required int32 a=1;required string b=2 ;repeated int32 c=3;optional int32 optionalField=5;} 08 96 01 12 07 74 65 73 74 69 6e 67 1A 06 03 8E 02 9E A7 05 28 00 08 96 01 12 07 74 65 73 74 69 6e 67 1A 06 03 8E 02 9E A7 05 08 96 01 12 01 74 12 07 74 65 73 74 69 6e 67 1A 06 03 8E 02 9E A7 05 Sample Output Case #1: a = 150 b = "testing" c = [3, 270, 86942] optionalField = 0 a = 150 b = "testing" c = [3, 270, 86942] optionalField = null Error! a = 150 b = "t" c = [] optionalField = null Error!
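The heart of the decoder is the Base-128 varint and the `(field_tag_number << 3) | wire_type` key described above. A small sketch of just that part (the full message/field parsing is omitted; the byte values in the comments are the ones from the problem statement):

```scala
// Decode one Base-128 varint starting at `pos`; returns (value, next position).
// Each byte contributes its low 7 bits, least-significant group first; the msb
// signals that more bytes follow.
def readVarint(bytes: Array[Int], pos: Int): (Long, Int) = {
  var value = 0L
  var shift = 0
  var i = pos
  var more = true
  while (more) {
    val b = bytes(i)
    value |= (b & 0x7fL) << shift
    more = (b & 0x80) != 0
    shift += 7
    i += 1
  }
  (value, i)
}

// Split a decoded key into (field tag number, wire type).
def readKey(key: Long): (Int, Int) = ((key >> 3).toInt, (key & 0x7).toInt)

// Examples from the statement:
// readVarint(Array(0x96, 0x01), 0)  == (150, 2)
// readKey(0x08)                     == (1, 0)   // field a, wire type Varint
```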
C: A Well-Formed Problem
Description XML, eXtensible Markup Language, is poised to become the lingua franca of structured data communication for the foreseeable future, due in part to its strict formatting requirements. XML parsers must report anything that violates the rules of a well-formed XML document. An XML document is said to be well-formed if it meets all of the wellformedness constraints as defined by the World Wide Web Consortium (W3C) XML specification. XML documents are composed of units called elements, that contain either character data and/or other elements. Elements may also contain within their declaration values called attributes. Consider the following XML document: <?xml version="1.0"?> <customer> <name> <first>John</first> <last>Doe</last> </name> <address> <street> <number>15</number> <direction>West</direction> <name>34th</name> </street> <city>New York</city> <state-code>NY</state-code> <zip-code format="PLUS4">10001-0001</zip-code> <country-code>USA</country-code> </address> <orders/> </customer> The bold identifiers contained within angle brackets are the elements of the document. The italicized identifier "format" within the "zip-code" element is an attribute of that element. All elements, with the exception of "orders", have a start and an end declaration, also called a tags. The "orders" element is an empty element, as indicated by the "/>" sequence that closes the element, and does not require a separate end-tag. The first line is a processing instruction for an XML parser and is not considered an element of the document. The rules for a well-formed document are: 1. There is exactly one element that is not contained within any other element. This element is identified as the "root" or "document" element. In the example above, "customer" is the document element. 2. The structure of an XML document must nest properly. An element's start-tag must be paired with a closing end-tag if it is a non-empty element. 3. The name in an element’s end-tag must match the element type in the start-tag. For example, an element opened with <address> must be closed by </address>. 4. No attribute may appear more than once in the same start-tag or empty-element tag. 5. A parsed element must not contain a recursive reference to itself. For example, it is improper to include another address element within an address element. 6. A named attribute must have an associated value. Input The input will contain a series of XML documents. The start of each document is identified by a line containing only the processing instruction "<?xml version="1.0"?>". The end of the input is identified by a line containing only the text "<?end?>" (this is not a true XML processing instruction, just a sentinel used to mark the end of the input for this problem). As with all XML documents, white space between elements and attributes should be ignored. You may make the following assuptions with regard to the input. The only processing instruction that will be present is the XML version rocessing instruction, and it will always appear only at the beginning of each document in the input. Element and attribute names are case-sensitive. For example, <Address> and <address> are considered to be different. Element and attribute names will use only alpha-numeric characters and the dash "-" character. XML comments will not appear in the input. Values for attributes will always be properly enclosed in double quotes. 
Output For each input XML document, output a line containing the text "well-formed" if the document is well-formed, "non well-formed" otherwise. Sample Input <?xml version="1.0"?> <acm-contest-problem> <title>A Well-Formed Problem</title> <text>XML, eXtensible Markup Language, is poised to become the lingua franca of structured data communication for the foreseeable future. [...]</text> <input>probleme.in</input> <output>probleme.out</output> </acm-contest-problem> <?xml version="1.0"?> <shopping-list> <items> <item quantity="1" quantity="1">Gallon of milk</item> <item>Frozen pizza </items> </Shopping-list> <errand-list> <errand>Get some cash at the ATM <errand>Pick up dry cleaning</errand> </errand> </errand-list> <?end?> Sample Output well-formed non well-formed
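Rules 2 and 3 (proper nesting, matching end-tag names) are the core of the check and reduce to a stack of open element names; rule 5 (no recursive reference) can be tested against the same stack. A minimal sketch of that piece only, assuming the tags have already been tokenized into a sequence of (name, kind) pairs — attribute checking (rules 4 and 6) and the single-root check are left out:

```scala
sealed trait Tag
case class Start(name: String) extends Tag  // <name ...>
case class End(name: String)   extends Tag  // </name>
case class Empty(name: String) extends Tag  // <name .../>

// Returns true if the tag sequence nests properly (rules 2, 3 and 5).
def nestsProperly(tags: Seq[Tag]): Boolean = {
  val open = scala.collection.mutable.Stack[String]()
  val ok = tags.forall {
    case Start(n) =>
      if (open.contains(n)) false          // rule 5: element nested inside itself
      else { open.push(n); true }
    case End(n) =>
      open.nonEmpty && open.pop() == n     // rules 2 and 3: end-tag matches the open tag
    case Empty(n) =>
      !open.contains(n)                    // an empty element must not recurse either
  }
  ok && open.isEmpty                       // every start-tag was closed
}
```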
Problem configuring a second-level cache with Spring 3 + Hibernate 4
<bean id="sessionFactory" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean" destroy-method="destroy"> <property name="dataSource"> <ref bean="dataSource" /> </property> <property name="mappingResources"> <list> <value>com/wjh/po/AfterPostPO.hbm.xml</value> <value>com/wjh/po/LogPO.hbm.xml</value> <value>com/wjh/po/PostPO.hbm.xml</value> <value>com/wjh/po/UserPO.hbm.xml</value> </list> </property> <property name="hibernateProperties"> <props> <prop key="hibernate.dialect">org.hibernate.dialect.MySQLDialect</prop> <prop key="hibernate.show_sql">true</prop> <prop key="hibernate.format_sql">false</prop> <prop key="hibernate.hbm2ddl.auto">update</prop> <prop key="hibernate.generate_statistics">true</prop> <prop key="hibernate.cache.use_structured_entries">true</prop> <!-- <prop key="hibernate.cache.use_query_cache">true</prop> <prop key="hibernate.cache.use_second_level_cache">true</prop> <prop key="hibernate.cache.region.factory_class">org.hibernate.cache.ehcache.EhCacheRegionFactory </prop> <prop key="net.sf.ehcache.configurationResourceName">ehcache.xml</prop> --> </props> </property> </bean> spring3+hibernate4 配置ehcache二级缓存只要一把上面的二级缓存配置的注释部分去掉,访问直接报错,而且两次错误不一样,重启tomcat第一次访问时候报的最终错误是: java.lang.ClassNotFoundException: org.hibernate.engine.jndi.JndiNameException 再访问一次报的最终错误是: java.lang.NoClassDefFoundError: Could not initialize class org.hibernate.cache.ehcache.EhCacheRegionFactory 是在是解决不了了,网上都搜遍了,大部分是hibernate3的配置,但是hibernate4的和hibernate3的不一样,也有hibernate4的配置,但是按照他们的都不行,大神门帮帮吧 slf4j-api-1.6.1.jar,hibernate-ehcache-4.3.4.Final.jar,ehcache-core-2.4.3.jar三个包已经导入了
make fails when installing OpenCV on Ubuntu 16.04 — what should I do?
cmake已经完成,情况如下: ``` cmake .. -- Detected version of GNU GCC: 54 (504) -- Found ZLIB: /usr/lib/x86_64-linux-gnu/libz.so (found suitable version "1.2.8", minimum required is "1.2.3") -- Found ZLIB: /usr/lib/x86_64-linux-gnu/libz.so (found version "1.2.8") -- Checking for module 'gstreamer-base-1.0' -- No package 'gstreamer-base-1.0' found -- Checking for module 'gstreamer-video-1.0' -- No package 'gstreamer-video-1.0' found -- Checking for module 'gstreamer-app-1.0' -- No package 'gstreamer-app-1.0' found -- Checking for module 'gstreamer-riff-1.0' -- No package 'gstreamer-riff-1.0' found -- Checking for module 'gstreamer-pbutils-1.0' -- No package 'gstreamer-pbutils-1.0' found -- Checking for module 'gstreamer-base-0.10' -- No package 'gstreamer-base-0.10' found -- Checking for module 'gstreamer-video-0.10' -- No package 'gstreamer-video-0.10' found -- Checking for module 'gstreamer-app-0.10' -- No package 'gstreamer-app-0.10' found -- Checking for module 'gstreamer-riff-0.10' -- No package 'gstreamer-riff-0.10' found -- Checking for module 'gstreamer-pbutils-0.10' -- No package 'gstreamer-pbutils-0.10' found -- Looking for linux/videodev.h -- Looking for linux/videodev.h - not found -- Looking for linux/videodev2.h -- Looking for linux/videodev2.h - found -- Looking for sys/videoio.h -- Looking for sys/videoio.h - not found -- Checking for module 'libavresample' -- No package 'libavresample' found -- Looking for libavformat/avformat.h -- Looking for libavformat/avformat.h - found -- Looking for ffmpeg/avformat.h -- Looking for ffmpeg/avformat.h - not found -- Checking for module 'libgphoto2' -- No package 'libgphoto2' found -- found IPP (ICV version): 9.0.1 [9.0.1] -- at: /home/quxutao/opencv-3.1.0/3rdparty/ippicv/unpack/ippicv_lnx -- CUDA detected: 7.5 -- CUDA NVCC target flags: -gencode;arch=compute_20,code=sm_20;-gencode;arch=compute_20,code=sm_21;-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_30,code=compute_30 -- Could NOT find Doxygen (missing: DOXYGEN_EXECUTABLE) -- To enable PlantUML support, set PLANTUML_JAR environment variable or pass -DPLANTUML_JAR=<filepath> option to cmake -- Could NOT find PythonInterp: Found unsuitable version "1.4", but required is at least "2.7" (found /home/quxutao/.virtualenvs/cv/bin/python) -- Could NOT find PythonInterp: Found unsuitable version "1.4", but required is at least "2.6" (found /home/quxutao/.virtualenvs/cv/bin/python) -- Could NOT find PythonInterp: Found unsuitable version "1.4", but required is at least "3.4" (found /home/quxutao/.virtualenvs/cv/bin/python) -- Could NOT find PythonInterp: Found unsuitable version "1.4", but required is at least "3.2" (found /home/quxutao/.virtualenvs/cv/bin/python) -- Could NOT find JNI (missing: JAVA_INCLUDE_PATH JAVA_INCLUDE_PATH2 JAVA_AWT_INCLUDE_PATH) -- Could NOT find Matlab (missing: MATLAB_MEX_SCRIPT MATLAB_INCLUDE_DIRS MATLAB_ROOT_DIR MATLAB_LIBRARIES MATLAB_LIBRARY_DIRS MATLAB_MEXEXT MATLAB_ARCH MATLAB_BIN) -- VTK is not found. 
Please set -DVTK_DIR in CMake to VTK build directory, or to VTK install subdirectory with VTKConfig.cmake file -- Caffe: NO -- Protobuf: YES -- Glog: NO -- HDF5: YES -- Module opencv_sfm disabled because the following dependencies are not found: Eigen Glog/Gflags -- Tesseract: NO -- HDF5: YES -- Build libprotobuf from sources: -- The protocol buffer compiler not found -- Tesseract: NO -- -- General configuration for OpenCV 3.1.0 ===================================== -- Version control: unknown -- -- Platform: -- Host: Linux 4.15.0-47-generic x86_64 -- CMake: 3.5.1 -- CMake generator: Unix Makefiles -- CMake build tool: /usr/bin/make -- Configuration: RELEASE -- -- C/C++: -- Built as dynamic libs?: YES -- C++ Compiler: /usr/bin/c++ (ver 5.4.0) -- C++ flags (Release): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wno-narrowing -Wno-delete-non-virtual-dtor -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -msse -msse2 -mno-avx -msse3 -mno-ssse3 -mno-sse4.1 -mno-sse4.2 -ffunction-sections -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG -DNDEBUG -- C++ flags (Debug): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wno-narrowing -Wno-delete-non-virtual-dtor -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -msse -msse2 -mno-avx -msse3 -mno-ssse3 -mno-sse4.1 -mno-sse4.2 -ffunction-sections -fvisibility=hidden -fvisibility-inlines-hidden -g -O0 -DDEBUG -D_DEBUG -- C Compiler: /usr/bin/cc -- C flags (Release): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wno-narrowing -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -msse -msse2 -mno-avx -msse3 -mno-ssse3 -mno-sse4.1 -mno-sse4.2 -ffunction-sections -fvisibility=hidden -O3 -DNDEBUG -DNDEBUG -- C flags (Debug): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wno-narrowing -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -msse -msse2 -mno-avx -msse3 -mno-ssse3 -mno-sse4.1 -mno-sse4.2 -ffunction-sections -fvisibility=hidden -g -O0 -DDEBUG -D_DEBUG -- Linker flags (Release): -- Linker flags (Debug): -- Precompiled headers: YES -- Extra dependencies: /usr/lib/x86_64-linux-gnu/libpng.so /usr/lib/x86_64-linux-gnu/libtiff.so /usr/lib/x86_64-linux-gnu/libjasper.so /usr/lib/x86_64-linux-gnu/libjpeg.so gtk-3 gdk-3 pangocairo-1.0 pango-1.0 atk-1.0 cairo-gobject cairo gdk_pixbuf-2.0 gio-2.0 gobject-2.0 gthread-2.0 glib-2.0 dc1394 v4l1 v4l2 avcodec-ffmpeg avformat-ffmpeg avutil-ffmpeg swscale-ffmpeg /usr/lib/x86_64-linux-gnu/libbz2.so /usr/lib/x86_64-linux-gnu/hdf5/openmpi/lib/libhdf5.so /usr/lib/x86_64-linux-gnu/libsz.so /usr/lib/x86_64-linux-gnu/libz.so /usr/lib/x86_64-linux-gnu/libdl.so /usr/lib/x86_64-linux-gnu/libm.so dl m pthread rt cudart nppc nppi npps cufft -L/usr/lib/x86_64-linux-gnu -- 3rdparty dependencies: 
libwebp IlmImf libprotobuf -- -- OpenCV modules: -- To be built: cudev core cudaarithm flann hdf imgproc ml reg surface_matching video cudabgsegm cudafilters cudaimgproc cudawarping dnn fuzzy imgcodecs photo shape videoio cudacodec highgui objdetect plot ts xobjdetect xphoto bgsegm bioinspired dpm face features2d line_descriptor saliency text calib3d ccalib cudafeatures2d cudalegacy cudaobjdetect cudaoptflow cudastereo datasets rgbd stereo structured_light superres tracking videostab xfeatures2d ximgproc aruco optflow stitching -- Disabled: world contrib_world -- Disabled by dependency: - -- Unavailable: java python2 python3 viz cvv matlab sfm -- -- GUI: -- QT: NO -- GTK+ 3.x: YES (ver 3.18.9) -- GThread : YES (ver 2.48.2) -- GtkGlExt: NO -- OpenGL support: NO -- VTK support: NO -- -- Media I/O: -- ZLib: /usr/lib/x86_64-linux-gnu/libz.so (ver 1.2.8) -- JPEG: /usr/lib/x86_64-linux-gnu/libjpeg.so (ver ) -- WEBP: build (ver 0.3.1) -- PNG: /usr/lib/x86_64-linux-gnu/libpng.so (ver 1.2.54) -- TIFF: /usr/lib/x86_64-linux-gnu/libtiff.so (ver 42 - 4.0.6) -- JPEG 2000: /usr/lib/x86_64-linux-gnu/libjasper.so (ver 1.900.1) -- OpenEXR: build (ver 1.7.1) -- GDAL: NO -- -- Video I/O: -- DC1394 1.x: NO -- DC1394 2.x: YES (ver 2.2.4) -- FFMPEG: YES -- codec: YES (ver 56.60.100) -- format: YES (ver 56.40.101) -- util: YES (ver 54.31.100) -- swscale: YES (ver 3.1.101) -- resample: NO -- gentoo-style: YES -- GStreamer: NO -- OpenNI: NO -- OpenNI PrimeSensor Modules: NO -- OpenNI2: NO -- PvAPI: NO -- GigEVisionSDK: NO -- UniCap: NO -- UniCap ucil: NO -- V4L/V4L2: Using libv4l1 (ver 1.10.0) / libv4l2 (ver 1.10.0) -- XIMEA: NO -- Xine: NO -- gPhoto2: NO -- -- Parallel framework: pthreads -- -- Other third-party libraries: -- Use IPP: 9.0.1 [9.0.1] -- at: /home/quxutao/opencv-3.1.0/3rdparty/ippicv/unpack/ippicv_lnx -- Use IPP Async: NO -- Use VA: NO -- Use Intel VA-API/OpenCL: NO -- Use Eigen: NO -- Use Cuda: YES (ver 7.5) -- Use OpenCL: YES -- Use custom HAL: NO -- -- NVIDIA CUDA -- Use CUFFT: YES -- Use CUBLAS: NO -- USE NVCUVID: NO -- NVIDIA GPU arch: 20 21 30 35 -- NVIDIA PTX archs: 30 -- Use fast math: NO -- -- OpenCL: -- Version: dynamic -- Include path: /home/quxutao/opencv-3.1.0/3rdparty/include/opencl/1.2 -- Use AMDFFT: NO -- Use AMDBLAS: NO -- -- Python 2: -- Interpreter: NO -- -- Python 3: -- Interpreter: NO -- -- Python (for build): NO -- -- Java: -- ant: NO -- JNI: NO -- Java wrappers: NO -- Java tests: NO -- -- Matlab: Matlab not found or implicitly disabled -- -- Documentation: -- Doxygen: NO -- PlantUML: NO -- -- Tests and samples: -- Tests: YES -- Performance tests: YES -- C/C++ Examples: YES -- -- Install path: /usr/local -- -- cvconfig.h is in: /home/quxutao/opencv-3.1.0/build -- ----------------------------------------------------------------- -- -- Configuring done -- Generating done -- Build files have been written to: /home/quxutao/opencv-3.1.0/build ``` 但是make的时候,就报错: ``` make [ 4%] Built target libwebp [ 4%] Built target IlmImf [ 4%] Built target opencv_cudev [ 4%] Built target opencv_core_pch_dephelp [ 4%] Built target pch_Generate_opencv_core [ 4%] Building NVCC (Device) object modules/core/CMakeFiles/cuda_compile.dir/src/cuda/cuda_compile_generated_gpu_mat.cu.o /usr/include/string.h: In function ‘void* __mempcpy_inline(void*, const void*, size_t)’: /usr/include/string.h:652:42: error: ‘memcpy’ was not declared in this scope return (char *) memcpy (__dest, __src, __n) + __n; ^ CMake Error at cuda_compile_generated_gpu_mat.cu.o.cmake:266 (message): Error generating file 
/home/quxutao/opencv-3.1.0/build/modules/core/CMakeFiles/cuda_compile.dir/src/cuda/./cuda_compile_generated_gpu_mat.cu.o
modules/core/CMakeFiles/opencv_core.dir/build.make:399: recipe for target 'modules/core/CMakeFiles/cuda_compile.dir/src/cuda/cuda_compile_generated_gpu_mat.cu.o' failed
make[2]: *** [modules/core/CMakeFiles/cuda_compile.dir/src/cuda/cuda_compile_generated_gpu_mat.cu.o] Error 1
CMakeFiles/Makefile2:2307: recipe for target 'modules/core/CMakeFiles/opencv_core.dir/all' failed
make[1]: *** [modules/core/CMakeFiles/opencv_core.dir/all] Error 2
Makefile:160: recipe for target 'all' failed
make: *** [all] Error 2
```
I have spent the whole afternoon on this without finding a solution. My CUDA version is 7.5; I do not actually need the GPU — I only want to use KAZE filtering. Any help would be much appreciated.
Problem using Redis as the Hibernate second-level cache
Background: an old project uses Hibernate with Ehcache as its second-level cache, and a new requirement calls for switching that cache to Redis. Please don't ask why we don't replace Hibernate itself — it's an old project, so the risk is too high.

Problem:

1. Following instructions found online, I added several jars, see image: ![jar list](https://img-ask.csdn.net/upload/201809/12/1536736237_134115.jpg)

2. The sessionFactory in Spring is configured for the second-level cache:

```
hibernate.cache.use_query_cache = true
hibernate.cache.use_second_level_cache = true
hibernate.cache.use_structured_entries = true
hibernate.cache.region_prefix = hibernate
hibernate.cache.region.factory_class = org.hibernate.cache.redis.hibernate5.RedisRegionFactory
hibernate.cache.provider_configuration_file_resource_path = hibernate-redis.properties
```

3. I also created hibernate-redis.properties and redisson.yaml, with the following contents:

(1) hibernate-redis.properties:

```
redisson-config = classpath:redisson.yaml
redis.expiryInSeconds.default = 360
redis.expiryInSeconds.hibernate.common = 0
redis.expiryInSeconds.hibernate.account = 1200
```

(2) redisson.yaml:

```
singleServerConfig:
  idleConnectionTimeout: 10000
  pingTimeout: 5000
  connectTimeout: 5000
  timeout: 5000
  retryAttempts: 1
  retryInterval: 1000
  reconnectionTimeout: 3000
  failedAttempts: 1
  password: 123456
  subscriptionsPerConnection: 3
  clientName: null
  address: [ "redis://127.0.0.1:6379" ]
  subscriptionConnectionMinimumIdleSize: 1
  subscriptionConnectionPoolSize: 3
  connectionMinimumIdleSize: 3
  connectionPoolSize: 3
  database: 0
  dnsMonitoring: false
  dnsMonitoringInterval: 5000
threads: 0
codec: !<org.redisson.codec.SnappyCodec> {}
useLinuxNativeEpoll: false
eventLoopGroup: null
```

4. Redis itself is running normally.

5. When running the application, I get this error:

```
Caused by: com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot deserialize instance of `java.net.URI` out of START_ARRAY token
 at [Source: (URL); line: 14, column: 4] (through reference chain: org.redisson.config.Config["singleServerConfig"]->org.redisson.config.SingleServerConfig["address"])
  at com.fasterxml.jackson.databind.exc.MismatchedInputException.from(MismatchedInputException.java:63)
  at com.fasterxml.jackson.databind.DeserializationContext.reportInputMismatch(DeserializationContext.java:1342)
  at com.fasterxml.jackson.databind.DeserializationContext.handleUnexpectedToken(DeserializationContext.java:1138)
  at com.fasterxml.jackson.databind.deser.std.StdDeserializer._deserializeFromArray(StdDeserializer.java:674)
  at com.fasterxml.jackson.databind.deser.std.FromStringDeserializer.deserialize(FromStringDeserializer.java:164)
  at com.fasterxml.jackson.databind.deser.impl.FieldProperty.deserializeAndSet(FieldProperty.java:136)
  at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:288)
  at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151)
  at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:127)
  at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:288)
  at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151)
  at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4013)
  at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2959)
  at org.redisson.config.ConfigSupport.fromYAML(ConfigSupport.java:169)
  at org.redisson.config.Config.fromYAML(Config.java:754)
  at org.hibernate.cache.redis.client.RedisClientFactory.createRedisClient(RedisClientFactory.java:63)
  ... 57 more
```

I don't know how to solve this; I hope someone with relevant experience can help.
Web service client invocation problem
Problem: the server side is fine and the WSDL address can be accessed normally. When the client calls the service, the following error appears occasionally (sometimes it does not). I have tried many fixes found online with no success. The calling code is:

```
Object service = null;
JaxWsProxyFactoryBean factory = new JaxWsProxyFactoryBean();
factory.setServiceClass(clazz);
factory.setAddress(wsdl);
service = factory.create();
(StrucSTaskService)service.sendChildList();
```

The error:

```
javax.xml.ws.WebServiceException: Could not send Message.
  at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:149)
  at $Proxy106.send(Unknown Source)
  at cn.com.xx.turbo.adapter.structured.service.StrucSTaskService.sendChildList(StrucSTaskService.java:565)
Caused by: java.net.SocketException: SocketException invoking http://10.1.7.40:8888/turbo/ws/data_exch_service?wsdl: Unexpected end of file from serverJaxWsDynamicClientFactory
  at sun.reflect.GeneratedConstructorAccessor275.newInstance(Unknown Source)
  at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
  at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.mapException(HTTPConduit.java:1364)
  at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:1348)
  at org.apache.cxf.transport.AbstractConduit.close(AbstractConduit.java:56)
  at org.apache.cxf.transport.http.HTTPConduit.close(HTTPConduit.java:651)
  at org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor.handleMessage(MessageSenderInterceptor.java:62)
  at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:307)
  at org.apache.cxf.endpoint.ClientImpl.doInvoke(ClientImpl.java:516)
  at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:425)
  at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:326)
  at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:279)
  at org.apache.cxf.frontend.ClientProxy.invokeSync(ClientProxy.java:96)
  at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:138)
  ... 8 more
Caused by: java.net.SocketException: Unexpected end of file from server
  at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:782)
  at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:641)
  at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1218)
  at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:379)
  at org.apache.cxf.transport.http.URLConnectionHTTPConduit$URLConnectionWrappedOutputStream.getResponseCode(URLConnectionHTTPConduit.java:275)
  at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponseInternal(HTTPConduit.java:1563)
  at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponse(HTTPConduit.java:1533)
  at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:1335)
  ... 18 more
```
Spring configuration file error
```
Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type [cn.testJob.pss.dao.EmployeeDao] found for dependency: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations: {@javax.annotation.Resource(shareable=true, mappedName=, description=, name=, type=class java.lang.Object, lookup=, authenticationType=CONTAINER)}
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.raiseNoSuchBeanDefinitionException(DefaultListableBeanFactory.java:986)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:856)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:768)
	at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.autowireResource(CommonAnnotationBeanPostProcessor.java:438)
	at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.getResource(CommonAnnotationBeanPostProcessor.java:416)
	at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor$ResourceElement.getResourceToInject(CommonAnnotationBeanPostProcessor.java:550)
	at org.springframework.beans.factory.annotation.InjectionMetadata$InjectedElement.inject(InjectionMetadata.java:150)
	at org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:87)
	at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.postProcessPropertyValues(CommonAnnotationBeanPostProcessor.java:303)
	... 58 more
```

This is the Spring configuration file:
```
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:oxm="http://www.springframework.org/schema/oxm"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="
          http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
          http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd
          http://www.springframework.org/schema/oxm http://www.springframework.org/schema/oxm/spring-oxm-3.0.xsd
          http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd
          http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd"
       default-autowire="byName" default-lazy-init="true">
    <!-- default-autowire="byName" default-lazy-init="true"> -->

    <context:component-scan base-package="cn.testJob.pss" />

    <!-- <bean class="cn.itproject.crm.controller.init.ApplicationInitListener"> -->
    <!--     <property name="companyService" ref="companyServiceImpl"/> -->
    <!--     <property name="configService" ref="configServiceImpl"/> -->
    <!--     <property name="notificationService" ref="notificationServiceImpl"/> -->
    <!-- </bean> -->

    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="locations">
            <list>
                <value>classpath:mysql.properties</value>
                <value>classpath:druid.properties</value>
                <value>classpath:hibernate.properties</value>
                <value>classpath:redis.properties</value>
            </list>
        </property>
    </bean>

    <bean id="sessionFactory" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
        <property name="dataSource" ref="druidDataSource" />
        <property name="packagesToScan">
            <list>
                <value>cn.testJob.pss.bean</value>
            </list>
        </property>
        <property name="hibernateProperties">
            <props>
                <prop key="hibernate.dialect">${hibernate.dialect}</prop>
                <prop key="hibernate.show_sql">${hibernate.show_sql}</prop>
                <prop key="hibernate.format_sql">${hibernate.format_sql}</prop>
                <prop key="hibernate.use_sql_comments">${hibernate.use_sql_comments}</prop>
                <prop key="hibernate.hbm2ddl.auto">${hibernate.hbm2ddl.auto}</prop>
                <prop key="hibernate.cache.use_second_level_cache">${hibernate.cache.use_second_level_cache}</prop>
                <prop key="hibernate.cache.use_query_cache">${hibernate.cache.use_query_cache}</prop>
                <prop key="hibernate.cache.region.factory_class">${hibernate.cache.region.factory_class}</prop>
                <prop key="hibernate.cache.provider_configuration_file_resource_path">${hibernate.cache.provider_configuration_file_resource_path}</prop>
                <prop key="hibernate.cache.use_structured_entries">${hibernate.cache.use_structured_entries}</prop>
            </props>
        </property>
    </bean>

    <bean id="txManager" class="org.springframework.orm.hibernate4.HibernateTransactionManager">
        <property name="sessionFactory" ref="sessionFactory" />
    </bean>

    <tx:advice id="txAdvice" transaction-manager="txManager">
        <tx:attributes>
            <tx:method name="get*" read-only="true" />
            <tx:method name="*" />
        </tx:attributes>
    </tx:advice>

    <!-- <aop:aspectj-autoproxy proxy-target-class="true"/> -->
    <aop:config>
        <aop:pointcut id="bizMethods" expression="execution(* cn.testJob.pss.service.*.*(..))" />
        <aop:advisor advice-ref="txAdvice" pointcut-ref="bizMethods" />
    </aop:config>

    <import resource="applicationContext-*.xml"/>
```

And this is the data source configuration file:
```
<bean id="druidDataSource" class="com.alibaba.druid.pool.DruidDataSource"
      init-method="init" destroy-method="close">
    <!-- Basic database connection settings -->
    <property name="driverClassName" value="${jdbc.driverClass}" />
    <property name="url" value="${jdbc.url}" />
    <property name="username" value="${jdbc.user}" />
    <property name="password" value="${jdbc.password}" />
    <!-- Initial number of connections -->
    <property name="initialSize" value="${druid.initialSize}" />
    <!-- Maximum number of concurrent connections -->
    <property name="maxActive" value="${druid.maxActive}" />
    <!-- Minimum number of idle connections -->
    <property name="minIdle" value="${druid.minIdle}" />
    <!-- Maximum wait time when acquiring a connection -->
    <property name="maxWait" value="${druid.maxWait}" />
    <!-- Whether to reclaim connections that exceed the time limit -->
    <property name="removeAbandoned" value="${druid.removeAbandoned}" />
    <!-- How long before an abandoned connection is removed -->
    <property name="removeAbandonedTimeout" value="${druid.removeAbandonedTimeout}" />
    <!-- Interval (ms) between checks for idle connections that should be closed -->
    <property name="timeBetweenEvictionRunsMillis" value="${druid.timeBetweenEvictionRunsMillis}" />
    <!-- Minimum time (ms) a connection must stay in the pool -->
    <property name="minEvictableIdleTimeMillis" value="${druid.minEvictableIdleTimeMillis}" />
    <!-- SQL used to validate connections; must be a query statement -->
    <property name="validationQuery" value="${druid.validationQuery}" />
    <!-- Validate connections while idle -->
    <property name="testWhileIdle" value="${druid.testWhileIdle}" />
    <!-- Run validationQuery when borrowing a connection; true lowers performance -->
    <property name="testOnBorrow" value="${druid.testOnBorrow}" />
    <!-- Run validationQuery when returning a connection; true lowers performance -->
    <property name="testOnReturn" value="${druid.testOnReturn}" />
    <!-- Enable PSCache and set its size per connection -->
    <property name="poolPreparedStatements" value="${druid.poolPreparedStatements}" />
    <property name="maxPoolPreparedStatementPerConnectionSize"
              value="${druid.maxPoolPreparedStatementPerConnectionSize}" />
    <!-- Comma-separated filter aliases; common plugins:
         monitoring/statistics: stat, logging: log4j, SQL-injection defense: wall -->
    <property name="filters" value="${druid.filters}" />
</bean>

<!-- Log executable SQL -->
<!-- See: https://github.com/alibaba/druid/wiki/%E9%85%8D%E7%BD%AE_LogFilter -->
<!-- <bean id="log-filter" class="com.alibaba.druid.filter.logging.Log4jFilter">
         <property name="statementExecutableSqlLogEnable" value="true" />
     </bean> -->

<!-- Spring monitoring -->
<!-- See: https://github.com/alibaba/druid/wiki/%E9%85%8D%E7%BD%AE_Druid%E5%92%8CSpring%E5%85%B3%E8%81%94%E7%9B%91%E6%8E%A7%E9%85%8D%E7%BD%AE -->
<bean id="druid-stat-interceptor" class="com.alibaba.druid.support.spring.stat.DruidStatInterceptor">
</bean>

<bean id="druid-stat-pointcut" class="org.springframework.aop.support.JdkRegexpMethodPointcut" scope="prototype">
    <property name="patterns">
        <list>
            <value>cn.testJob.pss.service.*</value>
            <value>cn.testJob.pss.dao.*</value>
        </list>
    </property>
</bean>

<aop:config>
    <aop:advisor advice-ref="druid-stat-interceptor" pointcut-ref="druid-stat-pointcut" />
</aop:config>
```

Sorry for the trouble, everyone — I simply cannot find where the problem is.
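For what it's worth, the exception only says that no bean of type `cn.testJob.pss.dao.EmployeeDao` exists when `@Resource` injection runs. A common cause (an assumption here, since the DAO classes are not shown) is that the implementation class either lacks a stereotype annotation or sits outside the `cn.testJob.pss` base package covered by `<context:component-scan>`. A minimal sketch of what the scan needs to find — the class and package names below are hypothetical, only the interface comes from the question:

```java
package cn.testJob.pss.dao.impl;   // must live under the scanned base package cn.testJob.pss

import org.springframework.stereotype.Repository;

import cn.testJob.pss.dao.EmployeeDao;

// Registers a bean of type EmployeeDao (named "employeeDao"),
// which is what the failing @Resource dependency is asking for.
@Repository("employeeDao")
public class EmployeeDaoImpl implements EmployeeDao {
    // ... implement the EmployeeDao methods here ...
}
```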
An SSH2 project cannot resolve a JNDI data source configured in WebLogic — please help (WebLogic version 12c)
The messages WebLogic prints:
```
信息: Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory
2015-5-2 17:09:00 org.hibernate.hql.ast.ASTQueryTranslatorFactory <init>
信息: Using ASTQueryTranslatorFactory
2015-5-2 17:09:00 org.hibernate.cfg.SettingsFactory buildSettings
信息: Query language substitutions: {}
2015-5-2 17:09:00 org.hibernate.cfg.SettingsFactory buildSettings
信息: JPA-QL strict compliance: disabled
2015-5-2 17:09:00 org.hibernate.cfg.SettingsFactory buildSettings
信息: Second-level cache: enabled
2015-5-2 17:09:00 org.hibernate.cfg.SettingsFactory buildSettings
信息: Query cache: disabled
2015-5-2 17:09:00 org.hibernate.cfg.SettingsFactory createRegionFactory
信息: Cache region factory : org.hibernate.cache.impl.NoCachingRegionFactory
2015-5-2 17:09:00 org.hibernate.cfg.SettingsFactory buildSettings
信息: Optimize cache for minimal puts: disabled
2015-5-2 17:09:00 org.hibernate.cfg.SettingsFactory buildSettings
信息: Structured second-level cache entries: disabled
2015-5-2 17:09:00 org.hibernate.cfg.SettingsFactory buildSettings
信息: Statistics: disabled
2015-5-2 17:09:00 org.hibernate.cfg.SettingsFactory buildSettings
信息: Deleted entity synthetic identifier rollback: disabled
2015-5-2 17:09:00 org.hibernate.cfg.SettingsFactory buildSettings
信息: Default entity-mode: pojo
2015-5-2 17:09:00 org.hibernate.cfg.SettingsFactory buildSettings
信息: Named query checking : enabled
2015-5-2 17:09:00 org.hibernate.impl.SessionFactoryImpl <init>
信息: building session factory
2015-5-2 17:09:01 org.hibernate.impl.SessionFactoryObjectFactory addInstance
信息: Not binding factory to JNDI, no JNDI name configured
```

The applicationContext.xml configuration:
```
<jee:jndi-lookup id="dataSource" jndi-name="jndi/cqjsrwsyy"/>

<bean id="sessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
    <property name="dataSource" ref="dataSource"/>
    <property name="hibernateProperties">
        <props>
            <prop key="hibernate.dialect">${hibernate.dialect}</prop>
            <prop key="hibernate.show_sql">${hibernate.show_sql}</prop>
            <prop key="hibernate.format_sql">${hibernate.format_sql}</prop>
        </props>
    </property>
    <property name="packagesToScan">
        <list>
            <value>jsrwsyy.domain</value>
        </list>
    </property>
</bean>

<bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
    <prop
```
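The log shown above only contains Hibernate's routine "Not binding factory to JNDI, no JNDI name configured" notice, not the actual lookup failure, so the root cause is not visible here. One way to narrow it down (a diagnostic sketch only; the class name is made up and the lookup strings are taken from the question) is to check, inside the deployed application, which JNDI name WebLogic actually resolves before wiring it into `<jee:jndi-lookup>`:

```java
import javax.naming.InitialContext;
import javax.naming.NameNotFoundException;
import javax.sql.DataSource;

public final class JndiProbe {

    /** Returns the DataSource if either lookup string resolves on this server. */
    public static DataSource probe() throws Exception {
        InitialContext ctx = new InitialContext();
        try {
            // The global JNDI name, exactly as configured for the data source in the WebLogic console:
            return (DataSource) ctx.lookup("jndi/cqjsrwsyy");
        } catch (NameNotFoundException e) {
            // The component-environment form; this one additionally needs a
            // <resource-ref> entry for jndi/cqjsrwsyy in web.xml.
            return (DataSource) ctx.lookup("java:comp/env/jndi/cqjsrwsyy");
        }
    }
}
```

Whichever string resolves is the one `jndi-name` should use (with `resource-ref="true"` on the lookup if only the `java:comp/env/` form works).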
Josephina and RPG
Problem Description
A role-playing game (RPG, and sometimes roleplaying game) is a game in which players assume the roles of characters in a fictional setting. Players take responsibility for acting out these roles within a narrative, either through literal acting or through a process of structured decision-making or character development.
Recently, Josephina is busy playing an RPG named TX3. In this game, M characters are available to be selected by players. In the whole game, Josephina is most interested in the "Challenge Game" part.
The Challenge Game is a team play game. A challenger team is made up of three players, and the three characters used by the players in the team are required to be different. At the beginning of the Challenge Game, the players can choose any character combination as the starting team. Then, they will fight N AI teams one after another. There is a special rule in the Challenge Game: once the challenger team beats an AI team, they have a chance to exchange their current character combination for that AI team's. Alternatively, the challenger team can insist on using the current team and ignore the exchange opportunity. Note that the players can only change the character combination to the latest defeated AI team. The challenger team achieves victory only if they beat all the AI teams.
Josephina is good at statistics, and she writes a table that records the winning rate between all different character combinations. She wants to know the maximum winning probability if she always chooses the best strategy in the game. Can you help her?

Input
There are multiple test cases. The first line of each test case is an integer M (3 ≤ M ≤ 10), which indicates the number of characters. The following is a matrix T whose size is R × R, where R equals C(M, 3). T(i, j) indicates the winning rate of team i when it faces team j. We guarantee that T(i, j) + T(j, i) = 1.0. All winning rates are given with two decimal places. An integer N (1 ≤ N ≤ 10000) is given next, which indicates the number of AI teams. The following line contains N integers, which are the IDs (0-based) of the AI teams. The IDs can be duplicated.

Output
For each test case, please output the maximum winning probability if Josephina uses the best strategy in the game. For each answer, an absolute error of no more than 1e-6 is acceptable.

Sample Input
4
0.50 0.50 0.20 0.30
0.50 0.50 0.90 0.40
0.80 0.10 0.50 0.60
0.70 0.60 0.40 0.50
3
0 1 2

Sample Output
0.378000
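Since the exchange rule only ever lets the challengers switch to the team they just defeated, the game can be modeled with a backwards dynamic program over the AI sequence: let dp[i][t] be the best probability of winning games i..N-1 when entering game i with team t, so dp[i][t] = T[t][a[i]] · max(dp[i+1][t], dp[i+1][a[i]]), and the answer is the maximum of dp[0][t] over all C(M, 3) starting teams. A sketch of that idea (my own reading of the statement, not a reference solution):

```java
import java.util.Arrays;
import java.util.Scanner;

public class JosephinaRPG {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        while (in.hasNextInt()) {
            int m = in.nextInt();
            int r = m * (m - 1) * (m - 2) / 6;      // R = C(M, 3) possible teams
            double[][] t = new double[r][r];
            for (int i = 0; i < r; i++)
                for (int j = 0; j < r; j++)
                    t[i][j] = in.nextDouble();
            int n = in.nextInt();
            int[] a = new int[n];
            for (int i = 0; i < n; i++) a[i] = in.nextInt();

            double[] dp = new double[r];            // dp over games i..N-1, rolled backwards
            Arrays.fill(dp, 1.0);                   // after the last game, victory is certain
            for (int i = n - 1; i >= 0; i--) {
                double[] next = new double[r];
                for (int team = 0; team < r; team++) {
                    // beat AI team a[i], then either keep the current team or swap to a[i]
                    next[team] = t[team][a[i]] * Math.max(dp[team], dp[a[i]]);
                }
                dp = next;
            }
            double best = 0.0;
            for (int team = 0; team < r; team++) best = Math.max(best, dp[team]);
            System.out.printf("%.6f%n", best);
        }
    }
}
```

On the sample above this evaluates to 0.378000 (start with team 3, beat team 0, swap, and so on), matching the expected output; the work is O(N·R), comfortably within the stated limits.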
Hibernate memcached second-level cache not working
I configured the second-level cache with memcached.
Spring/Hibernate configuration:
[code="java"]
<property name="hibernateProperties">
    <props>
        <prop key="hibernate.connection.SetBigStringTryClob">true</prop>
        <prop key="hibernate.dialect">${hibernate.dialect}</prop>
        <prop key="hibernate.show_sql">${hibernate.show_sql}</prop>
        <prop key="hibernate.format_sql">${hibernate.format_sql}</prop>
        <prop key="hibernate.cache.region_prefix">${hibernate.cache.region_prefix}</prop>
        <prop key="hibernate.cache.use_second_level_cache">true</prop>
        <prop key="hibernate.cache.use_query_cache">true</prop>
        <prop key="hibernate.cache.provider_class">${hibernate.cache.provider_class}</prop>
        <prop key="hibernate.cache.use_structured_entries">${hibernate.cache.use_structured_entries}</prop>
        <prop key="hibernate.memcached.servers">${hibernate.memcached.servers}</prop>
        <prop key="hibernate.memcached.memcacheClientFactory">${hibernate.memcached.memcacheClientFactory}</prop>
    </props>
</property>
[/code]
On startup the system prints `Starting MemcachedClient...`.
The mapping file of the class to be cached includes `<cache usage="read-write"/>`.
And while debugging I can see that the objects have already been loaded into memcached:
[code="java"]
quality.cache.memcache.com.akazam.directview.develop.model.Application:0:304
quality.cache.memcache.com.akazam.directview.develop.model.Application:0:303
quality.cache.memcache.com.akazam.directview.develop.model.Application:0:302
quality.cache.memcache.com.akazam.directview.develop.model.Application:0:301
quality.cache.memcache.com.akazam.directview.develop.model.Application:0:299
quality.cache.memcache.com.akazam.directview.develop.model.Application:0:413
quality.cache.memcache.com.akazam.directview.develop.model.Application:0:121
.............
[/code]
But when I run my test calls, the queries still go to the database instead of reading from the cache. I'm stuck and don't understand what is going on. Test code:
[code="java"]
List<Application> applications = null;
// getApi().getDao() returns a HibernateGenericDao
applications = (List<Application>) getApi().getDao().loadAll(Application.class);
applications = (List<Application>) getApi().getDao().getList(" from Application ", new Object[]{});
applications = (List<Application>) getApi().getDao().loadAsHql(" from Application ");
applications = (List<Application>) getApi().getDao().loadAsHql(" from Application ");
[/code]
The console prints four MySQL query statements; the cache is not used at all. Could someone please help me figure out what is wrong?
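One explanation consistent with what is described (a guess, since the HibernateGenericDao internals are not shown): Hibernate's second-level cache is only consulted for identifier lookups such as `session.get()`/`load()`; HQL list queries like `from Application` always go to the database unless the query cache is enabled and each query is explicitly marked cacheable, in which case the query cache stores the result ids and the entities themselves are then resolved from the second-level cache. A minimal sketch at the plain `Session` level (the class name and method are illustrative, not part of the project):

```java
import java.util.List;

import org.hibernate.Session;
import org.hibernate.SessionFactory;

import com.akazam.directview.develop.model.Application;

public class SecondLevelCacheCheck {

    @SuppressWarnings("unchecked")
    static List<Application> loadApplications(SessionFactory sessionFactory) {
        Session session = sessionFactory.getCurrentSession();

        // Id lookup: served from the second-level cache after the first hit.
        Application one = (Application) session.get(Application.class, 304);
        System.out.println("loaded by id: " + one);

        // HQL list query: cached only when hibernate.cache.use_query_cache=true
        // AND the query itself is marked cacheable.
        return session.createQuery(" from Application ")
                .setCacheable(true)
                .list();
    }
}
```

So if `loadAll`/`loadAsHql` build plain HQL queries without calling `setCacheable(true)`, the four SQL statements in the console are expected even though the entities are already sitting in memcached.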