Installing win7 + ubuntu dual boot: error: no such partition

[screenshot]

3 answers

Is it OK to partition it like this?

No, it needs a more precise partitioning. PM me and I'll send you the detailed steps tonight...

Other related questions
win7 + ubuntu 11.04: a wrong operation left the system unbootable
Hi all: my system is win7 + ubuntu 11.04. Yesterday I wanted to remove Ubuntu, so I deleted the Ubuntu partition directly with win7's Disk Management. Since rebooting, the following error appears:

error: no such partition
grub rescue>

Is it still possible to restore booting into win7 (or the original Ubuntu) without damaging the win7 system? If so, what should I do? Please advise, and many thanks!
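Deleting the Ubuntu partition also removed the copy of GRUB that the MBR chained into, which is exactly what produces the grub rescue prompt. A minimal recovery sketch, assuming you can boot from a Win7 installation or repair disc (these are the standard bootrec.exe switches, run from the recovery command prompt):

```
REM Rewrite a standard Windows MBR, removing the GRUB stub:
bootrec /fixmbr
REM Rewrite the boot sector too, in case it was touched:
bootrec /fixboot
```

After that Win7 should boot directly; the old Ubuntu is gone along with its partition, so it would have to be reinstalled rather than recovered.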
Ubuntu dual boot: no boot entry found, goes straight into Windows
First, my installation process:

1. 512G SSD + 500G HDD, with the SSD as the system disk; the plan was ~350G for win10 and ~160G for ubuntu 16.04
2. UEFI boot, with the BIOS set to UEFI-only mode
3. Secure Boot was disabled before installing ubuntu; the installation used the default method
4. When the installation finished it was not possible to go straight into the system to adjust the boot setup, only to reboot
5. After rebooting there is no ubuntu boot entry; it goes straight into win10
6. The machine is a Thinkpad P50; there is no Ubuntu entry in the BIOS either, so the boot priority cannot be adjusted
7. win10 fast startup is disabled

So the installation completed, but ubuntu is nowhere to be found. Solutions tried so far:

1. Under Windows, EasyUEFI can add an EFI entry, but after a reboot it still boots Windows, and when EasyUEFI is opened again the added entry is gone
2. Booted the USB stick into "try Ubuntu" mode and tried to adjust the boot entries with efibootmgr; the entry added by EasyUEFI was still there, but adjusting with sudo efibootmgr -o 4, 0 failed and the new boot order was not visible (permission denied)
3. In try mode, mounted the target and installed grub the old MBR way, but "GPT partition label contains no BIOS Boot Partition", so that was ineffective too
4. Also in try mode, installed boot-repair; after the repair, efibootmgr shows ubuntu ahead of windows in the boot order, but rebooting still goes straight into win

None of these worked. Any suggestions from the experts?
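One detail in step 2 stands out: efibootmgr reporting "permission denied" even under sudo usually means the live session itself was booted in legacy/BIOS mode, so the kernel never exposed the EFI variables. A sketch of the checks, assuming the live USB is restarted in UEFI mode (the entry numbers below are examples; match them to your own efibootmgr -v output):

```
ls /sys/firmware/efi            # must exist; if missing, the stick booted in legacy mode
sudo efibootmgr -v              # list Boot#### entries and the current BootOrder
sudo efibootmgr -o 0004,0000    # note: no space after the comma, unlike "-o 4, 0"
```

If the firmware still reverts to Windows after this, some laptop firmwares only honor the entry named "Windows Boot Manager"; making that path load ubuntu's shim is a common workaround, but that part is firmware-specific guesswork.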
Running a self-built Android system image in the emulator on Ubuntu
```
root@huguang:~/viking# emulator -kernel ~/viking/prebuilts/qemu-kernel/arm64/kernel-qemu -sysdir ~/viking/out/target/product/generic_arm64/ -system system.img -data userdata.img -ramdisk ramdisk.img
emulator: WARNING: system partition size adjusted to match image file (2560 MB > 200 MB)
emulator: WARNING: data partition size adjusted to match image file (2560 MB > 200 MB)
emulator: ERROR: Missing initial data partition file: (null)
emulator: WARNING: encryption is off
libGL error: unable to load driver: vmwgfx_dri.so
libGL error: driver pointer missing
libGL error: failed to load driver: vmwgfx
libGL error: unable to load driver: swrast_dri.so
libGL error: failed to load driver: swrast
X Error of failed request: GLXBadContext
  Major opcode of failed request: 155 (GLX)
  Minor opcode of failed request: 6 (X_GLXIsDirect)
  Serial number of failed request: 49
  Current serial number in output stream: 48
qemu-system-aarch64: Could not open 'userdata.img': No such file or directory
```
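The two telling lines are "Missing initial data partition file: (null)" and the failure to open userdata.img: image names given as relative paths are resolved against the current directory, not against -sysdir. A sketch of a corrected invocation, assuming the images live in the product out directory named in the question:

```
export ANDROID_PRODUCT_OUT=~/viking/out/target/product/generic_arm64
emulator -kernel ~/viking/prebuilts/qemu-kernel/arm64/kernel-qemu \
         -sysdir   $ANDROID_PRODUCT_OUT \
         -system   $ANDROID_PRODUCT_OUT/system.img \
         -ramdisk  $ANDROID_PRODUCT_OUT/ramdisk.img \
         -data     $ANDROID_PRODUCT_OUT/userdata.img \
         -initdata $ANDROID_PRODUCT_OUT/userdata.img
```

Here -initdata supplies the "initial data partition file" the warning complains about; the libGL/GLX errors are a separate host-graphics issue, see the next question below.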
How do I fix this problem when starting the emulator for a built Android system?
```
root@huguang:~/android# emulator
emulator: WARNING: system partition size adjusted to match image file (2560 MB > 200 MB)
emulator: WARNING: data partition size adjusted to match image file (550 MB > 200 MB)
emulator: WARNING: encryption is off
libGL error: unable to load driver: vmwgfx_dri.so
libGL error: driver pointer missing
libGL error: failed to load driver: vmwgfx
libGL error: unable to load driver: swrast_dri.so
libGL error: failed to load driver: swrast
X Error of failed request: GLXBadContext
  Major opcode of failed request: 155 (GLX)
  Minor opcode of failed request: 6 (X_GLXIsDirect)
  Serial number of failed request: 49
  Current serial number in output stream: 48
libGL error: unable to load driver: vmwgfx_dri.so
libGL error: driver pointer missing
libGL error: failed to load driver: vmwgfx
libGL error: unable to load driver: swrast_dri.so
libGL error: failed to load driver: swrast
X Error of failed request: BadValue (integer parameter out of range for operation)
  Major opcode of failed request: 155 (GLX)
  Minor opcode of failed request: 24 (X_GLXCreateNewContext)
  Value in failed request: 0x0
  Serial number of failed request: 33
  Current serial number in output stream: 34
QObject::~QObject: Timers cannot be stopped from another thread
Segmentation fault (core dumped)
```

The sdk directory also seems to be missing the tools directory:

```
root@huguang:~/android/out/host/linux-x86/sdk/sdk/android-sdk_eng.root_linux-x86# ls
add-ons      documentation.html  RELEASE_NOTES.html  tests
build-tools  platforms           samples
docs         platform-tools     system-images
root@huguang:~/android/out/host/linux-x86/sdk/sdk/android-sdk_eng.root_linux-x86#
```
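The libGL swrast/vmwgfx failures followed by the segfault are the classic clash between the libstdc++ bundled with the emulator and the host's Mesa drivers. Two common workarounds, sketched under the assumption of a Linux x86_64 host and a writable SDK tree (neither is verified against this exact build):

```
# 1) Avoid host OpenGL entirely (slower, but reliable):
emulator -avd <your_avd> -gpu off

# 2) Or make the emulator load the system libstdc++ instead of its bundled copy:
cd $ANDROID_SDK/tools/lib64/libstdc++
mv libstdc++.so.6 libstdc++.so.6.bak
ln -s /usr/lib/x86_64-linux-gnu/libstdc++.so.6 .
```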
Estimation problem
Problem Description
“There are too many numbers here!” your boss bellows. “How am I supposed to make sense of all of this? Pare it down! Estimate!” You are disappointed. It took a lot of work to generate those numbers. But, you’ll do what your boss asks. You decide to estimate in the following way: You have an array A of numbers. You will partition it into k contiguous sections, which won’t necessarily be of the same size. Then, you’ll use a single number to estimate an entire section. In other words, for your array A of size n, you want to create another array B of size n, which has k contiguous sections. If i and j are in the same section, then B[i]=B[j]. You want to minimize the error, expressed as the sum of the absolute values of the differences (Σ|A[i]-B[i]|).

Input
There will be several test cases in the input. Each test case will begin with two integers on a line, n (1≤n≤2,000) and k (1≤k≤25, k≤n), where n is the size of the array, and k is the number of contiguous sections to use in estimation. The array A will be on the next n lines, one integer per line. Each integer element of A will be in the range from -10,000 to 10,000, inclusive. The input will end with a line with two 0s.

Output
For each test case, output a single integer on its own line, which is the minimum error you can achieve. Output no extra spaces, and do not separate answers with blank lines. All possible inputs yield answers which will fit in a signed 64-bit integer.

Sample Input
7 2
6
5
4
3
2
1
7
0 0

Sample Output
9
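One standard approach is dynamic programming over prefixes: the best single estimate for one section is a median of that section, and dp[s][j] combines s sections over the first j+1 elements. A minimal Python sketch (the naive O(n^3) cost table is fine for the samples; the full n = 2000 limit would want a faster order-statistics structure):

```python
from bisect import insort

def section_costs(A):
    """cost[i][j] = min over b of sum(|A[t] - b| for t in i..j); b = a median."""
    n = len(A)
    cost = [[0] * n for _ in range(n)]
    for i in range(n):
        w = []                      # sorted copy of A[i..j]
        for j in range(i, n):
            insort(w, A[j])
            m = w[len(w) // 2]      # any median minimizes the absolute error
            cost[i][j] = sum(abs(x - m) for x in w)
    return cost

def estimate(A, k):
    n = len(A)
    cost = section_costs(A)
    INF = float("inf")
    # dp[s][j]: minimum error covering A[0..j] with exactly s sections
    dp = [[INF] * n for _ in range(k + 1)]
    dp[1] = cost[0][:]
    for s in range(2, k + 1):
        for j in range(1, n):
            dp[s][j] = min(dp[s - 1][i] + cost[i + 1][j] for i in range(j))
    return dp[k][n - 1]

print(estimate([6, 5, 4, 3, 2, 1, 7], 2))   # sample case -> 9
```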
The Partition of A Graph: implementing the program
Problem Description
Simple enough: you’ll be given a simple undirected graph with n vertexes and m edges. Try to divide this graph so that it fits the following requirement: all edges are divided into several groups; exactly two edges are in every group, and these two edges share a common vertex. Your task is to figure out if such a partition exists.

Input
There are multiple test cases. For each test case: the first line contains two integers n (0<n<=1000) and m, the number of vertexes and edges. The following m lines each contain two integers, the two endpoints of an edge. The vertexes are numbered from 1 to n. The input terminates when n=m=0.

Output
For each test case, output one line: if such a partition exists, print “Yes”, otherwise, print “No”.

Sample Input
3 3
1 2
1 3
2 3
3 2
1 2
1 3
0 0

Sample Output
No
Yes
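A useful structural fact for this problem (stated here without proof, so treat it as an assumption to verify): a connected graph's edge set can be paired into adjacent couples exactly when its number of edges is even, so the whole check reduces to edge-count parity per connected component. A sketch in Python with union-find:

```python
def can_partition(n, edges):
    parent = list(range(n + 1))

    def find(x):                      # path-halving union-find
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:
        parent[find(u)] = find(v)

    count = {}                        # edges per connected component
    for u, v in edges:
        r = find(u)
        count[r] = count.get(r, 0) + 1
    return all(c % 2 == 0 for c in count.values())

print("Yes" if can_partition(3, [(1, 2), (1, 3), (2, 3)]) else "No")  # No
print("Yes" if can_partition(3, [(1, 2), (1, 3)]) else "No")          # Yes
```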
mysql InnoDB: Error: "mysql"."innodb_table_stats
Error log attached:

```
2015-06-23 22:39:06 1252 [Note] Giving 0 client threads a chance to die gracefully
2015-06-23 22:39:06 1252 [Note] Shutting down slave threads
2015-06-23 22:39:06 1252 [Note] Forcefully disconnecting 0 remaining clients
2015-06-23 22:39:06 1252 [Note] Binlog end
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'partition'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'BLACKHOLE'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_SYS_TABLESPACES'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_SYS_TABLES'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_FT_CONFIG'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_FT_BEING_DELETED'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_FT_DELETED'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_FT_DEFAULT_STOPWORD'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_METRICS'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_BUFFER_POOL_STATS'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE_LRU'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX_RESET'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_CMPMEM'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_CMP_RESET'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_CMP'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_LOCKS'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'INNODB_TRX'
2015-06-23 22:39:09 1252 [Note] Shutting down plugin 'InnoDB'
2015-06-23 22:39:09 1252 [Note] InnoDB: FTS optimize thread exiting.
2015-06-23 22:39:09 1252 [Note] InnoDB: Starting shutdown...
2015-06-23 22:39:12 1252 [Note] InnoDB: Shutdown completed; log sequence number 44832642
2015-06-23 22:39:12 1252 [Note] Shutting down plugin 'ARCHIVE'
2015-06-23 22:39:12 1252 [Note] Shutting down plugin 'MyISAM'
2015-06-23 22:39:12 1252 [Note] Shutting down plugin 'CSV'
2015-06-23 22:39:12 1252 [Note] Shutting down plugin 'MRG_MYISAM'
2015-06-23 22:39:12 1252 [Note] Shutting down plugin 'MEMORY'
2015-06-23 22:39:12 1252 [Note] Shutting down plugin 'sha256_password'
2015-06-23 22:39:12 1252 [Note] Shutting down plugin 'mysql_old_password'
2015-06-23 22:39:12 1252 [Note] Shutting down plugin 'mysql_native_password'
2015-06-23 22:39:12 1252 [Note] Shutting down plugin 'binlog'
2015-06-23 22:39:12 1252 [Note] /usr/sbin/mysqld: Shutdown complete
150623 22:39:12 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
150623 22:39:39 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
2015-06-23 22:39:39 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2015-06-23 22:39:39 1190 [Warning] Buffered warning: Changed limits: max_open_files: 1024 (requested 5000)
2015-06-23 22:39:39 1190 [Warning] Buffered warning: Changed limits: table_cache: 431 (requested 2000)
2015-06-23 22:39:39 1190 [Note] Plugin 'FEDERATED' is disabled.
2015-06-23 22:39:39 1190 [Note] InnoDB: Using atomics to ref count buffer pool pages
2015-06-23 22:39:39 1190 [Note] InnoDB: The InnoDB memory heap is disabled
2015-06-23 22:39:39 1190 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2015-06-23 22:39:39 1190 [Note] InnoDB: Memory barrier is not used
2015-06-23 22:39:39 1190 [Note] InnoDB: Compressed tables use zlib 1.2.3
2015-06-23 22:39:39 1190 [Note] InnoDB: Using Linux native AIO
2015-06-23 22:39:39 1190 [Note] InnoDB: Not using CPU crc32 instructions
2015-06-23 22:39:39 1190 [Note] InnoDB: Initializing buffer pool, size = 1.0G
2015-06-23 22:39:40 1190 [Note] InnoDB: Completed initialization of buffer pool
2015-06-23 22:39:40 1190 [Note] InnoDB: Highest supported file format is Barracuda.
2015-06-23 22:39:40 1190 [Note] InnoDB: 128 rollback segment(s) are active.
2015-06-23 22:39:40 1190 [Note] InnoDB: Waiting for purge to start
2015-06-23 22:39:40 1190 [Note] InnoDB: 5.6.22 started; log sequence number 44832642
2015-06-23 22:39:41 1190 [Note] Server hostname (bind-address): '*'; port: 3306
2015-06-23 22:39:41 1190 [Note] IPv6 is available.
2015-06-23 22:39:41 1190 [Note] - '::' resolves to '::';
2015-06-23 22:39:41 1190 [Note] Server socket created on IP: '::'.
2015-06-23 22:39:41 1190 [ERROR] Incorrect definition of table mysql.db: expected column 'User' at position 2 to have type char(16), found type char(80).
2015-06-23 22:39:41 1190 [ERROR] Incorrect definition of table mysql.event: expected column 'definer' at position 3 to have type char(77), found type char(141).
2015-06-23 22:39:41 1190 [ERROR] Incorrect definition of table mysql.event: expected column 'sql_mode' at position 14 to have type set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH'), found type set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','IGNORE_BAD_TABLE_OPTIONS','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_A
2015-06-23 22:39:41 1190 [ERROR] Event Scheduler: An error occurred when initializing system tables. Disabling the Event Scheduler.
2015-06-23 22:39:41 1190 [Note] /usr/sbin/mysqld: ready for connections. Version: '5.6.22' socket: '/var/lib/mysql/mysql.sock' port: 3306 MySQL Community Server (GPL)
2015-06-23 23:00:47 7f7038bc0700 InnoDB: Error: Column last_update in table "mysql"."innodb_table_stats" is INT UNSIGNED NOT NULL but should be BINARY(4) NOT NULL (type mismatch).
2015-06-23 23:00:47 7f7038bc0700 InnoDB: Error: Fetch of persistent statistics requested for table "moodle"."mdl_config" but the required system tables mysql.innodb_table_stats and mysql.innodb_index_stats are not present or have unexpected structure. Using transient stats instead.
```
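The "Incorrect definition of table mysql.db / mysql.event" errors and the innodb_table_stats type mismatch all point the same way: the system tables in the mysql schema were created by a different (older or forked) server version than the 5.6.22 binary now running. A plausible fix sketch (standard mysql_upgrade usage; back up /var/lib/mysql first):

```
mysql_upgrade -u root -p --force    # rebuilds/repairs the mysql system tables
service mysql restart
```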
Hitting problems installing CentOS 7 alongside a win7 system
My computer is an ASUS. I was installing CentOS from a USB stick and got the following error while configuring partitions:

No valid bootloader target device found
See below for details.
For a UEFI installation, you must include an EFI System Partition on a GPT-formatted disk, mounted at /boot/efi.

Begging the experts for a solution...
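The message itself names the condition: the installer was booted in UEFI mode, so it requires an EFI System Partition mounted at /boot/efi on a GPT disk. A sketch of the two usual ways out (which one applies depends on how Win7 was installed, an assumption you should verify):

```
# Check how the installer was booted (from anaconda's shell, Ctrl+Alt+F2):
ls /sys/firmware/efi   # directory present => UEFI boot

# Option 1: Win7 is on an MBR disk -> boot the USB stick in Legacy/CSM mode
#           so the installer stops demanding an ESP.
# Option 2: the disk is GPT -> in manual partitioning create a ~200 MiB
#           "EFI System Partition" (fat32) mounted at /boot/efi, then retry.
```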
sqoop 1.99.6: starting a job fails
Running start job -jid 1 fails:

```
Exception has occurred during processing command
Exception: org.apache.sqoop.common.SqoopException Message: CLIENT_0001:Server has returned exception
Stack trace:
at org.apache.sqoop.client.request.ResourceRequest (ResourceRequest.java:129)
at org.apache.sqoop.client.request.ResourceRequest (ResourceRequest.java:179)
at org.apache.sqoop.client.request.JobResourceRequest (JobResourceRequest.java:112)
at org.apache.sqoop.client.request.SqoopResourceRequests (SqoopResourceRequests.java:157)
at org.apache.sqoop.client.SqoopClient (SqoopClient.java:452)
at org.apache.sqoop.shell.StartJobFunction (StartJobFunction.java:80)
at org.apache.sqoop.shell.SqoopFunction (SqoopFunction.java:51)
at org.apache.sqoop.shell.SqoopCommand (SqoopCommand.java:135)
at org.apache.sqoop.shell.SqoopCommand (SqoopCommand.java:111)
at org.codehaus.groovy.tools.shell.Command$execute (null:-1)
at org.codehaus.groovy.runtime.callsite.CallSiteArray (CallSiteArray.java:42)
at org.codehaus.groovy.tools.shell.Command$execute (null:-1)
at org.codehaus.groovy.tools.shell.Shell (Shell.groovy:101)
at org.codehaus.groovy.tools.shell.Groovysh (Groovysh.groovy:-1)
at sun.reflect.GeneratedMethodAccessor23 (null:-1)
at sun.reflect.DelegatingMethodAccessorImpl (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method (Method.java:498)
at org.codehaus.groovy.reflection.CachedMethod (CachedMethod.java:90)
at groovy.lang.MetaMethod (MetaMethod.java:233)
at groovy.lang.MetaClassImpl (MetaClassImpl.java:1054)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter (ScriptBytecodeAdapter.java:128)
at org.codehaus.groovy.tools.shell.Groovysh (Groovysh.groovy:173)
at sun.reflect.GeneratedMethodAccessor22 (null:-1)
at sun.reflect.DelegatingMethodAccessorImpl (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method (Method.java:498)
at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoCachedMethodSiteNoUnwrapNoCoerce (PogoMetaMethodSite.java:267)
at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite (PogoMetaMethodSite.java:52)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite (AbstractCallSite.java:141)
at org.codehaus.groovy.tools.shell.Groovysh (Groovysh.groovy:121)
at org.codehaus.groovy.tools.shell.Shell (Shell.groovy:114)
at org.codehaus.groovy.tools.shell.Shell$leftShift$0 (null:-1)
at org.codehaus.groovy.tools.shell.ShellRunner (ShellRunner.groovy:88)
at org.codehaus.groovy.tools.shell.InteractiveShellRunner (InteractiveShellRunner.groovy:-1)
at sun.reflect.GeneratedMethodAccessor20 (null:-1)
at sun.reflect.DelegatingMethodAccessorImpl (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method (Method.java:498)
at org.codehaus.groovy.reflection.CachedMethod (CachedMethod.java:90)
at groovy.lang.MetaMethod (MetaMethod.java:233)
at groovy.lang.MetaClassImpl (MetaClassImpl.java:1054)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter (ScriptBytecodeAdapter.java:128)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter (ScriptBytecodeAdapter.java:148)
at org.codehaus.groovy.tools.shell.InteractiveShellRunner (InteractiveShellRunner.groovy:100)
at sun.reflect.GeneratedMethodAccessor19 (null:-1)
at sun.reflect.DelegatingMethodAccessorImpl (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method (Method.java:498)
at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoCachedMethodSiteNoUnwrapNoCoerce (PogoMetaMethodSite.java:267)
at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite (PogoMetaMethodSite.java:52)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite (AbstractCallSite.java:137)
at org.codehaus.groovy.tools.shell.ShellRunner (ShellRunner.groovy:57)
at org.codehaus.groovy.tools.shell.InteractiveShellRunner (InteractiveShellRunner.groovy:-1)
at sun.reflect.NativeMethodAccessorImpl (NativeMethodAccessorImpl.java:-2)
at sun.reflect.NativeMethodAccessorImpl (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method (Method.java:498)
at org.codehaus.groovy.reflection.CachedMethod (CachedMethod.java:90)
at groovy.lang.MetaMethod (MetaMethod.java:233)
at groovy.lang.MetaClassImpl (MetaClassImpl.java:1054)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter (ScriptBytecodeAdapter.java:128)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter (ScriptBytecodeAdapter.java:148)
at org.codehaus.groovy.tools.shell.InteractiveShellRunner (InteractiveShellRunner.groovy:66)
at java_lang_Runnable$run (null:-1)
at org.codehaus.groovy.runtime.callsite.CallSiteArray (CallSiteArray.java:42)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite (AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite (AbstractCallSite.java:112)
at org.codehaus.groovy.tools.shell.Groovysh (Groovysh.groovy:463)
at org.codehaus.groovy.tools.shell.Groovysh (Groovysh.groovy:402)
at org.apache.sqoop.shell.SqoopShell (SqoopShell.java:130)
Caused by: Exception: org.apache.sqoop.common.SqoopException Message: GENERIC_HDFS_CONNECTOR_0007:Invalid output directory - Unexpected exception
Stack trace:
at org.apache.sqoop.connector.hdfs.HdfsToInitializer (HdfsToInitializer.java:71)
at org.apache.sqoop.connector.hdfs.HdfsToInitializer (HdfsToInitializer.java:35)
at org.apache.sqoop.driver.JobManager (JobManager.java:449)
at org.apache.sqoop.driver.JobManager (JobManager.java:373)
at org.apache.sqoop.driver.JobManager (JobManager.java:276)
at org.apache.sqoop.handler.JobRequestHandler (JobRequestHandler.java:380)
at org.apache.sqoop.handler.JobRequestHandler (JobRequestHandler.java:116)
at org.apache.sqoop.server.v1.JobServlet (JobServlet.java:96)
at org.apache.sqoop.server.SqoopProtocolServlet (SqoopProtocolServlet.java:79)
at javax.servlet.http.HttpServlet (HttpServlet.java:646)
at javax.servlet.http.HttpServlet (HttpServlet.java:723)
at org.apache.catalina.core.ApplicationFilterChain (ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain (ApplicationFilterChain.java:206)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter (AuthenticationFilter.java:644)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter (DelegationTokenAuthenticationFilter.java:304)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter (AuthenticationFilter.java:592)
at org.apache.catalina.core.ApplicationFilterChain (ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain (ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve (StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve (StandardContextValve.java:191)
at org.apache.catalina.core.StandardHostValve (StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve (ErrorReportValve.java:103)
at org.apache.catalina.core.StandardEngineValve (StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter (CoyoteAdapter.java:293)
at org.apache.coyote.http11.Http11Processor (Http11Processor.java:861)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler (Http11Protocol.java:606)
at org.apache.tomcat.util.net.JIoEndpoint$Worker (JIoEndpoint.java:489)
at java.lang.Thread (Thread.java:748)
Caused by: Exception: java.io.IOException Message: Failed on local exception: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; Host Details : local host is: "node01/192.168.65.100"; destination host is: "node01":9870;
Stack trace:
at org.apache.hadoop.net.NetUtils (NetUtils.java:818)
at org.apache.hadoop.ipc.Client (Client.java:1549)
at org.apache.hadoop.ipc.Client (Client.java:1491)
at org.apache.hadoop.ipc.Client (Client.java:1388)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker (ProtobufRpcEngine.java:233)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker (ProtobufRpcEngine.java:118)
at com.sun.proxy.$Proxy19 (null:-1)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB (ClientNamenodeProtocolTranslatorPB.java:907)
at sun.reflect.NativeMethodAccessorImpl (NativeMethodAccessorImpl.java:-2)
at sun.reflect.NativeMethodAccessorImpl (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method (Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler (RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call (RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call (RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call (RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler (RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy20 (null:-1)
at org.apache.hadoop.hdfs.DFSClient (DFSClient.java:1666)
at org.apache.hadoop.hdfs.DistributedFileSystem$29 (DistributedFileSystem.java:1576)
at org.apache.hadoop.hdfs.DistributedFileSystem$29 (DistributedFileSystem.java:1573)
at org.apache.hadoop.fs.FileSystemLinkResolver (FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem (DistributedFileSystem.java:1588)
at org.apache.hadoop.fs.FileSystem (FileSystem.java:1683)
at org.apache.sqoop.connector.hdfs.HdfsToInitializer (HdfsToInitializer.java:58)
at org.apache.sqoop.connector.hdfs.HdfsToInitializer (HdfsToInitializer.java:35)
at org.apache.sqoop.driver.JobManager (JobManager.java:449)
at org.apache.sqoop.driver.JobManager (JobManager.java:373)
at org.apache.sqoop.driver.JobManager (JobManager.java:276)
at org.apache.sqoop.handler.JobRequestHandler (JobRequestHandler.java:380)
at org.apache.sqoop.handler.JobRequestHandler (JobRequestHandler.java:116)
at org.apache.sqoop.server.v1.JobServlet (JobServlet.java:96)
at org.apache.sqoop.server.SqoopProtocolServlet (SqoopProtocolServlet.java:79)
at javax.servlet.http.HttpServlet (HttpServlet.java:646)
at javax.servlet.http.HttpServlet (HttpServlet.java:723)
at org.apache.catalina.core.ApplicationFilterChain (ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain (ApplicationFilterChain.java:206)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter (AuthenticationFilter.java:644)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter (DelegationTokenAuthenticationFilter.java:304)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter (AuthenticationFilter.java:592)
at org.apache.catalina.core.ApplicationFilterChain (ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain (ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve (StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve (StandardContextValve.java:191)
at org.apache.catalina.core.StandardHostValve (StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve (ErrorReportValve.java:103)
at org.apache.catalina.core.StandardEngineValve (StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter (CoyoteAdapter.java:293)
at org.apache.coyote.http11.Http11Processor (Http11Processor.java:861)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler (Http11Protocol.java:606)
at org.apache.tomcat.util.net.JIoEndpoint$Worker (JIoEndpoint.java:489)
at java.lang.Thread (Thread.java:748)
Caused by: Exception: java.lang.Throwable Message: RPC response exceeds maximum data length
Stack trace:
at org.apache.hadoop.ipc.Client$IpcStreams (Client.java:1864)
at org.apache.hadoop.ipc.Client$Connection (Client.java:1183)
at org.apache.hadoop.ipc.Client$Connection (Client.java:1079)
```

Could some expert take a look? The key part should be this line:

Caused by: Exception: java.io.IOException Message: Failed on local exception: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; Host Details : local host is: "node01/192.168.65.100"; destination host is: "node01":9870;

but I don't know where the problem is. My link configuration:

```
From database configuration
Schema name: mysql
Table name: help_topic
Table SQL statement:
Table column names:
Partition column name:
Null value allowed for the partition column:
Boundary query:

Incremental read
Check column:
Last value:

To HDFS configuration
Override null value:
Null value:
Output format:
  0 : TEXT_FILE
  1 : SEQUENCE_FILE
Choose: 0
Compression format:
  0 : NONE
  1 : DEFAULT
  2 : DEFLATE
  3 : GZIP
  4 : BZIP2
  5 : LZO
  6 : LZ4
  7 : SNAPPY
  8 : CUSTOM
Choose: 0
Custom compression format:
Output directory: hdfs://node01:9870/sqoop
Append mode:

Throttling resources
```
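The suspicious part is the destination "node01":9870 inside the IOException: in Hadoop 3.x, 9870 is the NameNode HTTP (web UI) port, and pointing an RPC client at it is a textbook cause of "RPC response exceeds maximum data length". A sketch of the check and fix (8020 below is just the common default; take whatever fs.defaultFS really says):

```
hdfs getconf -confKey fs.defaultFS   # e.g. hdfs://node01:8020
# then recreate the job with the RPC address in the output directory:
#   Output directory: hdfs://node01:8020/sqoop
```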
Problems creating an AVD in Eclipse!! Please help!!!!
When creating an AVD environment in Eclipse I get the following:

```
Starting emulator for AVD 'AVD1'
emulator: WARNING: userdata partition is resized from 1 M to 200 M
ERROR: resizing partition e2fsck failed with exit code 8
emulator: ERROR: x86 emulation currently requires hardware acceleration!
Please ensure Intel HAXM is properly installed and usable.
CPU acceleration status: HAXM must be updated (version 1.1.1 < 6.0.1).
```

HAXM is downloaded, and virtualization is switched on.

![screenshot](https://img-ask.csdn.net/upload/201908/14/1565763489_325453.png)
![screenshot](https://img-ask.csdn.net/upload/201908/14/1565763577_148689.png)
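The log line "HAXM must be updated (version 1.1.1 < 6.0.1)" says the accelerator is present but too old, so having it installed is not enough, it has to be upgraded. A sketch of the usual route on Windows (the path is the SDK's conventional layout, adjust to yours):

```
REM 1) In the SDK Manager, download "Intel x86 Emulator Accelerator (HAXM installer)".
REM 2) Run the installer it drops into the SDK tree:
"%ANDROID_SDK_ROOT%\extras\intel\Hardware_Accelerated_Execution_Manager\intelhaxm-android.exe"
```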
ICPC: Intelligent Congruent Partition of Chocolate
The twins named Tatsuya and Kazuya love chocolate. They have found a bar of their favorite chocolate in a very strange shape. The chocolate bar looks to have been eaten partially by Mam. They, of course, claim to eat it and then will cut it into two pieces for their portions. Since they want to be sure that the chocolate bar is fairly divided, they demand that the shapes of the two pieces are congruent and that each piece is connected.

The chocolate bar consists of many square shaped blocks of chocolate connected to the adjacent square blocks of chocolate at their edges. The whole chocolate bar is also connected. Cutting the chocolate bar along some edges of square blocks, you should help them to divide it into two congruent and connected pieces of chocolate. That is, one piece fits into the other after it is rotated, turned over and moved properly.

[Figure F-1: A partially eaten chocolate bar with 18 square blocks of chocolate]

For example, there is a partially eaten chocolate bar with 18 square blocks of chocolate as depicted in Figure F-1. Cutting it along some edges of square blocks, you get two pieces of chocolate with 9 square blocks each as depicted in Figure F-2.

[Figure F-2: Partitioning of the chocolate bar in Figure F-1]

You get two congruent and connected pieces as the result. One of them consists of 9 square blocks hatched with horizontal lines and the other with vertical lines. Rotated clockwise with a right angle and turned over on a horizontal line, the upper piece exactly fits into the lower piece.

[Figure F-3: A shape that cannot be partitioned into two congruent and connected pieces]

Two square blocks touching only at their corners are regarded as not connected to each other. Figure F-3 is an example shape that cannot be partitioned into two congruent and connected pieces. Note that, without the connectivity requirement, this shape can be partitioned into two congruent pieces with three squares (Figure F-4).

[Figure F-4: Two congruent but disconnected pieces]

Your job is to write a program that judges whether a given bar of chocolate can be partitioned into such two congruent and connected pieces or not.

Input
The input is a sequence of datasets. The end of the input is indicated by a line containing two zeros separated by a space. Each dataset is formatted as follows.

w h
r(1, 1) ... r(1, w)
r(2, 1) ... r(2, w)
...
r(h, 1) ... r(h, w)

The integers w and h are the width and the height of a chocolate bar, respectively. You may assume 2 ≤ w ≤ 10 and 2 ≤ h ≤ 10. Each of the following h lines consists of w digits delimited by a space. The digit r(i, j) represents the existence of a square block of chocolate at the position (i, j) as follows.

'0': There is no chocolate (i.e., already eaten).
'1': There is a square block of chocolate.

You can assume that there are at most 36 square blocks of chocolate in the bar, i.e., the number of digit '1's representing square blocks of chocolate is at most 36 in each dataset. You can also assume that there is at least one square block of chocolate in each row and each column. You can assume that the chocolate bar is connected. Since Mam does not eat chocolate bars making holes in them, you can assume that there is no dataset that represents a bar in a shape with hole(s) as depicted in Figure F-5.

[Figure F-5: A partially eaten chocolate bar with a hole (you can assume that there is no dataset like this example)]

Output
For each dataset, output a line containing one of two uppercase character strings "YES" or "NO". "YES" means the chocolate bar indicated by the dataset can be partitioned into two congruent and connected pieces, and "NO" means it cannot be partitioned into such two pieces. No other characters should be on the output line.

Sample Input
2 2
1 1
1 1
3 3
0 1 0
1 1 0
1 1 1
4 6
1 1 1 0
1 1 1 1
1 1 1 0
1 1 1 0
0 1 1 0
1 1 1 0
7 5
0 0 1 0 0 1 1
0 1 1 1 1 1 0
0 1 1 1 1 1 0
1 1 1 1 1 1 0
1 0 0 0 1 1 0
9 7
0 0 1 0 0 0 0 0 0
0 0 1 1 0 0 0 0 0
1 1 1 1 1 1 1 1 0
1 1 1 1 1 1 1 1 0
0 1 1 1 1 1 1 1 1
0 0 0 1 1 0 0 0 0
0 0 0 1 0 0 0 0 0
9 7
0 0 1 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0
1 1 1 1 1 1 1 1 0
1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1
0 0 0 1 0 0 0 0 0
0 0 0 1 0 0 0 0 0
7 6
1 1 1 1 1 1 1
1 1 1 1 1 1 1
1 1 1 1 1 1 0
1 1 1 1 1 1 0
0 1 1 1 1 1 0
0 1 1 1 1 1 0
10 10
0 1 1 1 1 1 1 1 1 1
1 1 0 0 0 0 0 0 0 0
1 1 0 0 0 0 0 0 0 0
1 1 0 0 0 0 0 0 0 0
1 1 0 0 0 0 0 0 0 0
1 1 0 0 0 0 0 0 0 0
1 1 0 0 0 0 0 0 0 0
1 1 0 0 0 0 0 0 0 0
1 1 1 0 0 0 0 0 1 0
1 1 1 1 1 1 1 1 1 0
0 0

Output for the Sample Input
YES
NO
YES
YES
YES
NO
NO
YES
fsl i.MX6: initialization errors when booting the flashed Android system!
```
U-Boot 2009.08 (Mar 05 2013 - 17:20:28)
CPU: Freescale i.MX6 family TO1.2 at 792 MHz
Temperature: 34 C, calibration data 0x5774e769
mx6q pll1: 792MHz
mx6q pll2: 528MHz
mx6q pll3: 480MHz
mx6q pll8: 50MHz
ipg clock : 66000000Hz
ipg per clock : 66000000Hz
uart clock : 80000000Hz
cspi clock : 60000000Hz
ahb clock : 132000000Hz
axi clock : 264000000Hz
emi_slow clock: 29333333Hz
ddr clock : 528000000Hz
usdhc1 clock : 198000000Hz
usdhc2 clock : 198000000Hz
usdhc3 clock : 198000000Hz
usdhc4 clock : 198000000Hz
nfc clock : 24000000Hz
Board: i.MX6Q-SABRESD: unknown-board Board: 0x63012 [POR ]
Boot Device: SD
I2C: ready
DRAM: 1 GB
MMC: FSL_USDHC: 0,FSL_USDHC: 1,FSL_USDHC: 2,FSL_USDHC: 3
*** Warning - bad CRC or MMC, using default environment

In: serial
Out: serial
Err: serial
Net: got MAC address from IIM: 00:00:00:00:00:00
FEC0 [PRIME]
Hit any key to stop autoboot: 0
Card did not respond to voltage select!
mmc3 init failed
fastboot is in init......flash target is MMC:3
Card did not respond to voltage select!
MMC card init failed!
Bad partition index:1 for partition:boot
Bad partition index:2 for partition:recovery
Bad partition index:5 for partition:system
wait usb cable into the connector!
```
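"Card did not respond to voltage select!" on mmc3 says fastboot's flash target is a slot with no usable card, which is why every partition lookup then fails. If this Freescale U-Boot build honors the fastboot_dev environment variable (an assumption for a 2009.08 tree; printenv will tell), retargeting it at the slot that actually holds the boot medium is the usual fix:

```
printenv fastboot_dev        # see where fastboot currently points
setenv fastboot_dev mmc2     # example slot - use the one your SD/eMMC is in
saveenv
```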
Starting the MySQL service fails with ERROR 1067
Here is the log:

```
2014-09-01T06:12:49.816554Z 0 [Note] Plugin 'FEDERATED' is disabled.
2014-09-01T06:12:49.818554Z 0 [Note] InnoDB: Using atomics to ref count buffer pool pages
2014-09-01T06:12:49.820554Z 0 [Note] InnoDB: Mutexes and rw_locks use Windows interlocked functions
2014-09-01T06:12:49.822554Z 0 [Note] InnoDB: Uses system mutexes
2014-09-01T06:12:49.824554Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.3
2014-09-01T06:12:49.825555Z 0 [Warning] InnoDB: Adjusting innodb_buffer_pool_instances from 8 to 1 since innodb_buffer_pool_size is less than 1024 MiB
2014-09-01T06:12:49.830555Z 0 [Note] InnoDB: Number of pools: 1
2014-09-01T06:12:49.832555Z 0 [Note] InnoDB: Not using CPU crc32 instructions
2014-09-01T06:12:49.834555Z 0 [ERROR] InnoDB: Unable to create temporary file; errno: 2
2014-09-01T06:12:49.835555Z 0 [ERROR] Plugin 'InnoDB' init function returned error.
2014-09-01T06:12:49.837555Z 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2014-09-01T06:12:49.840555Z 0 [ERROR] Unknown/unsupported storage engine: innodb
2014-09-01T06:12:49.841555Z 0 [ERROR] Aborting
2014-09-01T06:12:49.842556Z 0 [Note] Binlog end
2014-09-01T06:12:50.010565Z 0 [Note] Shutting down plugin 'partition'
2014-09-01T06:12:50.015565Z 0 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
2014-09-01T06:12:50.021566Z 0 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES'
2014-09-01T06:12:50.027566Z 0 [Note] Shutting down plugin 'INNODB_SYS_TABLESPACES'
2014-09-01T06:12:50.033566Z 0 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS'
2014-09-01T06:12:50.039567Z 0 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'
2014-09-01T06:12:50.043567Z 0 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'
2014-09-01T06:12:50.045567Z 0 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'
2014-09-01T06:12:50.048567Z 0 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'
2014-09-01T06:12:50.050567Z 0 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS'
2014-09-01T06:12:50.052568Z 0 [Note] Shutting down plugin 'INNODB_SYS_TABLES'
2014-09-01T06:12:50.054568Z 0 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE'
2014-09-01T06:12:50.057568Z 0 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE'
2014-09-01T06:12:50.059568Z 0 [Note] Shutting down plugin 'INNODB_FT_CONFIG'
2014-09-01T06:12:50.061568Z 0 [Note] Shutting down plugin 'INNODB_FT_BEING_DELETED'
2014-09-01T06:12:50.064568Z 0 [Note] Shutting down plugin 'INNODB_FT_DELETED'
2014-09-01T06:12:50.066568Z 0 [Note] Shutting down plugin 'INNODB_FT_DEFAULT_STOPWORD'
2014-09-01T06:12:50.068568Z 0 [Note] Shutting down plugin 'INNODB_METRICS'
2014-09-01T06:12:50.071569Z 0 [Note] Shutting down plugin 'INNODB_TEMP_TABLE_INFO'
2014-09-01T06:12:50.073569Z 0 [Note] Shutting down plugin 'INNODB_BUFFER_POOL_STATS'
2014-09-01T06:12:50.074569Z 0 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE_LRU'
2014-09-01T06:12:50.076569Z 0 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'
2014-09-01T06:12:50.078569Z 0 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX_RESET'
2014-09-01T06:12:50.079569Z 0 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX'
2014-09-01T06:12:50.081569Z 0 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
2014-09-01T06:12:50.083569Z 0 [Note] Shutting down plugin 'INNODB_CMPMEM'
2014-09-01T06:12:50.084569Z 0 [Note] Shutting down plugin 'INNODB_CMP_RESET'
2014-09-01T06:12:50.086569Z 0 [Note] Shutting down plugin 'INNODB_CMP'
2014-09-01T06:12:50.087570Z 0 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
2014-09-01T06:12:50.089570Z 0 [Note] Shutting down plugin 'INNODB_LOCKS'
2014-09-01T06:12:50.091570Z 0 [Note] Shutting down plugin 'INNODB_TRX'
2014-09-01T06:12:50.092570Z 0 [Note] Shutting down plugin 'BLACKHOLE'
2014-09-01T06:12:50.094570Z 0 [Note] Shutting down plugin 'ARCHIVE'
2014-09-01T06:12:50.096570Z 0 [Note] Shutting down plugin 'MRG_MYISAM'
2014-09-01T06:12:50.097570Z 0 [Note] Shutting down plugin 'MyISAM'
2014-09-01T06:12:50.099570Z 0 [Note] Shutting down plugin 'MEMORY'
2014-09-01T06:12:50.100570Z 0 [Note] Shutting down plugin 'CSV'
2014-09-01T06:12:50.101570Z 0 [Note] Shutting down plugin 'sha256_password'
2014-09-01T06:12:50.103570Z 0 [Note] Shutting down plugin 'mysql_old_password'
2014-09-01T06:12:50.105571Z 0 [Note] Shutting down plugin 'mysql_native_password'
2014-09-01T06:12:50.107571Z 0 [Note] Shutting down plugin 'binlog'
2014-09-01T06:12:50.108571Z 0 [Note] F:\mysqlpath\MySQL\MySQL Server 5.7\bin\mysqld: Shutdown complete
```

Does anyone know what's wrong? Thanks...
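The decisive line is "[ERROR] InnoDB: Unable to create temporary file; errno: 2": errno 2 is "no such file or directory", i.e. the tmpdir MySQL wants to use does not exist, so InnoDB (and with it the whole service) aborts. A minimal my.ini sketch (the directory name is an example; any existing, writable directory works):

```
[mysqld]
tmpdir=F:/mysqltemp
```

Create the directory first, then start the service again.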
Doors and Penguins programming problem
Problem Description
The organizers of the Annual Computing Meeting have invited a number of vendors to set up booths in a large exhibition hall during the meeting to showcase their latest products. As the vendors set up their booths at their assigned locations, they discovered that the organizers did not take into account an important fact: each vendor supports either the Doors operating system or the Penguin operating system, but not both. A vendor supporting one operating system does not want a booth next to one supporting another operating system.

Unfortunately the booths have already been assigned and even set up. There is no time to reassign the booths or have them moved. To make matters worse, these vendors in fact do not even want to be in the same room with vendors supporting a different operating system. Luckily, the organizers found some portable partition screens to build a wall that can separate the two groups of vendors. They have enough material to build a wall of any length. The screens can only be used to build a straight wall. The organizers need your help to determine if it is possible to separate the two groups of vendors by a single straight wall built from the portable screens. The wall built must not touch any vendor booth (but it may be arbitrarily close to touching a booth). This will hopefully prevent one of the vendors from knocking the wall over accidentally.

Input
The input consists of a number of cases. Each case starts with 2 integers on a line separated by a single space: D and P, the number of vendors supporting the Doors and Penguins operating system, respectively (1 <= D, P <= 500). The next D lines specify the locations of the vendors supporting Doors. This is followed by P lines specifying the locations of the vendors supporting Penguins. The location of each vendor is specified by four positive integers: x1, y1, x2, y2. (x1, y1) specifies the coordinates of the southwest corner of the booth while (x2, y2) specifies the coordinates of the northeast corner. The coordinates satisfy x1 < x2 and y1 < y2. All booths are rectangular and have sides parallel to one of the compass directions. The coordinates of the southwest corner of the exhibition hall is (0,0) and the coordinates of the northeast corner is (15000, 15000). You may assume that all vendor booths are completely inside the exhibition hall and do not touch the walls of the hall. The booths do not overlap or touch each other. The end of input is indicated by D = P = 0.

Output
For each case, print the case number (starting from 1), followed by a colon and a space. Next, print the sentence:
It is possible to separate the two groups of vendors.
if it is possible to do so. Otherwise, print the sentence:
It is not possible to separate the two groups of vendors.
Print a blank line between consecutive cases.

Sample Input
3 3
10 40 20 50
50 80 60 90
30 60 40 70
30 30 40 40
50 50 60 60
10 10 20 20
2 1
10 10 20 20
40 10 50 20
25 12 35 40
0 0

Sample Output
Case 1: It is possible to separate the two groups of vendors.

Case 2: It is not possible to separate the two groups of vendors.
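A wall can be slid between the two groups exactly when the booths' corner points are linearly separable, which can be tested with a tiny margin-maximizing LP instead of explicit convex-hull geometry. A Python sketch (scipy is assumed available; this is one workable formulation, not the judge's reference solution):

```python
from scipy.optimize import linprog

def separable(doors, penguins):
    """Each booth is (x1, y1, x2, y2); variables are w1, w2, b, t for the
    wall w.x + b = 0 with margin t; separable iff the best t is positive."""
    def corners(rects):
        return [(x, y) for x1, y1, x2, y2 in rects
                       for x in (x1, x2) for y in (y1, y2)]

    A_ub, b_ub = [], []
    for x, y in corners(doors):        # w.p + b <= -t
        A_ub.append([x, y, 1.0, 1.0]); b_ub.append(0.0)
    for x, y in corners(penguins):     # w.p + b >= +t
        A_ub.append([-x, -y, -1.0, 1.0]); b_ub.append(0.0)
    res = linprog(c=[0, 0, 0, -1], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(-1, 1), (-1, 1), (None, None), (0, 1)])
    return res.status == 0 and -res.fun > 1e-7

doors    = [(10, 40, 20, 50), (50, 80, 60, 90), (30, 60, 40, 70)]
penguins = [(30, 30, 40, 40), (50, 50, 60, 60), (10, 10, 20, 20)]
print(separable(doors, penguins))      # sample case 1 -> True
```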
Kafka consumer problem: consumption times out after running for a while; looking for the cause
``` 80 DEBUG [YyjkStorageKafkaConsumerThread] YyjkStorageKafkaConsumerThread.value:{"tableName":"sw_segment","operateType":"INSERT","operateId":"4921.43.15759673490360004","indexType":"type","storageType":"elasticsearch","date":1575967330707,"tableData":{"trace_id":"4921.43.15759673490360005","endpoint_name":"/v4/default/registry/microservices/b0b6cb4d62e32b56c3bf8cb4bd2b7aed46cdffbe/instances/72abd3701b1811eaa6ea005056b6530b/heartbeat","latency":3,"end_time":1575967349039,"endpoint_id":189076,"service_instance_id":4921,"version":2,"start_time":1575967349036,"data_binary":"Cg0KC7kmK8SNreHuqv8bEsQBEP///////////wEYrLLl9+4tIK+y5ffuLTCUxQtABFABWANgO3oSCgtodHRwLm1ldGhvZBIDUFVUeogBCgN1cmwSgAEvdjQvZGVmYXVsdC9yZWdpc3RyeS9taWNyb3NlcnZpY2VzL2IwYjZjYjRkNjJlMzJiNTZjM2JmOGNiNGJkMmI3YWVkNDZjZGZmYmUvaW5zdGFuY2VzLzcyYWJkMzcwMWIxODExZWFhNmVhMDA1MDU2YjY1MzBiL2hlYXJ0YmVhdBgNILkm","service_id":13,"time_bucket":20191210164229,"is_error":0,"segment_id":"4921.43.15759673490360004"}} 16:40:05,480 DEBUG [YyjkStorageKafkaConsumerThread] YyjkStorageKafkaConsumerThread.FormatData,tableName:sw_segment|operateId:4921.43.15759673490360004|tableMap:{trace_id=4921.43.15759673490360005, endpoint_name=/v4/default/registry/microservices/b0b6cb4d62e32b56c3bf8cb4bd2b7aed46cdffbe/instances/72abd3701b1811eaa6ea005056b6530b/heartbeat, latency=3, end_time=1575967349039, endpoint_id=189076, service_instance_id=4921, version=2, start_time=1575967349036, data_binary=Cg0KC7kmK8SNreHuqv8bEsQBEP///////////wEYrLLl9+4tIK+y5ffuLTCUxQtABFABWANgO3oSCgtodHRwLm1ldGhvZBIDUFVUeogBCgN1cmwSgAEvdjQvZGVmYXVsdC9yZWdpc3RyeS9taWNyb3NlcnZpY2VzL2IwYjZjYjRkNjJlMzJiNTZjM2JmOGNiNGJkMmI3YWVkNDZjZGZmYmUvaW5zdGFuY2VzLzcyYWJkMzcwMWIxODExZWFhNmVhMDA1MDU2YjY1MzBiL2hlYXJ0YmVhdBgNILkm, service_id=13, time_bucket=20191210164229, is_error=0, segment_id=4921.43.15759673490360004} 16:40:05,480 DEBUG [ElasticSearchClient] Executing bulk [32] with 8 requests 16:40:05,481 DEBUG [MainClientExec] [exchange: 44] start execution 16:40:05,481 DEBUG [RequestAddCookies] CookieSpec selected: default 16:40:05,481 DEBUG [RequestAuthCache] Re-using cached 'basic' auth scheme for http://10.23.11.224:9200 16:40:05,481 DEBUG [RequestAuthCache] No credentials for preemptive authentication 16:40:05,481 DEBUG [InternalHttpAsyncClient] [exchange: 44] Request connection for {}->http://10.23.11.224:9200 16:40:05,481 DEBUG [PoolingNHttpClientConnectionManager] Connection request: [route: {}->http://10.23.11.224:9200][total kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 30] 16:40:05,482 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:r]: Set timeout 0 16:40:05,482 DEBUG [PoolingNHttpClientConnectionManager] Connection leased: [id: http-outgoing-0][route: {}->http://10.23.11.224:9200][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 30] 16:40:05,482 DEBUG [InternalHttpAsyncClient] [exchange: 44] Connection allocated: CPoolProxy{http-outgoing-0 [ACTIVE]} 16:40:05,482 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:r]: Set attribute http.nio.exchange-handler 16:40:05,482 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][rw:r]: Event set [w] 16:40:05,482 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] Request ready 16:40:05,482 DEBUG [InternalHttpAsyncClient] Connection route already established 16:40:05,482 DEBUG [MainClientExec] [exchange: 44] Attempt 1 to execute request 16:40:05,482 DEBUG 
[MainClientExec] Target auth state: UNCHALLENGED 16:40:05,482 DEBUG [MainClientExec] Proxy auth state: UNCHALLENGED 16:40:05,482 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][rw:w]: Set timeout 30000 16:40:05,482 DEBUG [headers] http-outgoing-0 >> POST /_bulk?timeout=1m HTTP/1.1 16:40:05,482 DEBUG [headers] http-outgoing-0 >> Content-Length: 6657 16:40:05,482 DEBUG [headers] http-outgoing-0 >> Content-Type: application/json 16:40:05,482 DEBUG [headers] http-outgoing-0 >> Host: 10.23.11.224:9200 16:40:05,482 DEBUG [headers] http-outgoing-0 >> Connection: Keep-Alive 16:40:05,482 DEBUG [headers] http-outgoing-0 >> User-Agent: Apache-HttpAsyncClient/4.1.2 (Java/1.8.0_221) 16:40:05,483 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][rw:w]: Event set [w] 16:40:05,483 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] Output ready 16:40:05,483 DEBUG [MainClientExec] [exchange: 44] produce content 16:40:05,483 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] [content length: 6657; pos: 4096; completed: false] 16:40:05,483 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][rw:w]: 4293 bytes written 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "POST /_bulk?timeout=1m HTTP/1.1[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "Content-Length: 6657[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "Content-Type: application/json[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "Host: 10.23.11.224:9200[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "User-Agent: Apache-HttpAsyncClient/4.1.2 (Java/1.8.0_221)[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "[\r][\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"11.88.15759673496781142"}}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"11.88.15759673496781143","endpoint_name":"/authentication","latency":71,"end_time":1575967349749,"endpoint_id":150,"service_instance_id":11,"version":2,"start_time":1575967349678,"data_binary":"CgwKCgtY1oK15O6q/xsS3gEIARivt+X37i0gurfl9+4tMIkBQAVQAVgBYCF6DgoHZGIudHlwZRIDc3FsehsKC2RiLmluc3RhbmNlEgx0eWd6cHRfZHpzd2p6kwEKDGRiLnN0YXRlbWVudBKCAXNlbGVjdCB0LmNoZWNrX3RpbWUsdC5leHRlbmRfaW5mbyx0LnVzZXJfbmFtZSx0LmxvZ2luX2NoYW5uZWwgZnJvbSBzc29fdXNlcl9zZXNzaW9uIHQgd2hlcmUgdC50aWNrZXQgPSA/IGFuZCB0LmxvZ291dF90aW1lIGlzIG51bGwSnAEIAhjGt+X37i0g2Lfl9+4tMJUBQAVQAVgBYCF6DgoHZGIudHlwZRIDc3FsehsKC2RiLmluc3RhbmNlEgx0eWd6cHRfZHpzd2p6UgoMZGIuc3RhdGVtZW50EkJ1cGRhdGUgc3NvX3VzZXJfc2Vzc2lvbiB0IHNldCB0LmV4dGVuZF9pbmZvID0gPyB3aGVyZSB0LnRpY2tldCA9ID8SWAgDGNm35ffuLSDrt+X37i0wlAFABVABWAFgIXoOCgdkYi50eXBlEgNzcWx6GwoLZGIuaW5zdGFuY2USDHR5Z3pwdF9kenN3anoOCgxkYi5zdGF0ZW1lbnQSZhD///////////8BGK635ffuLSD1t+X37i0wlgFYA2ABejAKA3VybBIpaHR0cDovL25zc28uZHpzd2pqYy50YXguY24vYXV0aGVudGljYXRpb256EgoLaHR0cC5tZXRob2QSA0dFVBgMIAs=","service_id":12,"time_bucket":20191210164229,"is_error":0,"segment_id":"11.88.15759673496781142"}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"4921.36.15759673457660020"}}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> 
"{"trace_id":"4921.36.15759673457660021","endpoint_name":"Mysql/JDBI/PreparedStatement/executeQuery","latency":377,"end_time":1575967346143,"endpoint_id":162,"service_instance_id":4921,"version":2,"start_time":1575967345766,"data_binary":"Cg0KC7kmJPSg4dHuqv8bErYBEP///////////wEY5pjl9+4tIN+b5ffuLTCiAUADUAFYAWAheg4KB2RiLnR5cGUSA3NxbHohCgtkYi5pbnN0YW5jZRISdHlnenB0X2R6c3dqX3d3X2tmel0KDGRiLnN0YXRlbWVudBJNc2VsZWN0ICogZnJvbSBxel9kbWIgd2hlcmUgeHlieiA9ICdZJyBBTkQgeXhieiA9ICdZJyBhbmQgbG93ZXIoY29kZSk9bG93ZXIoPykYDSC5Jg==","service_id":13,"time_bucket":20191210164225,"is_error":0,"segment_id":"4921.36.15759673457660020"}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"4921.36.15759673461450022"}}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"4921.36.15759673461450023","endpoint_name":"Mysql/JDBI/PreparedStatement/executeQuery","latency":69,"end_time":1575967346214,"endpoint_id":162,"service_instance_id":4921,"version":2,"start_time":1575967346145,"data_binary":"Cg0KC7kmJKbKyNPuqv8bErYBEP///////////wEY4Zvl9+4tIKac5ffuLTCiAUADUAFYAWAheg4KB2RiLnR5cGUSA3NxbHohCgtkYi5pbnN0YW5jZRISdHlnenB0X2R6c3dqX3d3X2tmel0KDGRiLnN0YXRlbWVudBJNc2VsZWN0ICogZnJvbSBxel9kbWIgd2hlcmUgeHlieiA9ICdZJyBBTkQgeXhieiA9ICdZJyBhbmQgbG93ZXIoY29kZSk9bG93ZXIoPykYDSC5Jg==","service_id":13,"time_bucket":20191210164226,"is_error":0,"segment_id":"4921.36.15759673461450022"}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"11.37.15759673529984298"}}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"11.37.15759673529984299","endpoint_name":"Mysql/JDBI/PreparedStatement/executeQuery","latency":14,"end_time":1575967353012,"endpoint_id":137,"service_instance_id":11,"version":2,"start_time":1575967352998,"data_binary":"CgwKCgslqsqf9O6q/xsSsAEQ////////////ARim0eX37i0gtNHl9+4tMIkBQAVQAVgBYCF6DgoHZGIudHlwZRIDc3FsehsKC2RiLmluc3RhbmNlEgx0eWd6cHRfZHpzd2p6XQoMZGIuc3RhdGVtZW50Ek1zZWxlY3QgKiBmcm9tIHF6X2RtYiB3aGVyZSB4eWJ6ID0gJ1knIEFORCB5eGJ6ID0gJ1knIGFuZCBsb3dlcihjb2RlKT1sb3dlcig/KRgMIAs=","service_id":12,"time_bucket":20191210164232,"is_error":0,"segment_id":"11.37.15759673529984298"}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"11.37.15759673530124300"}}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"11.37.15759673530124301","endpoint_name":"Mysql/JDBI/PreparedStatement/executeQuery","latency":12,"end_time":1575967353024,"endpoint_id":137,"service_instance_id":11,"version":2,"start_time":1575967353012,"data_binary":"CgwKCgsljJCo9O6q/xsSsAEQ////////////ARi00eX37i0gwNHl9+4tMIkBQAVQAVgBYCF6DgoHZGIudHlwZRIDc3FsehsKC2RiLmluc3RhbmNlEgx0eWd6cHRfZHpzd2p6XQoMZGIuc3RhdGVtZW50Ek1zZWxlY3QgKiBmcm9tIHF6X2RtYiB3aGVyZSB4eWJ6ID0gJ1knIEFORCB5eGJ6ID0gJ1knIGFuZCBsb3dlcihjb2RlKT1sb3dlcig/KRgMIAs=","service_id":12,"time_bucket":20191210164233,"is_error":0,"segment_id":"11.37.15759673530124300"}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"11.47.15759673539631576"}}[\n]" 16:40:05,483 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"11.47.15759673539631577","endpoint_name":"/v4/default/registry/mi" 16:40:05,483 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] Output ready 16:40:05,483 DEBUG [MainClientExec] [exchange: 44] produce content 16:40:05,483 DEBUG [MainClientExec] [exchange: 44] Request completed 16:40:05,484 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] [content length: 6657; pos: 6657; 
completed: true]
16:40:05,484 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][rw:w]: 2561 bytes written
16:40:05,484 DEBUG [wire] http-outgoing-0 >> "croservices/62af8840312c4750370c3ea64fd68203bf02d518/instances/1563e3d11a5f11eabd58005056b67cc4/heartbeat","latency":2,"end_time":1575967353965,"endpoint_id":146,"service_instance_id":11,"version":2,"start_time":1575967353963,"data_binary":"CgwKCgsv2LPs+O6q/xsSwwEQ////////////ARjr2OX37i0g7djl9+4tMJIBQAdQAVgDYDt6EgoLaHR0cC5tZXRob2QSA1BVVHqIAQoDdXJsEoABL3Y0L2RlZmF1bHQvcmVnaXN0cnkvbWljcm9zZXJ2aWNlcy82MmFmODg0MDMxMmM0NzUwMzcwYzNlYTY0ZmQ2ODIwM2JmMDJkNTE4L2luc3RhbmNlcy8xNTYzZTNkMTFhNWYxMWVhYmQ1ODAwNTA1NmI2N2NjNC9oZWFydGJlYXQYDCAL","service_id":12,"time_bucket":20191210164233,"is_error":0,"segment_id":"11.47.15759673539631576"}[\n]"
16:40:05,484 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"11.47.15759673539651578"}}[\n]"
16:40:05,484 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"11.47.15759673539631577","endpoint_name":"#/v4/default/registry/microservices/62af8840312c4750370c3ea64fd68203bf02d518/instances/1563e3d11a5f11eabd58005056b67cc4/heartbeat","latency":0,"end_time":1575967353965,"endpoint_id":147,"service_instance_id":11,"version":2,"start_time":1575967353965,"data_binary":"CgwKCgsv+s/t+O6q/xsSwAMQ////////////ARjt2OX37i0g7djl9+4tKpoCCAESDAoKCy/Ys+z47qr/GyALOAtCgAEvdjQvZGVmYXVsdC9yZWdpc3RyeS9taWNyb3NlcnZpY2VzLzYyYWY4ODQwMzEyYzQ3NTAzNzBjM2VhNjRmZDY4MjAzYmYwMmQ1MTgvaW5zdGFuY2VzLzE1NjNlM2QxMWE1ZjExZWFiZDU4MDA1MDU2YjY3Y2M0L2hlYXJ0YmVhdFKAAS92NC9kZWZhdWx0L3JlZ2lzdHJ5L21pY3Jvc2VydmljZXMvNjJhZjg4NDAzMTJjNDc1MDM3MGMzZWE2NGZkNjgyMDNiZjAyZDUxOC9pbnN0YW5jZXMvMTU2M2UzZDExYTVmMTFlYWJkNTgwMDUwNTZiNjdjYzQvaGVhcnRiZWF0OoEBIy92NC9kZWZhdWx0L3JlZ2lzdHJ5L21pY3Jvc2VydmljZXMvNjJhZjg4NDAzMTJjNDc1MDM3MGMzZWE2NGZkNjgyMDNiZjAyZDUxOC9pbnN0YW5jZXMvMTU2M2UzZDExYTVmMTFlYWJkNTgwMDUwNTZiNjdjYzQvaGVhcnRiZWF0UAJYA2A7GAwgCw==","service_id":12,"time_bucket":20191210164233,"is_error":0,"segment_id":"11.47.15759673539651578"}[\n]"
16:40:05,484 DEBUG [wire] http-outgoing-0 >> "{"index":{"_index":"sw_segment","_type":"type","_id":"4921.43.15759673490360004"}}[\n]"
16:40:05,484 DEBUG [wire] http-outgoing-0 >> "{"trace_id":"4921.43.15759673490360005","endpoint_name":"/v4/default/registry/microservices/b0b6cb4d62e32b56c3bf8cb4bd2b7aed46cdffbe/instances/72abd3701b1811eaa6ea005056b6530b/heartbeat","latency":3,"end_time":1575967349039,"endpoint_id":189076,"service_instance_id":4921,"version":2,"start_time":1575967349036,"data_binary":"Cg0KC7kmK8SNreHuqv8bEsQBEP///////////wEYrLLl9+4tIK+y5ffuLTCUxQtABFABWANgO3oSCgtodHRwLm1ldGhvZBIDUFVUeogBCgN1cmwSgAEvdjQvZGVmYXVsdC9yZWdpc3RyeS9taWNyb3NlcnZpY2VzL2IwYjZjYjRkNjJlMzJiNTZjM2JmOGNiNGJkMmI3YWVkNDZjZGZmYmUvaW5zdGFuY2VzLzcyYWJkMzcwMWIxODExZWFhNmVhMDA1MDU2YjY1MzBiL2hlYXJ0YmVhdBgNILkm","service_id":13,"time_bucket":20191210164229,"is_error":0,"segment_id":"4921.43.15759673490360004"}[\n]"
16:40:05,484 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] Request ready
16:40:05,484 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:w]: Event cleared [w]
16:40:06,073 DEBUG [FetchSessionHandler] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Node 0 sent an incremental fetch response for session 520315326 with 0 response partition(s), 1 implied partition(s)
16:40:07,365 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:r]: 1460 bytes read
16:40:07,365 DEBUG [wire] http-outgoing-0 << "HTTP/1.1 200 OK[\r][\n]"
16:40:07,365 DEBUG [wire] http-outgoing-0 << "content-type: application/json; charset=UTF-8[\r][\n]"
16:40:07,365 DEBUG [wire] http-outgoing-0 << "content-length: 3697[\r][\n]"
16:40:07,365 DEBUG [wire] http-outgoing-0 << "[\r][\n]"
16:40:07,365 DEBUG [wire] http-outgoing-0 << "{"took":1872,"errors":true,"items":[{"index":{"_index":"sw_segment","_type":"type","_id":"11.88.15759673496781142","status":429,"error":{"type":"es_rejected_execution_exception","reason":"rejected execution of processing of [32196501][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[sw_segment][1]] containing [5] requests, target allocation id: U61PmKwGRPe_wjkdosiWUg, primary term: 1 on EsThreadPoolExecutor[name = JKPT-ES-NODE-001/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@37fc16a1[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 4765825]]"}}},{"index":{"_index":"sw_segment","_type":"type","_id":"4921.36.15759673457660020","status":429,"error":{"type":"es_rejected_execution_exception","reason":"rejected execution of processing of [32196501][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[sw_segment][1]] containing [5] requests, target allocation id: U61PmKwGRPe_wjkdosiWUg, primary term: 1 on EsThreadPoolExecutor[name = JKPT-ES-NODE-001/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@37fc16a1[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 4765825]]"}}},{"index":{"_index":"sw_segment","_type":"type","_id":"4921.36.15759673461450022","_version":1,"result":"created","_shards""
16:40:07,365 DEBUG [headers] http-outgoing-0 << HTTP/1.1 200 OK
16:40:07,365 DEBUG [headers] http-outgoing-0 << content-type: application/json; charset=UTF-8
16:40:07,365 DEBUG [headers] http-outgoing-0 << content-length: 3697
16:40:07,365 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE(1372)] Response received
16:40:07,365 DEBUG [MainClientExec] [exchange: 44] Response received HTTP/1.1 200 OK
16:40:07,365 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE(1372)] Input ready
16:40:07,365 DEBUG [MainClientExec] [exchange: 44] Consume content
16:40:07,365 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:r]: 2325 bytes read
16:40:07,365 DEBUG [wire] http-outgoing-0 << ":{"total":1,"successful":1,"failed":0},"_seq_no":24332,"_primary_term":1,"status":201}},{"index":{"_index":"sw_segment","_type":"type","_id":"11.37.15759673529984298","status":429,"error":{"type":"es_rejected_execution_exception","reason":"rejected execution of processing of [32196501][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[sw_segment][1]] containing [5] requests, target allocation id: U61PmKwGRPe_wjkdosiWUg, primary term: 1 on EsThreadPoolExecutor[name = JKPT-ES-NODE-001/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@37fc16a1[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 4765825]]"}}},{"index":{"_index":"sw_segment","_type":"type","_id":"11.37.15759673530124300","_version":1,"result":"created","_shards":{"total":1,"successful":1,"failed":0},"_seq_no":24333,"_primary_term":1,"status":201}},{"index":{"_index":"sw_segment","_type":"type","_id":"11.47.15759673539631576","status":429,"error":{"type":"es_rejected_execution_exception","reason":"rejected execution of processing of [32196501][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[sw_segment][1]] containing [5] requests, target allocation id: U61PmKwGRPe_wjkdosiWUg, primary term: 1 on EsThreadPoolExecutor[name = JKPT-ES-NODE-001/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@37fc16a1[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 4765825]]"}}},{"index":{"_index":"sw_segment","_type":"type","_id":"11.47.15759673539651578","status":429,"error":{"type":"es_rejected_execution_exception","reason":"rejected execution of processing of [32196501][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[sw_segment][1]] containing [5] requests, target allocation id: U61PmKwGRPe_wjkdosiWUg, primary term: 1 on EsThreadPoolExecutor[name = JKPT-ES-NODE-001/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@37fc16a1[Running, pool size = 8, active threads = 8, queued tasks = 200, completed tasks = 4765825]]"}}},{"index":{"_index":"sw_segment","_type":"type","_id":"4921.43.15759673490360004","_version":1,"result":"created","_shards":{"total":1,"successful":1,"failed":0},"_seq_no":24334,"_primary_term":1,"status":201}}]}"
16:40:07,365 DEBUG [InternalHttpAsyncClient] [exchange: 44] Connection can be kept alive indefinitely
16:40:07,365 DEBUG [MainClientExec] [exchange: 44] Response processed
16:40:07,365 DEBUG [InternalHttpAsyncClient] [exchange: 44] releasing connection
16:40:07,365 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:r]: Remove attribute http.nio.exchange-handler
16:40:07,365 DEBUG [PoolingNHttpClientConnectionManager] Releasing connection: [id: http-outgoing-0][route: {}->http://10.23.11.224:9200][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 30]
16:40:07,365 DEBUG [PoolingNHttpClientConnectionManager] Connection [id: http-outgoing-0][route: {}->http://10.23.11.224:9200] can be kept alive indefinitely
16:40:07,365 DEBUG [ManagedNHttpClientConnectionImpl] http-outgoing-0 10.23.6.33:6663<->10.23.11.224:9200[ACTIVE][r:r]: Set timeout 0
16:40:07,365 DEBUG [PoolingNHttpClientConnectionManager] Connection released: [id: http-outgoing-0][route: {}->http://10.23.11.224:9200][total kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 30]
16:40:07,367 DEBUG [RestClient] request [POST http://10.23.11.224:9200/_bulk?timeout=1m] returned [HTTP/1.1 200 OK]
16:40:07,367 DEBUG [InternalIODispatch] http-outgoing-0 [ACTIVE] [content length: 3697; pos: 3697; completed: true]
16:40:08,187 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null)
16:40:08,389 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response
16:40:11,203 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null)
16:40:11,404 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response
16:40:14,221 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null)
...
16:41:20,774 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response
16:41:23,589 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null)
16:41:23,790 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response
...
16:41:38,665 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null)
16:41:38,867 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response
17:13:13,402 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null)
17:13:13,605 DEBUG [NetworkClient] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Node -1 disconnected.
17:13:13,606 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response
17:13:13,707 DEBUG [NetworkClient] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending metadata request (type=MetadataRequest, topics=compute_traceStorage, allowAutoCreate=true) to node 10.23.11.235:9092 (id: 0 rack: null)
17:13:13,907 DEBUG [Metadata] Updating last seen epoch from 0 to 0 for partition compute_traceStorage-0
17:13:13,907 DEBUG [Metadata] Updated cluster metadata version 4 to MetadataCache{cluster=Cluster(id = yQ_sRlMlSui8hlVtaPl4wg, nodes = [10.23.11.235:9092 (id: 0 rack: null)], partitions = [Partition(topic = compute_traceStorage, partition = 0, leader = 0, replicas = [0], isr = [0], offlineReplicas = [])], controller = 10.23.11.235:9092 (id: 0 rack: null))}
17:13:16,420 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Sending Heartbeat request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null)
...
17:15:02,170 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Received successful Heartbeat response
17:15:04,683 WARN [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] This member will leave the group because consumer poll timeout has expired. This means the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time processing messages. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records.
17:15:04,683 INFO [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Member consumer-1-f8b4d0da-f83c-4849-8cfd-74e748aad3c7 sending LeaveGroup request to coordinator 10.23.11.235:9092 (id: 2147483647 rack: null)
17:15:04,683 DEBUG [AbstractCoordinator] [Consumer clientId=consumer-1, groupId=jkpt-transfer-group] Disabling heartbeat thread
```
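For anyone reading this log: the bulk request itself comes back as HTTP/1.1 200 OK, but the items inside it fail with status 429 and es_rejected_execution_exception, because the write thread pool on JKPT-ES-NODE-001 has all 8 threads busy and its queue (capacity 200) full, so Elasticsearch sheds load. The consumer leaving the group at 17:15:04 is only a knock-on effect: the poll loop stalls on the slow writes until max.poll.interval.ms expires. The usual client-side mitigation is smaller bulks plus backoff-and-retry on rejections. Below is a minimal sketch with the Elasticsearch 6.x high-level REST client's BulkProcessor, whose backoff policy retries exactly these rejected-execution errors; the host, document body, and tuning numbers are illustrative, only the sw_segment index and its _type come from the log above:

```
import java.util.concurrent.TimeUnit;

import org.apache.http.HttpHost;
import org.elasticsearch.action.bulk.BackoffPolicy;
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.XContentType;

public class BackpressureBulkSketch {
    public static void main(String[] args) throws Exception {
        RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("10.23.11.224", 9200, "http")));

        BulkProcessor.Listener listener = new BulkProcessor.Listener() {
            @Override public void beforeBulk(long id, BulkRequest request) { }
            @Override public void afterBulk(long id, BulkRequest request, BulkResponse response) {
                if (response.hasFailures()) {
                    // Items still rejected after the retries below must be re-queued or logged.
                    System.err.println(response.buildFailureMessage());
                }
            }
            @Override public void afterBulk(long id, BulkRequest request, Throwable failure) {
                failure.printStackTrace();
            }
        };

        BulkProcessor bulk = BulkProcessor.builder(
                (request, bulkListener) ->
                        client.bulkAsync(request, RequestOptions.DEFAULT, bulkListener),
                listener)
                .setBulkActions(500)       // keep each bulk small relative to the queue capacity
                .setConcurrentRequests(1)  // avoid piling concurrent bulks onto a saturated node
                // Retries rejected-execution errors (the 429s in the log) with exponential backoff.
                .setBackoffPolicy(BackoffPolicy.exponentialBackoff(TimeValue.timeValueMillis(100), 5))
                .build();

        bulk.add(new IndexRequest("sw_segment", "type", "example-segment-id")
                .source("{\"latency\":2}", XContentType.JSON));

        bulk.awaitClose(30, TimeUnit.SECONDS); // flushes remaining actions and waits for retries
        client.close();
    }
}
```

On the server side, the same bottleneck can be attacked by adding data nodes or raising thread_pool.write.queue_size in elasticsearch.yml, at the cost of more heap held by queued requests. And if these writes come from SkyWalking OAP, which the sw_segment index suggests, its elasticsearch storage section exposes similar bulk-size and concurrency knobs, so no code change would be needed there.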
Kafka: Leader of a newly created topic is none
Hello everyone:
I created a topic, and when I describe it the output looks like this:

Topic:deng3 PartitionCount:2 ReplicationFactor:2 Configs:
Topic: deng3 Partition: 0 Leader: none Replicas: 3,4 Isr:
Topic: deng3 Partition: 1 Leader: none Replicas: 4,5 Isr:

Does anyone know the cause?
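Leader: none together with an empty Isr list almost always means that none of the brokers holding the replicas (ids 3 and 4 for partition 0, ids 4 and 5 for partition 1) is currently registered with the cluster: they are down, failed to start, or came back with different broker.id values, so there is no in-sync replica to elect as leader. A quick way to confirm is to list the broker ids the cluster actually sees. Here is a small sketch using the Kafka AdminClient; the bootstrap address is a placeholder for one of your own brokers:

```
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.Node;

public class BrokerCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder: point this at any reachable broker of your cluster.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "10.0.0.1:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Broker ids currently registered with the cluster.
            for (Node n : admin.describeCluster().nodes().get()) {
                System.out.println("live broker id=" + n.id() + " at " + n.host() + ":" + n.port());
            }
            // The leader/ISR view of the problem topic.
            Map<String, TopicDescription> topics =
                    admin.describeTopics(Collections.singleton("deng3")).all().get();
            topics.get("deng3").partitions().forEach(p ->
                    System.out.println("partition " + p.partition()
                            + " leader=" + p.leader() + " isr=" + p.isr()));
        }
    }
}
```

If ids 3, 4 and 5 are missing from the live-broker list, check those brokers' server.log and bring them back up; once they rejoin and catch up, leader election for deng3 should happen on its own.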
How do you solve "I can do it!"?
Problem Description
Given n elements, each of which has two properties, say Property A and Property B. For convenience, we use two integers Ai and Bi to measure the two properties. Your task is to partition the elements into two sets, say Set A and Set B, which minimizes the value of max(x∈Set A){Ax} + max(y∈Set B){By}. See the sample test cases for further details.

Input
There are multiple test cases; the first line of input contains an integer denoting the number of test cases.
For each test case, the first line contains an integer N, the number of elements (1 <= N <= 100000).
Each of the next N lines contains two integers Ai and Bi, the Property A and Property B of the i-th element (0 <= Ai, Bi <= 1000000000).

Output
For each test case, output the minimum value.

Sample Input
1
3
1 100
2 100
3 1

Sample Output
Case 1: 3
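One standard way to crack this: if the optimal Set A has maximum value a, then every element with Ai > a must sit in Set B, so after sorting the elements by A in descending order some prefix goes to Set B and the remaining suffix stays in Set A. The sample's answer of 3 comes from putting all three elements in Set A, so the maximum over an empty set must count as 0. Trying all n+1 split points with a running maximum of B then gives an O(n log n) algorithm. A minimal Java sketch of this idea, reading the format above (a sketch, not judge-verified):

```
import java.io.*;
import java.util.*;

public class Main {
    public static void main(String[] args) throws IOException {
        StreamTokenizer in = new StreamTokenizer(
                new BufferedReader(new InputStreamReader(System.in)));
        StringBuilder out = new StringBuilder();
        in.nextToken(); int t = (int) in.nval;
        for (int tc = 1; tc <= t; tc++) {
            in.nextToken(); int n = (int) in.nval;
            long[][] e = new long[n][2];
            for (int i = 0; i < n; i++) {
                in.nextToken(); e[i][0] = (long) in.nval; // Ai
                in.nextToken(); e[i][1] = (long) in.nval; // Bi
            }
            // Sort by A descending: the optimum puts some prefix into Set B.
            Arrays.sort(e, (x, y) -> Long.compare(y[0], x[0]));
            long best = e[0][0]; // k = 0: everything in Set A, empty Set B contributes 0
            long maxB = 0;
            for (int k = 1; k <= n; k++) {
                maxB = Math.max(maxB, e[k - 1][1]); // first k elements go to Set B
                long maxA = (k == n) ? 0 : e[k][0]; // largest A remaining in Set A
                best = Math.min(best, maxB + maxA);
            }
            out.append("Case ").append(tc).append(": ").append(best).append('\n');
        }
        System.out.print(out);
    }
}
```

On the sample, the sorted order is (3,1), (2,100), (1,100); the split k = 0 already gives 3 + 0 = 3, and no other split beats it.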
Jenkins: the S3-publisher plugin fails to start after installation
After I installed S3-publisher and restarted Jenkins to load it, the log reported the following errors:
```
Failed to load hudson.plugins.s3.Entry
com.amazonaws.SdkClientException: Unable to load partition metadata from com/amazonaws/partitions/endpoints.json
    at com.amazonaws.partitions.PartitionsLoader.build(PartitionsLoader.java:82)
    at com.amazonaws.regions.RegionMetadataFactory.create(RegionMetadataFactory.java:30)
    at com.amazonaws.regions.RegionUtils.initialize(RegionUtils.java:65)
    at com.amazonaws.regions.RegionUtils.getRegionMetadata(RegionUtils.java:53)
    at com.amazonaws.regions.RegionUtils.getRegionsForService(RegionUtils.java:98)
    at hudson.plugins.s3.Entry.<clinit>(Entry.java:41)
Caused: java.lang.ExceptionInInitializerError
    at sun.misc.Unsafe.ensureClassInitialized(Native Method)
    at sun.reflect.UnsafeFieldAccessorFactory.newFieldAccessor(UnsafeFieldAccessorFactory.java:43)
    at sun.reflect.ReflectionFactory.newFieldAccessor(ReflectionFactory.java:156)
    at java.lang.reflect.Field.acquireFieldAccessor(Field.java:1088)
    at java.lang.reflect.Field.getFieldAccessor(Field.java:1069)
    at java.lang.reflect.Field.get(Field.java:393)
    at net.java.sezpoz.IndexItem.instance(IndexItem.java:185)
Caused: java.lang.InstantiationException
    at net.java.sezpoz.IndexItem.instance(IndexItem.java:193)
    at hudson.ExtensionFinder$GuiceFinder.instantiate(ExtensionFinder.java:369)
    at hudson.ExtensionFinder$GuiceFinder.access$700(ExtensionFinder.java:240)
    at hudson.ExtensionFinder$GuiceFinder$SezpozModule$1.get(ExtensionFinder.java:557)
    at com.google.inject.internal.ProviderInternalFactory.provision(ProviderInternalFactory.java:81)
    at com.google.inject.internal.InternalFactoryToInitializableAdapter.provision(InternalFactoryToInitializableAdapter.java:53)
    at com.google.inject.internal.ProviderInternalFactory$1.call(ProviderInternalFactory.java:65)
    at com.google.inject.internal.ProvisionListenerStackCallback$Provision.provision(ProvisionListenerStackCallback.java:115)
    at hudson.ExtensionFinder$GuiceFinder$SezpozModule.onProvision(ExtensionFinder.java:577)
    at com.google.inject.internal.ProvisionListenerStackCallback$Provision.provision(ProvisionListenerStackCallback.java:126)
    at com.google.inject.internal.ProvisionListenerStackCallback.provision(ProvisionListenerStackCallback.java:68)
    at com.google.inject.internal.ProviderInternalFactory.circularGet(ProviderInternalFactory.java:63)
    at com.google.inject.internal.InternalFactoryToInitializableAdapter.get(InternalFactoryToInitializableAdapter.java:45)
    at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
    at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1103)
    at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
    at com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:145)
    at hudson.ExtensionFinder$GuiceFinder$FaultTolerantScope$1.get(ExtensionFinder.java:440)
    at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:41)
    at com.google.inject.internal.InjectorImpl$2$1.call(InjectorImpl.java:1016)
    at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1092)
    at com.google.inject.internal.InjectorImpl$2.get(InjectorImpl.java:1012)
    at hudson.ExtensionFinder$GuiceFinder._find(ExtensionFinder.java:402)
    at hudson.ExtensionFinder$GuiceFinder.find(ExtensionFinder.java:393)
    at hudson.ClassicPluginStrategy.findComponents(ClassicPluginStrategy.java:344)
    at hudson.ExtensionList.load(ExtensionList.java:381)
    at hudson.ExtensionList.ensureLoaded(ExtensionList.java:317)
    at hudson.ExtensionList.iterator(ExtensionList.java:172)
    at jenkins.model.Jenkins.getDescriptorByType(Jenkins.java:1599)
    at hudson.plugins.git.GitSCM.onLoaded(GitSCM.java:1870)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at hudson.init.TaskMethodFinder.invoke(TaskMethodFinder.java:104)
    at hudson.init.TaskMethodFinder$TaskImpl.run(TaskMethodFinder.java:175)
    at org.jvnet.hudson.reactor.Reactor.runTask(Reactor.java:296)
    at jenkins.model.Jenkins$5.runTask(Jenkins.java:1142)
    at org.jvnet.hudson.reactor.Reactor$2.run(Reactor.java:214)
    at org.jvnet.hudson.reactor.Reactor$Node.run(Reactor.java:117)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Jan 08, 2020 3:21:05 AM WARNING hudson.ExtensionFinder$GuiceFinder instantiate
Failed to load hudson.plugins.s3.S3BucketPublisher
java.lang.NoClassDefFoundError: Could not initialize class hudson.plugins.s3.Entry
    at hudson.plugins.s3.S3BucketPublisher$DescriptorImpl.<init>(S3BucketPublisher.java:404)
    at hudson.plugins.s3.S3BucketPublisher$DescriptorImpl.<init>(S3BucketPublisher.java:409)
    at hudson.plugins.s3.S3BucketPublisher.<clinit>(S3BucketPublisher.java:41)
    at sun.misc.Unsafe.ensureClassInitialized(Native Method)
    at sun.reflect.UnsafeFieldAccessorFactory.newFieldAccessor(UnsafeFieldAccessorFactory.java:43)
    at sun.reflect.ReflectionFactory.newFieldAccessor(ReflectionFactory.java:156)
    at java.lang.reflect.Field.acquireFieldAccessor(Field.java:1088)
    at java.lang.reflect.Field.getFieldAccessor(Field.java:1069)
    at java.lang.reflect.Field.get(Field.java:393)
    at net.java.sezpoz.IndexItem.instance(IndexItem.java:185)
Caused: java.lang.InstantiationException
    at net.java.sezpoz.IndexItem.instance(IndexItem.java:193)
    at hudson.ExtensionFinder$GuiceFinder.instantiate(ExtensionFinder.java:369)
    at hudson.ExtensionFinder$GuiceFinder.access$700(ExtensionFinder.java:240)
    at hudson.ExtensionFinder$GuiceFinder$SezpozModule$1.get(ExtensionFinder.java:557)
    at com.google.inject.internal.ProviderInternalFactory.provision(ProviderInternalFactory.java:81)
    at com.google.inject.internal.InternalFactoryToInitializableAdapter.provision(InternalFactoryToInitializableAdapter.java:53)
    at com.google.inject.internal.ProviderInternalFactory$1.call(ProviderInternalFactory.java:65)
    at com.google.inject.internal.ProvisionListenerStackCallback$Provision.provision(ProvisionListenerStackCallback.java:115)
    at hudson.ExtensionFinder$GuiceFinder$SezpozModule.onProvision(ExtensionFinder.java:577)
    at com.google.inject.internal.ProvisionListenerStackCallback$Provision.provision(ProvisionListenerStackCallback.java:126)
    at com.google.inject.internal.ProvisionListenerStackCallback.provision(ProvisionListenerStackCallback.java:68)
    at com.google.inject.internal.ProviderInternalFactory.circularGet(ProviderInternalFactory.java:63)
    at com.google.inject.internal.InternalFactoryToInitializableAdapter.get(InternalFactoryToInitializableAdapter.java:45)
    at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
    at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1103)
    at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
    at com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:145)
    at hudson.ExtensionFinder$GuiceFinder$FaultTolerantScope$1.get(ExtensionFinder.java:440)
    at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:41)
    at com.google.inject.internal.InjectorImpl$2$1.call(InjectorImpl.java:1016)
    at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1092)
    at com.google.inject.internal.InjectorImpl$2.get(InjectorImpl.java:1012)
    at hudson.ExtensionFinder$GuiceFinder._find(ExtensionFinder.java:402)
    at hudson.ExtensionFinder$GuiceFinder.find(ExtensionFinder.java:393)
    at hudson.ClassicPluginStrategy.findComponents(ClassicPluginStrategy.java:344)
    at hudson.ExtensionList.load(ExtensionList.java:381)
    at hudson.ExtensionList.ensureLoaded(ExtensionList.java:317)
    at hudson.ExtensionList.iterator(ExtensionList.java:172)
    at jenkins.model.Jenkins.getDescriptorByType(Jenkins.java:1599)
    at hudson.plugins.git.GitSCM.onLoaded(GitSCM.java:1870)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at hudson.init.TaskMethodFinder.invoke(TaskMethodFinder.java:104)
    at hudson.init.TaskMethodFinder$TaskImpl.run(TaskMethodFinder.java:175)
    at org.jvnet.hudson.reactor.Reactor.runTask(Reactor.java:296)
    at jenkins.model.Jenkins$5.runTask(Jenkins.java:1142)
    at org.jvnet.hudson.reactor.Reactor$2.run(Reactor.java:214)
    at org.jvnet.hudson.reactor.Reactor$Node.run(Reactor.java:117)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
```
I don't know why this happens. Can anyone help me?
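The root failure here is the very first SdkClientException: PartitionsLoader cannot read com/amazonaws/partitions/endpoints.json, a resource the AWS SDK ships inside aws-java-sdk-core. The later NoClassDefFoundError for hudson.plugins.s3.Entry is just fallout from that static initializer dying. In practice this usually points to a corrupt or incomplete plugin download, or to a second, older aws-java-sdk jar shadowing the bundled one; deleting the s3 plugin files under $JENKINS_HOME/plugins and reinstalling the plugin is the common fix. If you want to verify the jar directly, here is a small hypothetical standalone check (not part of the plugin) to run with the same aws-java-sdk-core jar on the classpath that Jenkins loads:

```
import java.net.URL;

public class EndpointsJsonCheck {
    public static void main(String[] args) {
        String resource = "com/amazonaws/partitions/endpoints.json";
        // Ask the classloader for the same resource PartitionsLoader looks up.
        URL url = Thread.currentThread().getContextClassLoader().getResource(resource);
        if (url == null) {
            System.out.println("MISSING: " + resource
                    + " (aws-java-sdk-core is absent, corrupt, or shadowed)");
        } else {
            System.out.println("Found: " + url);
        }
    }
}
```

If the resource resolves here but still fails inside Jenkins, the issue is more likely classloader isolation between plugins, and checking the version of the aws-java-sdk plugin that S3-publisher depends on would be the next step.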