IPC cameras with the same IP: changing them to different IPs on the same subnet over a TCP connection

The tool connects by IP and port. The current problem: the camera search turns up two cameras with the same IP. After connecting to one of them and changing its IP, the other camera's IP gets changed at the same time.
How can I change only one of them?
Could the cause be that the connection is addressed by IP?
Any good ideas would be appreciated.

1 answer

Disconnect the others and modify them one at a time.

qq_37602698
@ls_er: That is exactly the problem: I cannot disconnect the others. If I could, this would be easy.
Replied about a year ago
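A note on why this happens and the usual way around it: with two devices on one IP, anything addressed to that IP is ambiguous, and many camera search tools send their set-IP command as a UDP broadcast, in which case every device matching the old IP applies it. Vendor SDKs normally avoid this by addressing the device by its MAC (learned during discovery) rather than by IP. The sketch below only illustrates that idea; the packet layout, magic number, and port are invented for illustration, not any real vendor protocol or the tool used in the question.

```
/*
 * Hypothetical sketch: target one camera by MAC over UDP broadcast.
 * The wire format here is NOT a real vendor protocol; it only shows why
 * a set-IP command should carry the device MAC instead of relying on
 * the (duplicated) IP address.
 */
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

struct set_ip_req {            /* hypothetical wire format */
    uint32_t magic;            /* protocol marker */
    uint8_t  target_mac[6];    /* only the camera with this MAC applies it */
    uint32_t new_ip;           /* network byte order */
};

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int yes = 1;
    setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &yes, sizeof(yes));

    struct set_ip_req req = { .magic = htonl(0x12345678) };
    uint8_t mac[6] = {0x00, 0x11, 0x22, 0x33, 0x44, 0x55}; /* from discovery */
    memcpy(req.target_mac, mac, 6);
    req.new_ip = inet_addr("192.168.1.65");   /* the one camera's new address */

    struct sockaddr_in dst = {
        .sin_family      = AF_INET,
        .sin_port        = htons(9000),                  /* hypothetical port */
        .sin_addr.s_addr = inet_addr("255.255.255.255"),
    };
    sendto(fd, &req, sizeof(req), 0, (struct sockaddr *)&dst, sizeof(dst));
    close(fd);
    return 0;
}
```

If the cameras' SDK or ONVIF interface exposes a MAC-keyed configuration call, that is the reliable path; otherwise physically disconnecting all but one device remains the safe fallback, as the answer says.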
Other related questions
Linux IPC message queues: how do multiple sender processes keep msgsnd data synchronized?
Only one process receives from the queue, while many processes send msgs to it concurrently. Does the sending side have to synchronize the messages itself, or has the kernel already taken care of this, so that user space does not need to worry about synchronization?
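For System V message queues the kernel serializes access to the queue: each msgsnd() enqueues one complete message atomically, so concurrent senders cannot interleave message bodies and need no extra user-space locking for the send itself (the relative order of messages from different senders is still unspecified). A minimal sketch:

```
/* Minimal System V message-queue sketch: each msgsnd() enqueues one
 * complete message atomically, so concurrent senders need no extra
 * user-space locking (their relative order is still unspecified). */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <sys/wait.h>
#include <unistd.h>

struct msgbuf { long mtype; char mtext[32]; };

int main(void) {
    int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);

    for (int i = 0; i < 2; i++) {            /* two concurrent senders */
        if (fork() == 0) {
            struct msgbuf m = { .mtype = 1 };
            snprintf(m.mtext, sizeof(m.mtext), "hello from pid %d", getpid());
            msgsnd(qid, &m, sizeof(m.mtext), 0);  /* atomic per message */
            _exit(0);
        }
    }

    for (int i = 0; i < 2; i++) {            /* single receiver drains both */
        struct msgbuf m;
        msgrcv(qid, &m, sizeof(m.mtext), 0, 0);
        printf("got: %s\n", m.mtext);
    }
    while (wait(NULL) > 0) {}
    msgctl(qid, IPC_RMID, NULL);             /* remove the queue */
    return 0;
}
```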
Uninitialized variable: i
A static-analysis tool reports a logic error, an uninitialized variable: Uninitialized variable: i
```
int i, comNum;
TExtComDevice* ptExtComDev;
Bool TestDone = TRUE;
Int32 retVal;

#if defined(_IPC6XX_) || defined(_IPC6X5_GN_)
comNum = MAX_SERIAL_NUM;
#elif defined(_CHEETAH1_)
comNum = 1;
#else
comNum = 0;
#endif

for (i = 0; i < comNum; i++) {
    ptExtComDev = &gtExtComDevice[i];
    if (100 != ptExtComDev->eProtocolType) {
        TestDone = FALSE;
        KOSA_printf("SerialPost %d is not OK!\n", i);
        break;
    }
}

if (i == comNum) /* the line the tool flags */
{
    /* body omitted */
}
```
Although i starts out as a bare declaration (int i), it has been assigned by the time the for() loop finishes, so it should not be in an uninitialized state at the comparison. Can anyone explain the warning?
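Strictly speaking, the for-init assigns i = 0 even when comNum is 0 and the loop body never runs, so the warning looks like a conservative false positive (some checkers fail to track the loop's init expression through every preprocessor configuration). The cheap way to silence it is to initialize the variable at its declaration; a minimal self-contained repro of the pattern:

```
#include <stdio.h>

/* Initializing i at its declaration keeps conservative checkers happy;
 * semantically the for-init already assigns it before the comparison,
 * so the original warning is a false positive. */
int main(void) {
    int i = 0;                 /* initialize at declaration */
    int comNum = 0;            /* e.g. the #else branch */
    for (i = 0; i < comNum; i++) {
        /* per-port check elided */
    }
    if (i == comNum)
        printf("all %d ports OK\n", comNum);
    return 0;
}
```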
RMAN restore: after all those steps, why has nothing changed?
Why is my database not actually restored after I restore it? The steps are below; any help would be appreciated.
```
[oracle@dg1 ~]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Mon Dec 23 15:08:56 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1554676977)

RMAN> shutdown immediate

using target database control file instead of recovery catalog
database closed
database dismounted
Oracle instance shut down

RMAN> startup mount;

connected to target database (not started)
Oracle instance started
database mounted

Total System Global Area    1795159104 bytes
Fixed Size                     8897600 bytes
Variable Size                503316480 bytes
Database Buffers            1275068416 bytes
Redo Buffers                   7876608 bytes

RMAN> restore database;

Starting restore at 23-DEC-19
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=11 device type=DISK
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /oracleDB/app/oracle/oradata/ORCL/system01.dbf
channel ORA_DISK_1: restoring datafile 00002 to /oracleDB/app/oracle/oradata/ORCL/SNSYSINDX_TBS.dbf
channel ORA_DISK_1: restoring datafile 00003 to /oracleDB/app/oracle/oradata/ORCL/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00004 to /oracleDB/app/oracle/oradata/ORCL/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00005 to /oracleDB/app/oracle/oradata/ORCL/SNSYSDATA_TBS.dbf
channel ORA_DISK_1: restoring datafile 00007 to /oracleDB/app/oracle/oradata/ORCL/users01.dbf
channel ORA_DISK_1: restoring datafile 00008 to /oracleDB/app/oracle/oradata/ORCL/SNOPDATA_TBS.dbf
channel ORA_DISK_1: restoring datafile 00009 to /oracleDB/app/oracle/oradata/ORCL/SNOPINDX_TBS.dbf
channel ORA_DISK_1: restoring datafile 00010 to /oracleDB/app/oracle/oradata/ORCL/SNMQDATA_TBS.dbf
channel ORA_DISK_1: restoring datafile 00011 to /oracleDB/app/oracle/oradata/ORCL/SNMQINDX_TBS.dbf
channel ORA_DISK_1: restoring datafile 00012 to /oracleDB/app/oracle/oradata/ORCL/SNMONITORDATA_TBS.dbf
channel ORA_DISK_1: restoring datafile 00013 to /oracleDB/app/oracle/oradata/ORCL/SNMONITORINDX_TBS.dbf
channel ORA_DISK_1: restoring datafile 00014 to /oracleDB/app/oracle/oradata/ORCL/SNHRDATA_TBS.dbf
channel ORA_DISK_1: restoring datafile 00015 to /oracleDB/app/oracle/oradata/ORCL/SNHRINDX_TBS.dbf
channel ORA_DISK_1: restoring datafile 00016 to /oracleDB/app/oracle/oradata/ORCL/SNBUDGE_TBS.dbf
channel ORA_DISK_1: restoring datafile 00017 to /oracleDB/app/oracle/oradata/ORCL/OPTDATA_TBS.dbf
channel ORA_DISK_1: restoring datafile 00018 to /oracleDB/app/oracle/oradata/ORCL/RATIFY_TBS.dbf
channel ORA_DISK_1: restoring datafile 00019 to /oracleDB/app/oracle/oradata/ORCL/COMM_TBS.dbf
channel ORA_DISK_1: restoring datafile 00020 to /oracleDB/app/oracle/oradata/ORCL/MSG_TBS.dbf
channel ORA_DISK_1: restoring datafile 00021 to /oracleDB/app/oracle/oradata/ORCL/PO_TBS.dbf
channel ORA_DISK_1: restoring datafile 00022 to /oracleDB/app/oracle/oradata/ORCL/FUND_TBS.dbf
channel ORA_DISK_1: restoring datafile 00023 to /oracleDB/app/oracle/oradata/ORCL/OFIN_TBS.dbf
channel ORA_DISK_1: restoring datafile 00024 to /oracleDB/app/oracle/oradata/ORCL/TAX_TBS.dbf
channel ORA_DISK_1: restoring datafile 00025 to /oracleDB/app/oracle/oradata/ORCL/CARGO_TBS.dbf
channel ORA_DISK_1: restoring datafile 00026 to /oracleDB/app/oracle/oradata/ORCL/INV_TBS.dbf
channel ORA_DISK_1: restoring datafile 00027 to /oracleDB/app/oracle/oradata/ORCL/CARGO_IDX_TBS.dbf
channel ORA_DISK_1: restoring datafile 00028 to /oracleDB/app/oracle/oradata/ORCL/PO_IDX_TBS.dbf
channel ORA_DISK_1: restoring datafile 00029 to /oracleDB/app/oracle/oradata/ORCL/FUND_IDX_TBS.dbf
channel ORA_DISK_1: restoring datafile 00030 to /oracleDB/app/oracle/oradata/ORCL/COST_IDX_TBS.dbf
channel ORA_DISK_1: restoring datafile 00031 to /oracleDB/app/oracle/oradata/ORCL/STOCK_IDX_TBS.dbf
channel ORA_DISK_1: restoring datafile 00032 to /oracleDB/app/oracle/oradata/ORCL/FIN_IDX_TBS.dbf
channel ORA_DISK_1: restoring datafile 00033 to /oracleDB/app/oracle/oradata/ORCL/FIN_TBS.dbf
channel ORA_DISK_1: restoring datafile 00034 to /oracleDB/app/oracle/oradata/ORCL/CFIN_TBS.dbf
channel ORA_DISK_1: restoring datafile 00035 to /oracleDB/app/oracle/oradata/ORCL/CFIN_IDX_TBS.dbf
channel ORA_DISK_1: restoring datafile 00036 to /oracleDB/app/oracle/oradata/ORCL/COST_TBS.dbf
channel ORA_DISK_1: restoring datafile 00037 to /oracleDB/app/oracle/oradata/ORCL/INV_IDX_TBS.dbf
channel ORA_DISK_1: restoring datafile 00038 to /oracleDB/app/oracle/oradata/ORCL/STOCK_TBS.dbf
channel ORA_DISK_1: restoring datafile 00039 to /oracleDB/app/oracle/oradata/ORCL/MSG_IDX_TBS.dbf
channel ORA_DISK_1: restoring datafile 00040 to /oracleDB/app/oracle/oradata/ORCL/SYSDATA_TBS.dbf
channel ORA_DISK_1: restoring datafile 00041 to /oracleDB/app/oracle/oradata/ORCL/FINDATA_TBS.dbf
channel ORA_DISK_1: restoring datafile 00042 to /oracleDB/app/oracle/oradata/ORCL/OPTINDX_TBS.dbf
channel ORA_DISK_1: restoring datafile 00043 to /oracleDB/app/oracle/oradata/ORCL/EDI_TBS.dbf
channel ORA_DISK_1: restoring datafile 00044 to /oracleDB/app/oracle/oradata/ORCL/OFIN_IDX_TBS.dbf
channel ORA_DISK_1: restoring datafile 00045 to /oracleDB/app/oracle/oradata/ORCL/SYSINDX_TBS.dbf
channel ORA_DISK_1: restoring datafile 00046 to /oracleDB/app/oracle/oradata/ORCL/FININDX_TBS.dbf
channel ORA_DISK_1: restoring datafile 00047 to /oracleDB/app/oracle/oradata/ORCL/HRDATA_TBS.dbf
channel ORA_DISK_1: restoring datafile 00048 to /oracleDB/app/oracle/oradata/ORCL/HRINDX_TBS.dbf
channel ORA_DISK_1: restoring datafile 00049 to /oracleDB/app/oracle/oradata/ORCL/AVICERP_IDX_TBS.dbf
channel ORA_DISK_1: restoring datafile 00050 to /oracleDB/app/oracle/oradata/ORCL/AVICERP_TBS.dbf
channel ORA_DISK_1: restoring datafile 00051 to /oracleDB/app/oracle/oradata/ORCL/AVICWMS_IDX_TBS.dbf
channel ORA_DISK_1: restoring datafile 00052 to /oracleDB/app/oracle/oradata/ORCL/AVICWMS_TBS.dbf
channel ORA_DISK_1: restoring datafile 00053 to /oracleDB/app/oracle/oradata/ORCL/AVICIDC_TBS.dbf
channel ORA_DISK_1: restoring datafile 00054 to /oracleDB/app/oracle/oradata/ORCL/AVICIDC_IDX_TBS.dbf
channel ORA_DISK_1: reading from backup piece /oracleDB/app/oracle/fast_recovery_area/ORCL_P/backupset/2019_12_23/o1_mf_nnndf_TAG20191223T092841_h0062bm7_.bkp
channel ORA_DISK_1: piece handle=/oracleDB/app/oracle/fast_recovery_area/ORCL_P/backupset/2019_12_23/o1_mf_nnndf_TAG20191223T092841_h0062bm7_.bkp tag=TAG20191223T092841
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:01:26
Finished restore at 23-DEC-19

RMAN> recover database;

Starting recover at 23-DEC-19
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:03

Finished recover at 23-DEC-19

RMAN> alter database open;

Statement processed

RMAN> exit

Recovery Manager complete.
[oracle@dg1 ~]$ lsnrctl status

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 23-DEC-2019 15:13:52

Copyright (c) 1991, 2019, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dg1)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                19-DEC-2019 10:45:38
Uptime                    4 days 4 hr. 28 min. 14 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /oracleDB/app/oracle/product/19.0.0/dbhome_1/network/admin/listener.ora
Listener Log File         /oracleDB/app/oracle/diag/tnslsnr/dg1/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dg1)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=dg1)(PORT=5500))(Security=(my_wallet_directory=/oracleDB/app/oracle/admin/orcl_p/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "orcl" has 1 instance(s).
  Instance "orcl", status UNKNOWN, has 1 handler(s) for this service...
Service "orclXDB" has 1 instance(s).
  Instance "orcl", status READY, has 1 handler(s) for this service...
Service "orcl_p" has 1 instance(s).
  Instance "orcl", status READY, has 1 handler(s) for this service...
The command completed successfully
[oracle@dg1 ~]$
```
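One likely explanation for "nothing changed": restore database lays the datafiles down from the backup, but recover database with no UNTIL clause then applies all available redo and rolls the files forward to the present, so the opened database ends up exactly where it started. If the goal was to return to an earlier state, RMAN needs a point-in-time clause; a hedged sketch (the timestamp is a placeholder, and a point-in-time recovery must be opened with RESETLOGS):

```
run {
  set until time "to_date('2019-12-23 09:00:00','yyyy-mm-dd hh24:mi:ss')";
  restore database;
  recover database;
}
alter database open resetlogs;
```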
ONVIF IPC server development: switching to the sub-stream fails on a Hikvision NVR
Symptom: my IPC runs an ONVIF server that I wrote myself. In the ONVIF Test Tool both of my streams play fine, but on the NVR only the main stream plays, and switching streams produces an error. See: ![screenshot](https://img-ask.csdn.net/upload/201911/13/1573636391_738241.jpg) ![screenshot](https://img-ask.csdn.net/upload/201911/13/1573636421_365356.jpg) I then tried another vendor's IPC (their ONVIF implementation is finished) and stream switching worked. A packet capture shows the following: ![screenshot](https://img-ask.csdn.net/upload/201911/13/1573636594_741597.png) It looks like the switch is done directly over RTSP by changing the stream URL. Should this be implemented inside the ONVIF protocol, or how should it be done? Many thanks.
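For context: in ONVIF each stream is a separate media profile. An NVR typically calls GetProfiles, then calls GetStreamUri with the sub-stream's profile token, and opens a fresh RTSP session on the URI it gets back; if the second profile returns a missing, duplicate, or unplayable URI, the switch fails in exactly this way. A sketch of the SOAP body the NVR is likely to send (the profile token is illustrative):

```
<trt:GetStreamUri xmlns:trt="http://www.onvif.org/ver10/media/wsdl"
                  xmlns:tt="http://www.onvif.org/ver10/schema">
  <trt:StreamSetup>
    <tt:Stream>RTP-Unicast</tt:Stream>
    <tt:Transport>
      <tt:Protocol>RTSP</tt:Protocol>
    </tt:Transport>
  </trt:StreamSetup>
  <!-- illustrative token: whatever your GetProfiles returns for the sub-stream -->
  <trt:ProfileToken>SubStreamProfile</trt:ProfileToken>
</trt:GetStreamUri>
```

So the switching itself happens over RTSP, as the capture shows, but the URI the NVR switches to comes from the ONVIF side: check what your server returns for the second profile.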
Failed to retrieve data from /webhdfs/v1?op=LISTSTATUS: Server Error
Dear experts: using the Firefox bundled with CentOS 7 to view HDFS, the DataNode and NameNode pages display normally, but opening Utilities > Browse Directory produces the following error: ![screenshot](https://img-ask.csdn.net/upload/201911/30/1575127373_682984.jpg) Which configuration files should I start with to track the fault down? Thanks!
Relevant log:
2019-11-30 23:24:17,964 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1778ms No GCs detected
2019-11-30 23:25:41,727 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on default port 9820, call Call#2 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.mkdirs from 192.168.100.125:57842: org.apache.hadoop.fs.ParentNotDirectoryException: /test (is not a directory)
Solved: switching from JDK 13.0.1 to JDK 8 fixed it. Unbelievable, I spent a whole week and it was this!
Can anyone provide a pjsip-based, device-side GB28181 demo?
It should include the GB28181-2016 signaling (sending, receiving, and parsing the messages); ideally a demo that runs directly on an IPC and can register against an SPVMN platform. Thanks!
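Not a full demo, but the registration half of a GB28181 device is a plain SIP REGISTER, which pjsip's high-level pjsua API can do in a few calls. A heavily hedged sketch: the 20-digit device/platform IDs, server address, and password are placeholders following GB28181 conventions, and the GB28181-specific parts (MANSCDP XML bodies over SIP MESSAGE, keepalive, catalog, INVITE handling) still have to be built on top.

```
#include <pjsua-lib/pjsua.h>

/* Minimal pjsua registration sketch in the GB28181 style: the device
 * registers its 20-digit ID with the platform (SIP server). IDs and
 * addresses are placeholders, not a working platform. */
int main(void)
{
    pjsua_create();

    pjsua_config cfg;
    pjsua_config_default(&cfg);
    pjsua_init(&cfg, NULL, NULL);

    pjsua_transport_config tcfg;
    pjsua_transport_config_default(&tcfg);
    tcfg.port = 5060;
    pjsua_transport_create(PJSIP_TRANSPORT_UDP, &tcfg, NULL);
    pjsua_start();

    pjsua_acc_config acc;
    pjsua_acc_config_default(&acc);
    acc.id      = pj_str("sip:34020000001320000001@3402000000");         /* device ID @ SIP domain */
    acc.reg_uri = pj_str("sip:34020000002000000001@192.168.1.100:5060"); /* platform ID @ server */
    acc.cred_count = 1;
    acc.cred_info[0].realm     = pj_str("3402000000");
    acc.cred_info[0].scheme    = pj_str("digest");
    acc.cred_info[0].username  = pj_str("34020000001320000001");
    acc.cred_info[0].data_type = PJSIP_CRED_DATA_PLAIN_PASSWD;
    acc.cred_info[0].data      = pj_str("12345678");

    pjsua_acc_id acc_id;
    pjsua_acc_add(&acc, PJ_TRUE, &acc_id);

    /* handle SIP MESSAGE (keepalive/catalog, MANSCDP XML) and INVITE here */
    for (;;) pj_thread_sleep(1000);
}
```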
Why can data still be received from the peer after shutdown() closes the read half?
I read in UNIX Network Programming that shutting down the read half with shutdown() discards incoming data, but I wrote a test program, ran it on a single host, and after closing the read half I can still read data. Why is that?
client
```
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>          /* close(), sleep() */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>       /* inet_addr() */

int main(int argc, char **argv)
{
    int fd;
    struct sockaddr_in addr;
    int ret;
    int i;
    char buf[256] = {0};

    fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        fprintf(stderr, "socket error\n");
        return -1;
    }

    addr.sin_family = AF_INET;
    addr.sin_port = htons(8888);
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");

    ret = connect(fd, (struct sockaddr *)&addr, sizeof(addr));
    if (ret < 0) {
        fprintf(stderr, "connect error:%s\n", strerror(errno));
        close(fd);
        return -1;
    }

    i = 0;
    while (1) {
#if 1
        if (i == 5) {
            printf("shutdown read\n");
            shutdown(fd, SHUT_RD);   /* close only the read half */
        }
#endif
        if (i == 10) {
            printf("client break\n");
            shutdown(fd, SHUT_WR);
            break;
        }
        write(fd, "1234567890", 10);
        i++;
        memset(buf, 0, sizeof(buf));
        if ((ret = read(fd, buf, sizeof(buf))) > 0) {
            printf("ret=%d,buf:%s\n", ret, buf);
        }
        sleep(1);
    }
    sleep(2);
    close(fd);
    return 0;
}
```
server
```
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <pthread.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>

void *recive_data(void *p)   /* thread entry must return void * */
{
    int fd = *((int *)p);
    char buf[256] = {0};
    char out_data[256] = {0};
    int ret;
    int i = 0;

    while (1) {
        memset(buf, 0, sizeof(buf));
        if ((ret = read(fd, buf, sizeof(buf))) > 0) {
            printf("server buf:%s\n", buf);   /* the original printf was missing the buf argument */
        } else {
            printf("ret=%d,recv fin\n", ret);
            break;
        }
        i++;
        sprintf(out_data, "the %d tims,0987654321\n", i);
        ret = write(fd, out_data, strlen(out_data));
        if (ret > 0) {
            printf("server write %d bytes data to client\n", ret);
        }
    }
    close(fd);
    return NULL;
}

int main(int argc, char **argv)
{
    int socketfd = -1;
    struct sockaddr_in addr;
    struct sockaddr addr_client;
    socklen_t addr_len = sizeof(addr_client);   /* must be initialized before accept() */
    int ret;
    int listenfd = -1;
    pthread_t tid;

    socketfd = socket(AF_INET, SOCK_STREAM, 0);
    if (socketfd < 0) {
        fprintf(stderr, "socket error:%s \n", strerror(errno));
        return -1;
    }

    addr.sin_family = AF_INET;
    addr.sin_port = htons(8888);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);

    ret = bind(socketfd, (struct sockaddr *)&addr, sizeof(addr));
    if (ret < 0) {
        fprintf(stderr, "bind error:%s\n", strerror(errno));
        return -1;
    }
    listen(socketfd, 0);

    while (1) {
        listenfd = accept(socketfd, &addr_client, &addr_len);
        if (listenfd < 0) {
            fprintf(stderr, "accept error:%s\n", strerror(errno));
            return -1;
        }
        printf("listenfd=%d\n", listenfd);
        if (pthread_create(&tid, NULL, recive_data, (void *)&listenfd) < 0) {
            fprintf(stderr, "pthread_create error:%s\n", strerror(errno));
            return -1;
        }
    }
    close(socketfd);
    return 0;
}
```
The output of a run:
```
shishaowei@T330A:~/test/IPC/socket/tcp$ ./tcp_server &
[1] 8567
shishaowei@T330A:~/test/IPC/socket/tcp$ ./tcp_client
listenfd=4
server buf:1234567890
server write 22 bytes data to client
ret=22,buf:the 1 tims,0987654321
server buf:1234567890
server write 22 bytes data to client
ret=22,buf:the 2 tims,0987654321
server buf:1234567890
server write 22 bytes data to client
ret=22,buf:the 3 tims,0987654321
server buf:1234567890
server write 22 bytes data to client
ret=22,buf:the 4 tims,0987654321
server buf:1234567890
server write 22 bytes data to client
ret=22,buf:the 5 tims,0987654321
shutdown read
server buf:1234567890
server write 22 bytes data to client
ret=22,buf:the 6 tims,0987654321
server buf:1234567890
server write 22 bytes data to client
ret=22,buf:the 7 tims,0987654321
server buf:1234567890
server write 22 bytes data to client
ret=22,buf:the 8 tims,0987654321
server buf:1234567890
server write 22 bytes data to client
ret=22,buf:the 9 tims,0987654321
server buf:1234567890
server write 23 bytes data to client
client break
ret=0,recv fin
```
The IPC (network camera) field
I want to learn the technologies that go into building network cameras. Are there certifications in this area? I feel like I have been picking up a bit here and a bit there on my own. Could some kind soul help out with some guidance? Thank you.
Hive insert fails when running on YARN but works after enabling local mode; the error is:
```
hive> insert into test values('B',2);
Query ID = root_20191114105642_8cc05952-0497-4eff-893e-af6de8f05c6e
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
19/11/14 10:56:43 INFO client.RMProxy: Connecting to ResourceManager at cloudera/37.64.0.71:8032
19/11/14 10:56:43 INFO client.RMProxy: Connecting to ResourceManager at cloudera/37.64.0.71:8032
java.io.IOException: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request! Cannot allocate containers as requested resource is greater than maximum allowed allocation. Requested resource type=[memory-mb], Requested resource=<memory:15360, vCores:8>, maximum allowed allocation=<memory:6557, vCores:8>, please note that maximum allowed allocation is calculated by scheduler based on maximum resource of registered NodeManagers, which might be less than configured maximum allocation=<memory:6557, vCores:8>
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.throwInvalidResourceException(SchedulerUtils.java:478)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.checkResourceRequestAgainstAvailableResource(SchedulerUtils.java:374)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:302)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:280)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:522)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:377)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:318)
    at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:633)
    at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:267)
    at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:531)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
    at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:345)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:251)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:576)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:571)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:571)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:562)
    at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:444)
    at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:151)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2200)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1843)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1563)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1339)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1328)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:187)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:409)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:836)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:772)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:699)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:313)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:227)
Caused by: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request! Cannot allocate containers as requested resource is greater than maximum allowed allocation. Requested resource type=[memory-mb], Requested resource=<memory:15360, vCores:8>, maximum allowed allocation=<memory:6557, vCores:8>, please note that maximum allowed allocation is calculated by scheduler based on maximum resource of registered NodeManagers, which might be less than configured maximum allocation=<memory:6557, vCores:8>
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.throwInvalidResourceException(SchedulerUtils.java:478)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.checkResourceRequestAgainstAvailableResource(SchedulerUtils.java:374)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:302)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:280)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:522)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:377)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:318)
    at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:633)
    at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:267)
    at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:531)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
    at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateYarnException(RPCUtil.java:75)
    at org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:116)
    at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.submitApplication(ApplicationClientProtocolPBClientImpl.java:284)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy43.submitApplication(Unknown Source)
    at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:290)
    at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:297)
    at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:330)
    ... 35 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException): Invalid resource request! Cannot allocate containers as requested resource is greater than maximum allowed allocation. Requested resource type=[memory-mb], Requested resource=<memory:15360, vCores:8>, maximum allowed allocation=<memory:6557, vCores:8>, please note that maximum allowed allocation is calculated by scheduler based on maximum resource of registered NodeManagers, which might be less than configured maximum allocation=<memory:6557, vCores:8>
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.throwInvalidResourceException(SchedulerUtils.java:478)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.checkResourceRequestAgainstAvailableResource(SchedulerUtils.java:374)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:302)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:280)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:522)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:377)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:318)
    at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:633)
    at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:267)
    at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:531)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1499)
    at org.apache.hadoop.ipc.Client.call(Client.java:1445)
    at org.apache.hadoop.ipc.Client.call(Client.java:1355)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
    at com.sun.proxy.$Proxy42.submitApplication(Unknown Source)
    at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.submitApplication(ApplicationClientProtocolPBClientImpl.java:281)
    ... 48 more
Job Submission failed with exception 'java.io.IOException(org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request! Cannot allocate containers as requested resource is greater than maximum allowed allocation. Requested resource type=[memory-mb], Requested resource=<memory:15360, vCores:8>, maximum allowed allocation=<memory:6557, vCores:8>, please note that maximum allowed allocation is calculated by scheduler based on maximum resource of registered NodeManagers, which might be less than configured maximum allocation=<memory:6557, vCores:8>
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.throwInvalidResourceException(SchedulerUtils.java:478)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.checkResourceRequestAgainstAvailableResource(SchedulerUtils.java:374)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:302)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:280)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:522)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:377)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:318)
    at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:633)
    at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:267)
    at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:531)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request! Cannot allocate containers as requested resource is greater than maximum allowed allocation. Requested resource type=[memory-mb], Requested resource=<memory:15360, vCores:8>, maximum allowed allocation=<memory:6557, vCores:8>, please note that maximum allowed allocation is calculated by scheduler based on maximum resource of registered NodeManagers, which might be less than configured maximum allocation=<memory:6557, vCores:8>
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.throwInvalidResourceException(SchedulerUtils.java:478)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.checkResourceRequestAgainstAvailableResource(SchedulerUtils.java:374)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:302)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:280)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:522)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:377)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:318)
    at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:633)
    at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:267)
    at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:531)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
```
# The maximum memory available is only about 6 GB, yet the job insists on requesting 15 GB. How should this be handled?
# Any help is appreciated!!!
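The exception spells out the mismatch: the MapReduce task asks for a 15360 MB container while the scheduler's cap is 6557 MB. Either raise the cluster-side limits (yarn.scheduler.maximum-allocation-mb and yarn.nodemanager.resource.memory-mb) or shrink the per-container request from the Hive session. A sketch of the latter, with example values chosen to fit under the cap:

```
set mapreduce.map.memory.mb=4096;
set mapreduce.reduce.memory.mb=4096;
set yarn.app.mapreduce.am.resource.mb=2048;
set mapreduce.map.java.opts=-Xmx3276m;
set mapreduce.reduce.java.opts=-Xmx3276m;
```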
After adding Kerberos authentication to a Hadoop cluster, the namenode fails to start with an IPC authentication failure?
Problem description: the namenode fails connecting to the journalnodes, and zkfc fails connecting to the namenode, all with the same error.
namenode log:
2019-07-16 18:55:52,617 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hostname/ip:8485. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-07-16 18:55:52,616 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hostname/ip:8485. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-07-16 18:55:53,438 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 6001 ms (timeout=20000 ms) for a response for selectInputStreams. No responses yet.
2019-07-16 18:55:53,618 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hostname/ip:8485. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-07-16 18:55:53,618 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hostname/ip:8485. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-07-16 18:55:53,619 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hostname/ip:8485. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-07-16 18:55:54,439 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 7003 ms (timeout=20000 ms) for a response for selectInputStreams. No responses yet.
journalnode log:
2019-07-16 18:56:10,836 WARN SecurityLogger.org.apache.hadoop.ipc.Server: Auth failed for ip:port:null (GSS initiate failed) with true cause: (GSS initiate failed)
2019-07-16 18:56:11,939 WARN SecurityLogger.org.apache.hadoop.ipc.Server: Auth failed for ip:port:null (GSS initiate failed) with true cause: (GSS initiate failed)
2019-07-16 18:56:12,391 WARN SecurityLogger.org.apache.hadoop.ipc.Server: Auth failed for ip:port:null (GSS initiate failed) with true cause: (GSS initiate failed)
2019-07-16 18:56:13,341 WARN SecurityLogger.org.apache.hadoop.ipc.Server: Auth failed for ip:port:null (GSS initiate failed) with true cause: (GSS initiate failed)
2019-07-16 18:56:16,212 WARN SecurityLogger.org.apache.hadoop.ipc.Server: Auth failed for ip:port:null (GSS initiate failed) with true cause: (GSS initiate failed)
2019-07-16 18:56:17,871 WARN SecurityLogger.org.apache.hadoop.ipc.Server: Auth failed for ip:port:null (GSS initiate failed) with true cause: (GSS initiate failed)
2019-07-16 18:56:20,902 WARN SecurityLogger.org.apache.hadoop.ipc.Server: Auth failed for ip:port:null (GSS initiate failed) with true cause: (GSS initiate failed)
2019-07-16 18:56:21,081 WARN SecurityLogger.org.apache.hadoop.ipc.Server: Auth failed for ip:port:null (GSS initiate failed) with true cause: (GSS initiate failed)
I also checked the KDC log; the problem may be here:
Jul 16 17:03:50 hadoop01 krb5kdc[47](info): TGS_REQ (8 etypes {18 17 20 19 16 23 25 26}) 10.10.10.40: LOOKING_UP_SERVER: authtime 0, root/hadoop00@HADOOP.COM for host/hadoop01@HADOOP.COM, Server not found in Kerberos database
Jul 16 17:03:50 hadoop01 krb5kdc[47](info): TGS_REQ (8 etypes {18 17 20 19 16 23 25 26}) 10.10.10.40: LOOKING_UP_SERVER: authtime 0, root/hadoop00@HADOOP.COM for host/hadoop00@HADOOP.COM, Server not found in Kerberos database
Jul 16 17:03:52 hadoop01 krb5kdc[47](info): AS_REQ (3 etypes {17 16 23}) 10.10.10.40: ISSUE: authtime 1563267832, etypes {rep=17 tkt=18 ses=17}, root/hadoop00@HADOOP.COM for krbtgt/HADOOP.COM@HADOOP.COM
Jul 16 17:03:53 hadoop01 krb5kdc[47](info): TGS_REQ (3 etypes {17 16 23}) 10.10.10.40: ISSUE: authtime 1563267832, etypes {rep=17 tkt=18 ses=17}, root/hadoop00@HADOOP.COM for root/hadoop01@HADOOP.COM
Jul 16 17:03:54 hadoop01 krb5kdc[47](info): TGS_REQ (8 etypes {18 17 20 19 16 23 25 26}) 10.10.10.40: LOOKING_UP_SERVER: authtime 0, root/hadoop00@HADOOP.COM for host/hadoop10@HADOOP.COM, Server not found in Kerberos database
So I suspect the problem is here. Locally, kinit works for both root and the HTTP user. Normally it should be requesting HTTP/hadoop01@HADOOP.COM, not host/hadoop01@HADOOP.COM; I do not understand where host/ comes from. Could a Kerberos expert please advise?
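"Server not found in Kerberos database" means literally that: the clients are requesting service tickets for host/hadoop00@HADOOP.COM, host/hadoop01@HADOOP.COM, and so on, and no such principals exist in the KDC. One hedged workaround is simply to create the missing host/ principals and add them to the keytabs the daemons read; the cleaner fix is to track down which client setting makes it ask for host/ instead of the service principals your hdfs-site.xml defines. Example kadmin commands (realm and keytab path follow the question's values and are otherwise illustrative):

```
kadmin.local -q "addprinc -randkey host/hadoop00@HADOOP.COM"
kadmin.local -q "addprinc -randkey host/hadoop01@HADOOP.COM"
kadmin.local -q "ktadd -k /etc/security/keytabs/host.keytab host/hadoop01@HADOOP.COM"
```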
Errors while building Firefox for Windows, and I don't know how to fix them
The build environment is VS 2019 with the C++ desktop development and C++ game development workloads. The Windows SDK is version 10.0.18362.0. LLVM is the 32-bit build of 9.0. Rust is the latest version installed by rustup-init (with the msi installer from the web, the build failed complaining about python). The 64-bit build of nasm-2.14rc16 is installed. Firefox is the official firefox-69.0.3 source, and MozillaBuild is version 3.2. After roughly 20 minutes of compiling the build fails; the tail of the log:
```
5:59.18 In file included from e:/firefox-69.0.3/obj-x86_64-pc-mingw32/ipc/ipdl/UnifiedProtocols9.cpp:2:
5:59.18 In file included from e:/firefox-69.0.3/obj-x86_64-pc-mingw32/ipc/ipdl/PChildToParentStreamChild.cpp:7:
5:59.18 In file included from e:/firefox-69.0.3/obj-x86_64-pc-mingw32/ipc/ipdl/_ipdlheaders\mozilla/ipc/PChildToParentStreamChild.h:9:
5:59.18 In file included from e:/firefox-69.0.3/obj-x86_64-pc-mingw32/ipc/ipdl/_ipdlheaders\mozilla/ipc/PChildToParentStream.h:11:
5:59.18 In file included from e:/firefox-69.0.3/obj-x86_64-pc-mingw32/dist/include/ipc/IPCMessageUtils.h:36:
5:59.18 In file included from e:/firefox-69.0.3/obj-x86_64-pc-mingw32/dist/include/nsIWidget.h:21:
5:59.18 In file included from e:/firefox-69.0.3/obj-x86_64-pc-mingw32/dist/include\nsStyleConsts.h:17:
5:59.19 In file included from e:/firefox-69.0.3/obj-x86_64-pc-mingw32/dist/include/mozilla/ServoStyleConsts.h:9929:
5:59.19 e:/firefox-69.0.3/obj-x86_64-pc-mingw32/dist/include/mozilla/ServoStyleConstsInlines.h(117,3): warning: ignoring return value of function declared with 'nodiscard' attribute [-Wunused-result]
5:59.19   count.load(std::memory_order_acquire);
5:59.19   ^~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~
5:59.19 e:/firefox-69.0.3/obj-x86_64-pc-mingw32/dist/include/mozilla/ServoStyleConstsInlines.h(199,22): note: in instantiation of member function 'mozilla::StyleArcInner<mozilla::StyleTemplateAreas>::DecrementRef' requested here
5:59.19   if (MOZ_LIKELY(!p->DecrementRef())) {
5:59.20   ^
5:59.20 e:/firefox-69.0.3/obj-x86_64-pc-mingw32/dist/include/mozilla/ServoStyleConstsInlines.h(224,3): note: in instantiation of member function 'mozilla::StyleArc<mozilla::StyleTemplateAreas>::Release' requested here
5:59.21   Release();
5:59.21   ^
5:59.21 e:/firefox-69.0.3/obj-x86_64-pc-mingw32/dist/include/mozilla/ServoStyleConsts.h(8098,10): note: in instantiation of member function 'mozilla::StyleArc<mozilla::StyleTemplateAreas>::~StyleArc' requested here
5:59.21   struct StyleAreas_Body {
5:59.21   ^
6:00.88 2 warnings generated.
6:02.54 2 warnings generated.
6:03.26 2 warnings generated.
6:05.10 2 warnings generated.
6:10.54 2 warnings generated.
6:10.61 mozmake.EXE[2]: *** [e:/firefox-69.0.3/config/recurse.mk:34: compile] Error 2
6:10.61 mozmake.EXE[1]: *** [e:/firefox-69.0.3/config/rules.mk:391: default] Error 2
6:10.61 mozmake.EXE: *** [client.mk:125: build] Error 2
6:10.63 147 compiler warnings present.
```
A large number of the warnings repeat:
```
e:/firefox-69.0.3/obj-x86_64-pc-mingw32/dist/include/mozilla/ServoStyleConstsInlines.h(117,3): warning: ignoring return value of function declared with 'nodiscard' attribute [-Wunused-result]
5:59.19   count.load(std::memory_order_acquire);
5:59.19   ^~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~
5:59.19 e:/firefox-69.0.3/obj-x86_64-pc-mingw32/dist/include/mozilla/ServoStyleConstsInlines.h(199,22): note: in instantiation of member function 'mozilla::StyleArcInner<mozilla::StyleTemplateAreas>::DecrementRef' requested here
5:59.19   if (MOZ_LIKELY(!p->DecrementRef())) {
5:59.20   ^
5:59.20 e:/firefox-69.0.3/obj-x86_64-pc-mingw32/dist/include/mozilla/ServoStyleConstsInlines.h(224,3): note: in instantiation of member function 'mozilla::StyleArc<mozilla::StyleTemplateAreas>::Release' requested here
5:59.21   Release();
5:59.21   ^
5:59.21 e:/firefox-69.0.3/obj-x86_64-pc-mingw32/dist/include/mozilla/ServoStyleConsts.h(8098,10): note: in instantiation of member function 'mozilla::StyleArc<mozilla::StyleTemplateAreas>::~StyleArc' requested here
5:59.21   struct StyleAreas_Body {
5:59.21   ^
6:00.88 2 warnings generated.
6:02.54 2 warnings generated.
```
How should this kind of problem be handled?
Learning C/C++ and can't find material: implementing the full TCP three-way handshake, and how weak-password detection is done in C
I have just finished studying the synflood source code, but I have a question: does synflood stop after only the second step of the handshake? Does anyone have source code showing how the complete TCP three-way handshake is carried out? Also, I am curious how things like remote host scanning and weak-password detection (against ipc$ and the like, injection included) are implemented. I am a beginner and hope to get pointers from the experts here on CSDN.
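On the handshake question: a SYN flooder crafts raw SYN packets and deliberately never sends the final ACK, so it stops after the server's SYN-ACK (step two). A normal application never codes the three steps itself; a single connect() call has the kernel send SYN, wait for the peer's SYN-ACK, and answer with ACK before it returns. A minimal sketch (the address is just an example host):

```
/* connect() performs the whole three-way handshake in the kernel:
 * it sends SYN, waits for the peer's SYN-ACK, and answers with ACK
 * before returning 0.  Address and port below are examples. */
#include <arpa/inet.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in peer = {
        .sin_family = AF_INET,
        .sin_port   = htons(80),
    };
    inet_pton(AF_INET, "93.184.216.34", &peer.sin_addr);  /* example host */

    if (connect(fd, (struct sockaddr *)&peer, sizeof(peer)) == 0)
        printf("handshake completed: SYN -> SYN/ACK -> ACK\n");
    else
        perror("connect");
    close(fd);
    return 0;
}
```

To complete the handshake from hand-crafted packets, the way scanners do, you would need a raw socket to sniff the SYN-ACK and reply with a matching ACK yourself.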
A question about ipc$ pipe sharing
How can two machines use ipc$ for LAN file transfer? What I know so far:
Command 1: net use \\<ip>\ipc$ "password" /user:"username"
Command 2: \\<ip>\d$
Preconditions: both sides have the firewall off, the Server service (LanmanServer) running, the administrative shares enabled (ipc$ plus d$, e$, f$), network discovery, file and printer sharing, and public folder sharing enabled, and password-protected sharing turned off. Both use the public network profile and have security software (360 and the like) turned off. Both computers have passwords, and the credentials are entered correctly.
Even so, it often happens that the connection succeeds but the drive share does not: command 1 succeeds while command 2 fails, prompting "Enter network password".
My question: what is the fix, and how should things be configured so that file transfer over the ipc$ pipe works properly? What should I watch out for? A sincere request, thanks! ![](http://ww3.sinaimg.cn/large/0060lm7Tgw1f8iwtsqqm3j30ew0c2dhd.jpg)
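The pattern "ipc$ connects but the d$ drive share prompts for a network password" is, on Windows Vista and later, very often UAC remote restrictions: when a local (non-domain) account logs on over the network, its administrator token is filtered, so admin shares like d$ are refused even with correct credentials. The widely used registry override is below; it does weaken remote UAC protection, so apply it deliberately:

```
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f
```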
net use returns error 1326: the user name or password is incorrect
My server runs Windows Server 2003. When I use net use on my own machine to establish a non-null IPC connection to the server, I always get net use error 1326, the so-called "user name or password is incorrect", even though I am completely certain that neither the user name nor the password was mistyped. How is this problem best solved?
net use \\ip\ipc "password" /user:"username"
The answers I found online say to go to Control Panel > Folder Options > View and uncheck "Use simple file sharing". But Windows Server 2003 simply has no "simple file sharing" entry at all. Quite stuck....
Java cannot connect to an Oracle database using the hostname
The hostname command in my virtual machine prints anywhere, and the listener.ora in the VM's Linux system reads:
```
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = anywhere)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )
```
My database URL is jdbc:oracle:thin:@anywhere:1521:anywhere, and the connection fails. If I change the URL to jdbc:oracle:thin:@192.168.0.101:1521:anywhere it connects. In other words, the IP works but the hostname does not. What do I need to configure?
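listener.ora only controls what the server listens on; the JDBC thin client resolves the hostname on its own machine, so anywhere must be resolvable where the Java program runs. The usual minimal fix is a hosts-file entry on the client (the IP is taken from the question; the file path depends on the client OS):

```
# Linux:   /etc/hosts
# Windows: C:\Windows\System32\drivers\etc\hosts
192.168.0.101   anywhere
```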
How is ZeroMQ used for inter-process communication?
I looked at ZeroMQ today, wrote a few simple examples with it, and read its introduction; it supports many transports: TCP, UDP, IPC. The examples online all cover TCP. If I want to use it for inter-process communication, how is that done?
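Only the endpoint string changes: for same-machine IPC, bind and connect to an ipc:// endpoint (backed by a Unix domain socket path) instead of tcp://host:port. A minimal REQ/REP sketch (the endpoint path is arbitrary):

```
/* Minimal ZeroMQ REQ/REP pair over the ipc:// transport: the endpoint
 * is a filesystem path (Unix domain socket), so it only works between
 * processes on the same machine.  Build with: gcc demo.c -lzmq */
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <zmq.h>

int main(int argc, char **argv) {
    void *ctx = zmq_ctx_new();

    if (argc > 1 && strcmp(argv[1], "server") == 0) {
        void *rep = zmq_socket(ctx, ZMQ_REP);
        assert(zmq_bind(rep, "ipc:///tmp/demo.ipc") == 0);  /* path = rendezvous point */
        char buf[64] = {0};
        zmq_recv(rep, buf, sizeof(buf) - 1, 0);
        printf("server got: %s\n", buf);
        zmq_send(rep, "world", 5, 0);
        zmq_close(rep);
    } else {
        void *req = zmq_socket(ctx, ZMQ_REQ);
        assert(zmq_connect(req, "ipc:///tmp/demo.ipc") == 0);
        zmq_send(req, "hello", 5, 0);
        char buf[64] = {0};
        zmq_recv(req, buf, sizeof(buf) - 1, 0);
        printf("client got: %s\n", buf);
        zmq_close(req);
    }
    zmq_ctx_destroy(ctx);
    return 0;
}
```

Run one instance with the argument server and another without; everything else in the ZeroMQ API stays the same as in the TCP examples.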
Having the IPC fetch recorded video directly from the NVR's hard disk
Our company needs a feature where the IPC fetches recorded video directly from the NVR's hard disk. Could some expert point me in the right direction, ideally with specifics?
PL/SQL Developer connection to Oracle reports ORA-12170
After installing Oracle and PL/SQL Developer on my machine, I can connect to the database from cmd, and the test of the local net service name configuration in Net Configuration Assistant passes. But logging in from PL/SQL Developer keeps reporting ORA-12170, and I have tried many of the fixes found online without solving it. tnsping <ip> and ping <ip> both work. Below is my lsnrctl status output: ![screenshot](https://img-ask.csdn.net/upload/201711/10/1510278840_174367.png) My tnsnames.ora is:
```
ORACLR_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
    (CONNECT_DATA =
      (SID = CLRExtProc)
      (PRESENTATION = RO)
    )
  )

LISTENER_ORCL =
  (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.3.14)(PORT = 1521))

ORCL =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.3.14)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
    )
  )
```
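ORA-12170 is a TNS connect timeout. Given that tnsping and the Net Configuration Assistant test pass, two hedged things to check: whether PL/SQL Developer (commonly a 32-bit program, which needs a 32-bit Oracle client) is reading a different Oracle home's tnsnames.ora than the one shown, and whether Windows Firewall is filtering TCP 1521 on the 192.168.3.14 interface. For the latter, a standard Windows firewall rule looks like this (offered as an example, not a confirmed diagnosis):

```
netsh advfirewall firewall add rule name="Oracle listener 1521" dir=in action=allow protocol=TCP localport=1521
```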
Ideas for testing an IPC that reboots abnormally at random
I am currently responsible for testing an irregular abnormal-reboot bug on a DM368-based IPC. I have tested the software inside the 368 board (the kernel and the applications: avserver, spiua, boa) as well as the hardware (the reset circuit, the watchdog, the power supply); during all of these tests the device still reboots abnormally. Unfortunately the board's log store contains no useful information about the abnormal reboots. I hope someone who has solved a similar bug can suggest an approach. The log at the time of a reboot is shown here: ![screenshot](https://img-ask.csdn.net/upload/201702/09/1486608812_94216.png)