TCP load balancing and reusing TCP long-lived connections

Question:
1. I have many terminal devices (not mobile phones) that connect over TCP to a load-balancing server and are assigned to a backend server (the connection is made on a port; besides running a Mina TCP long-connection service, these backend servers also provide HTTP services). Do these terminal devices end up holding a direct TCP connection to the backend server, rather than to the load balancer?
2. I want to send a message from a mobile app to one of the terminal devices above (say, device A). The app reaches the backend servers through HTTP load balancing. The question is: how does my app find the backend server that holds the TCP connection to device A, and then send the message to device A through that server?

3 answers

Yes, the load balancer forwards the connection for you, so you end up connected to the final backend server.
Have the mobile app send the message to a server, and let that server forward it to the corresponding device.
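The question mentions a Mina-based TCP service, so "the server forwards it to the device" usually means each backend server keeps a map from device ID to the device's live Mina session. A minimal sketch, assuming the device announces its ID in its first text message; the `REGISTER` convention and handler name are illustrative, not from the original post:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.mina.core.service.IoHandlerAdapter;
import org.apache.mina.core.session.IoSession;

// Tracks which device is attached to which live TCP session on THIS server,
// so a message received over HTTP can be written to the right connection.
public class DeviceSessionHandler extends IoHandlerAdapter {

    // deviceId -> live session held by this backend server
    private static final Map<String, IoSession> SESSIONS = new ConcurrentHashMap<>();

    @Override
    public void messageReceived(IoSession session, Object message) {
        String text = message.toString();
        // Assumed registration convention: the device's first message is "REGISTER <deviceId>"
        if (text.startsWith("REGISTER ")) {
            String deviceId = text.substring("REGISTER ".length()).trim();
            session.setAttribute("deviceId", deviceId);
            SESSIONS.put(deviceId, session);
        }
        // ... handle normal device traffic here ...
    }

    @Override
    public void sessionClosed(IoSession session) {
        Object deviceId = session.getAttribute("deviceId");
        if (deviceId != null) {
            SESSIONS.remove(deviceId.toString());
        }
    }

    // Called by the HTTP side of this backend server to push a command to a device it holds.
    public static boolean pushToDevice(String deviceId, String command) {
        IoSession session = SESSIONS.get(deviceId);
        if (session == null || !session.isConnected()) {
            return false; // this server does not hold the device's connection
        }
        session.write(command);
        return true;
    }
}
```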

Yes, the terminal devices can hold a direct TCP connection to the backend server, as long as the backend server itself imposes no restrictions.
How does the app find the backend server? By its IP address. As long as the network is reachable, forwarding the message to the terminal device through that server is not a problem.
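"By IP address" implies that some component must record which backend server currently holds each device's connection, so that whichever server handles the app's HTTP request can look it up. One common approach is a shared registry keyed by device ID. A sketch using Redis via Jedis purely as an illustration (the original post does not mention Redis; the key name and advertised address are assumptions):

```java
import redis.clients.jedis.Jedis;

// Shared registry: maps deviceId -> "host:port" of the backend server that
// currently holds the device's TCP long connection.
public class DeviceRegistry {

    private final String redisHost;
    private final String selfAddress; // this backend's internally reachable address, e.g. "10.0.0.12:8080"

    public DeviceRegistry(String redisHost, String selfAddress) {
        this.redisHost = redisHost;
        this.selfAddress = selfAddress;
    }

    // Call when a device registers on this server (e.g. from DeviceSessionHandler).
    public void register(String deviceId) {
        try (Jedis jedis = new Jedis(redisHost, 6379)) {
            jedis.hset("device:owners", deviceId, selfAddress);
        }
    }

    // Call when the device's session closes.
    public void unregister(String deviceId) {
        try (Jedis jedis = new Jedis(redisHost, 6379)) {
            jedis.hdel("device:owners", deviceId);
        }
    }

    // Any backend server handling the app's HTTP request can ask who owns the device.
    public String lookupOwner(String deviceId) {
        try (Jedis jedis = new Jedis(redisHost, 6379)) {
            return jedis.hget("device:owners", deviceId);
        }
    }
}
```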

xman3638: In other words, my app wants to send a command to device A. Device A holds its TCP long connection to server ss, but HTTP load balancing routed the app's request to a different server, pp. How do I get the command to device A when pp does not hold the long connection to it?
(replied more than 4 years ago)

xman3638: Thanks for the hint, but there are several backend servers the app can land on, and each backend server holds a different set of TCP long connections (i.e., each backend server is connected to different devices).
(replied more than 4 years ago)

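To close the loop on the follow-up (the app's request lands on server pp while device A's connection lives on server ss): once pp can look up which server owns device A, for example via the registry sketched above, it only needs one internal hop to that server, which then writes to the device's session. A sketch using a plain internal HTTP call; the `/internal/push` endpoint and its parameters are invented for illustration, not part of the original system:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Runs on whichever backend server (pp) received the app's HTTP request.
public class CommandForwarder {

    private final DeviceRegistry registry;

    public CommandForwarder(DeviceRegistry registry) {
        this.registry = registry;
    }

    public boolean sendToDevice(String deviceId, String command) throws IOException {
        String owner = registry.lookupOwner(deviceId); // e.g. "10.0.0.13:8080" (server ss)
        if (owner == null) {
            return false; // device A is not online on any backend server
        }
        // Hypothetical internal endpoint exposed by every backend server; on the owning
        // server it ends up calling DeviceSessionHandler.pushToDevice(deviceId, command).
        URL url = new URL("http://" + owner + "/internal/push?deviceId=" + deviceId);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(command.getBytes(StandardCharsets.UTF_8));
        }
        return conn.getResponseCode() == 200;
    }
}
```

An alternative with the same effect is to publish the command on a message broker topic that every backend server subscribes to, and let only the server holding device A's connection act on it; that avoids server-to-server HTTP calls at the cost of running a broker.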
