Multithreaded socket programming: segmentation fault in a thread, recv returns -1

[Screenshot 1: debugger variable pane showing that the socket is valid and recv succeeds in the first thread]

As the screenshot shows, the variable pane on the right confirms that the socket is fine in the first thread and recv receives the message successfully. But once the socket is handed off to the next thread (the message receive loop), the call fails with "An operation was attempted on something that is not a socket" (WSAENOTSOCK, error 10038). If execution continues, the error in the second screenshot appears. [Screenshot 2: segmentation-fault error dialog]

That is, a segmentation fault. I don't know how to fix this; hoping someone knowledgeable can help me out. Thanks.
The two thread functions are below:

// Server thread function
DWORD WINAPI ServerThread(LPVOID lpParam)
{
  int AddrSize;
  char buff[1024] = {0};        // buffer for the received message
  TH tp;                        // parameters passed to the worker thread
  struct sockaddr_in their_addr;
  EnterCriticalSection(&gs);
  AddrSize = sizeof(struct sockaddr_in);
  tp.hwnd = (HWND)lpParam;
  tp.socket = accept(sockfd, (struct sockaddr*)&their_addr, &AddrSize);
  if(tp.socket != INVALID_SOCKET)
  {
    recv(tp.socket, buff, 20, 0);
    MessageRecv(buff, tp.socket, tp.hwnd);
    cThread = (HANDLE)CreateThread(NULL, 0, ClientThread, &tp, 0, NULL);
    if (cThread == NULL)
    {
      GetLastError();
      ShowError();
    }
  }
  LeaveCriticalSection(&gs);
}

// Communication thread function
DWORD WINAPI ClientThread(LPVOID lpParam)
{
  char buff[1024] = {0};
  TH *tp = (TH *)lpParam;
  EnterCriticalSection(&gs);
  while(1)
  {
    if(recv(tp->socket, buff, 1024, 0) != SOCKET_ERROR)
    {
      MessageRecv(buff, tp->socket, tp->hwnd);
    }
    else
    {
      GetLastError();
      ShowError();
    }
  }
  LeaveCriticalSection(&gs);
}

3 Answers

The culprit is TH tp; (the "parameters passed to the worker thread" variable). It is a local variable, and you pass a pointer to it into the new thread; once the function that declared it returns, tp is destroyed, so the thread function is left dereferencing a dangling pointer and reports the error.

lhg1714538808: Thanks, problem solved. The socket makes it into the thread now; I added a line, SOCKET sock = tp->socket;, and inspected sock to confirm the socket had been passed in.
Replied over a year ago
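For reference, a minimal self-contained sketch of the fix this answer describes. It is an illustration, not the asker's final code: the TH layout is copied from the question, and on_accept is a hypothetical helper standing in for the accept branch of ServerThread. The key point is that the parameter block is allocated on the heap, so it survives after the accepting function returns, and the worker thread owns and frees it.

#include <winsock2.h>
#include <windows.h>
#include <stdlib.h>

typedef struct {
  HWND   hwnd;
  SOCKET socket;
} TH;

DWORD WINAPI ClientThread(LPVOID lpParam)
{
  TH *tp = (TH *)lpParam;   // heap copy: still valid after the accept code returns
  char buff[1024];
  int n;
  while ((n = recv(tp->socket, buff, sizeof(buff), 0)) > 0)
  {
    // process the n bytes in buff ...
  }
  closesocket(tp->socket);  // recv returned 0 (peer closed) or SOCKET_ERROR
  free(tp);                 // the worker owns the parameter block
  return 0;
}

// Hypothetical helper: what the accept branch would do instead of passing &tp.
void on_accept(SOCKET client, HWND hwnd)
{
  TH *tp = (TH *)malloc(sizeof(TH));
  if (tp == NULL) { closesocket(client); return; }
  tp->hwnd = hwnd;
  tp->socket = client;
  HANDLE hThread = CreateThread(NULL, 0, ClientThread, tp, 0, NULL);
  if (hThread == NULL) { free(tp); closesocket(client); }
  else CloseHandle(hThread);  // don't leak the thread handle
}

Note the recv loop here also treats 0 (connection closed by the peer) as an exit condition; the original while(1) never terminates, even after the client disconnects.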

Multithreaded socket request handling needs locking around shared data; otherwise threads operating on the same block of memory at the same time will corrupt it, and the program can crash.

lhg1714538808: Thanks.
Replied over a year ago
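On this locking point, note that the question's code holds the critical section gs across the blocking accept and recv calls. Holding a lock across blocking socket I/O serializes every client and defeats the purpose; the lock should cover only the shared memory itself. A minimal sketch of that scoping, where g_clients and add_client are hypothetical names not taken from the question:

#include <winsock2.h>

SOCKET g_clients[FD_SETSIZE];
int    g_client_count = 0;
CRITICAL_SECTION gs;   // initialized once with InitializeCriticalSection(&gs)

void add_client(SOCKET s)
{
  EnterCriticalSection(&gs);   // short critical section: memory access only
  if (g_client_count < FD_SETSIZE)
    g_clients[g_client_count++] = s;
  LeaveCriticalSection(&gs);   // released before any accept/recv/send
}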

Using a semaphore for the protection would be somewhat better.
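A sketch of the semaphore variant, assuming a plain Win32 semaphore (nothing beyond the suggestion itself comes from this answer). A binary semaphore, with initial and maximum count 1, behaves like a lock; raising the initial count would additionally cap how many handler threads can enter the guarded region at once:

#include <windows.h>

HANDLE g_sem;   // binary semaphore used as a lock

void init_lock(void) { g_sem = CreateSemaphoreW(NULL, 1, 1, NULL); }
void lock(void)      { WaitForSingleObject(g_sem, INFINITE); }
void unlock(void)    { ReleaseSemaphore(g_sem, 1, NULL); }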
