dreamer1231
Asked 2019-08-30 10:20 · 44 views

Why does a single goroutine require multiple clone system calls?

I created a small sample program to observe the system calls made for a goroutine.

package main

func print() {
}

func main() {
    go print()
}

strace output for the Go program:

clone(child_stack=0xc000044000, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM) = 27010
clone(child_stack=0xc000046000, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM) = 27011
clone(child_stack=0xc000040000, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM) = 27012
futex(0x4c24a8, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0xc000034848, FUTEX_WAKE_PRIVATE, 1) = 1
exit_group(0)                           = ?

It is observed that the clone system call is made three times for a single goroutine, each with the small stack that Go advertises. Can you please explain why clone is called three times for a single goroutine?

Similarly, creating a single pthread results in only one clone system call, but with a much larger stack:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>

void *myThreadFun(void *vargp)
{
        return NULL;
}

int main()
{
        pthread_t thread_id;
        pthread_create(&thread_id, NULL, myThreadFun, NULL);
        pthread_join(thread_id, NULL);
        exit(0);
}

strace output for the pthread program:

clone(child_stack=0x7fb49d960ff0, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7fb49d9619d0, tls=0x7fb49d961700, child_tidptr=0x7fb49d9619d0) = 27370
futex(0x7fb49d9619d0, FUTEX_WAIT, 27370, NULL) = -1 EAGAIN (Resource temporarily unavailable)
exit_group(0) = ?

Why are multiple clone system calls made for a single goroutine? The program creates only one goroutine, just like the single pthread in the C program. What are the other two clone calls for?


1 answer

    Accepted answer by dqstti8945, 2019-08-30 18:54

    Running this no-op program:

    package main
    
    func main() {
    }
    

    and tracing the clone calls shows the same three clone calls:

    $ go build nop.go
    $ strace -e trace=clone ./nop
    clone(child_stack=0xc000060000, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM) = 12602
    clone(child_stack=0xc000062000, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM) = 12603
    clone(child_stack=0xc00005c000, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM) = 12605
    +++ exited with 0 +++
    

    So what your trace actually shows is that Go can create a goroutine without any additional clone calls:

    $ cat oneproc.go
    package main
    
    func dummy() {
    }
    
    func main() {
        go dummy()
    }
    $ go build oneproc.go
    $ strace -e trace=clone ./oneproc
    clone(child_stack=0xc000060000, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM) = 13090
    clone(child_stack=0xc000062000, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM) = 13091
    clone(child_stack=0xc00005c000, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM) = 13092
    +++ exited with 0 +++
    

    (which is not really surprising—Goroutines are not threads).
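
    That goroutines are decoupled from OS threads is easy to see from inside a program. The sketch below (my own illustration, not part of the original answer) launches ten thousand goroutines, vastly more than the handful of OS threads the traces above show:

    ```go
    package main

    import (
    	"fmt"
    	"runtime"
    )

    func main() {
    	block := make(chan struct{})
    	for i := 0; i < 10000; i++ {
    		go func() {
    			<-block // park each goroutine; no dedicated OS thread is consumed
    		}()
    	}
    	// NumGoroutine counts live user goroutines, including main itself.
    	fmt.Println("goroutines:", runtime.NumGoroutine()) // prints "goroutines: 10001"
    	close(block)
    }
    ```

    All ten thousand goroutines sit parked on a channel; tracing this program with strace still shows only the same few clone calls.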

    The Go runtime (Go 1.11/1.12-ish)

    You asked for additional details in comments. There is a design document for the current system (which no doubt will become out of date if it is not already), and of course, there is the Go runtime source itself.

    There is a pretty informative (and large) comment at the top of proc.go that talks about how goroutines ("G"s) are mapped into worker threads ("M"s) that have processor resources ("P"). This is only indirectly relevant to why there are initially three OS clone calls (resulting in 4 threads total), but it is all important. Note that additional OS-level threads can and will be created later if and when it appears to be useful, especially if and when an M blocks in a system call.
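
    Those three quantities can be inspected from a running program. This is a rough sketch of my own (not from the original answer); it relies on the fact that the record count of the runtime/pprof threadcreate profile equals the number of OS threads (Ms) the process has created so far:

    ```go
    package main

    import (
    	"fmt"
    	"runtime"
    	"runtime/pprof"
    )

    func main() {
    	// P: processor resources available to the scheduler.
    	// GOMAXPROCS(0) reports the current value without changing it.
    	fmt.Println("Ps (GOMAXPROCS):", runtime.GOMAXPROCS(0))
    	// M: one threadcreate profile record per OS thread created.
    	fmt.Println("Ms (threads created):", pprof.Lookup("threadcreate").Count())
    	// G: live user goroutines; just main here, so this prints 1.
    	fmt.Println("Gs (goroutines):", runtime.NumGoroutine())
    }
    ```

    The M count varies by machine and Go version, but on Linux it will include the threads behind the initial clone calls shown in the traces above.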

    The actual clone system calls happen through newosproc and newosproc0 in os_linux.go. Other, non-Linux OSes have their own implementations. If you search for calls to newosproc you'll find just one, in proc.go, in function newm1. That is called from two places in proc.go: newm and templateThread. The templateThread is a special helper that may never be used and (I believe) is not part of the three initial clones, so we can ignore it and look only at calls to newm. There are six of these, all in proc.go; the relevant ones are:

    • main calls systemstack(func() { newm(sysmon, nil) }). sysmon is also in proc.go; see it for what it does, which is partly to trigger garbage collection as needed, and partly to keep the rest of the scheduler going.

    • startTheWorldWithSema, which lets the runtime system start up, calls newm(nil, p) for each P. There is always at least one P, so this could account for a second clone. However, there is an initial m0 object, so it is not clear whether this path actually produces one of the three initial clones.

    • In sigqueue.go, signal_enable calls sigenable (in signal_unix.go), which, based on values in sigtable (from sigtab_linux_generic.go) that are definitely true, winds up calling ensureSigM (also in signal_unix.go). ensureSigM calls LockOSThread, which ensures that another M is created. (The go statement in the closure within ensureSigM creates the G that is bound to this new locked-to-OS-thread M.) Since these calls are fired from init functions, I think they happen before startTheWorldWithSema, which would then create the extra M in the loop noted above. They might happen after starting the world, but in that case it is still a matter of getting the M created before your main is entered.

    All of this definitely accounts for two of the threads: one to run sysmon and one to handle signals. It may or may not account for the third. This is all based on reading the code rather than running and testing it, so it may contain errors.
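
    The point that extra OS-level threads are created on demand can also be checked empirically. The sketch below (my own, not part of the original answer) parks a few goroutines while they are locked to their OS threads; since a locked thread cannot run any other goroutine, the runtime has to clone replacement threads for the remaining work:

    ```go
    package main

    import (
    	"fmt"
    	"runtime"
    	"runtime/pprof"
    )

    func main() {
    	threads := pprof.Lookup("threadcreate")
    	before := threads.Count()

    	ready := make(chan struct{})
    	for i := 0; i < 4; i++ {
    		go func() {
    			// Bind this goroutine permanently to its OS thread, then park.
    			// The locked thread can no longer run any other goroutine, so
    			// the scheduler must find (or clone) fresh threads for the rest.
    			runtime.LockOSThread()
    			ready <- struct{}{}
    			select {}
    		}()
    	}
    	for i := 0; i < 4; i++ {
    		<-ready
    	}
    	fmt.Println("extra threads created:", threads.Count()-before > 0) // prints "extra threads created: true"
    }
    ```

    Running this under strace -e trace=clone shows additional clone calls beyond the initial three, confirming that the runtime grows its thread pool only when goroutines actually demand dedicated or replacement threads.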

