douyue7408
2018-02-10 20:29
44 views
Accepted

How does Go compute recursive Fibonacci so fast?

This is not a correct version of it; I am just playing around with Go, but I was shocked at how fast Go calculated the 42nd (actually the 43rd) number in the Fibonacci sequence.

Can someone please explain how it calculates this so fast? I tried comparing it to Python (I know it's slow compared to other languages), but Python took more than a minute and I had to break out of the recursion.

package main

import "fmt"

// fib computes Fibonacci numbers by naive recursion; because both base
// cases return 1, fib(n) is actually the (n+1)th number in the sequence.
func fib(a uint) uint {
    if a <= 1 {
        return 1
    }
    return fib(a-1) + fib(a-2)
}

func main() {
    fmt.Println(fib(42))
}


[ `go run Hello.go` | done: 2.316821835s ]
433494437
1 answer

  • doumo0206 2018-02-10 21:22
    Accepted answer

    Go's compiler isn't as smart or mature as C's (at least not yet), but Go is still much closer to C than to Python in time performance (space performance is a separate matter, and not what you asked about). Simply being a compiled language rather than an interpreted one gives it a major leg up over Python in time performance (it is generally still faster than PyPy too, just by a smaller margin).
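
    As a side note, the 2.3 s you measured is for `go run`, which compiles the program and then runs it, so part of that figure is compilation overhead. A minimal sketch (reusing your fib unchanged) that times only the computation itself looks like this:

    package main

    import (
        "fmt"
        "time"
    )

    // fib is the same naive recursive version from the question.
    func fib(a uint) uint {
        if a <= 1 {
            return 1
        }
        return fib(a-1) + fib(a-2)
    }

    func main() {
        start := time.Now()
        result := fib(42)
        // Prints the result and how long just the recursion took,
        // excluding compile time.
        fmt.Println(result, "computed in", time.Since(start))
    }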

    Why compiled languages generally offer better time performance than interpreted languages has been covered thoroughly elsewhere; you can research the question on Stack Overflow and around the internet. For example, here's the TL;DR from one Stack Overflow answer to that question:

    Native programs run using instructions written for the processor they run on.

    And here's the TL;DR in another answer:

    Interpreted languages are slower because their method, object and global variable space model is dynamic

    You can also find plenty of benchmark case studies and results comparing implementations in different languages if you look for them.
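
    If you want to try such a benchmark yourself, Go's standard testing package is enough. Here is a rough sketch (the file and function names are just placeholders, and it assumes fib lives in the same package), which you would run with `go test -bench=.`:

    // fib_test.go
    package main

    import "testing"

    // BenchmarkFib42 repeatedly runs the naive recursive fib(42) and
    // lets the testing framework report the average time per call.
    func BenchmarkFib42(b *testing.B) {
        for i := 0; i < b.N; i++ {
            fib(42)
        }
    }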

    Performance improvements to the Go compiler and toolchain are also made frequently; you can read about them in the release notes (and elsewhere), such as this excerpt from the Go 1.8 notes:

    The new back end, based on static single assignment form (SSA), generates more compact, more efficient code and provides a better platform for optimizations such as bounds check elimination. The new back end reduces the CPU time required by our benchmark programs by 20-30% on 32-bit ARM systems.
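
    To make the "bounds check elimination" part a bit more concrete, here is a small sketch of the idea: when the compiler can prove an index is in range, it drops the runtime check. One common pattern (used in the standard library as well) is an early bounds hint:

    package main

    import "fmt"

    // sum4 reads the first four elements of s. The `_ = s[3]` line
    // performs one bounds check up front; after it, the compiler can
    // prove s[0] through s[3] are in range and omit their checks.
    func sum4(s []int) int {
        _ = s[3] // bounds check hint to the compiler
        return s[0] + s[1] + s[2] + s[3]
    }

    func main() {
        fmt.Println(sum4([]int{1, 2, 3, 4}))
    }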

