What is the maximum recursion depth in Python, and how to increase it?


276

I have this tail recursive function here:

def fib(n, sum):
    if n < 1:
        return sum
    else:
        return fib(n-1, sum+n)

c = 998
print(fib(c, 0))

It works up to n=997, then it breaks and spits out a "maximum recursion depth exceeded in comparison" RuntimeError. Is this just a stack overflow? Is there a way to get around it?

2010-07-23 23:04
by quantumSoup
See also http://stackoverflow.com/questions/5061582/setting-stacksize-in-a-python-scrip - Thomas Ahle 2014-04-28 19:09
Memoization could speed up your function and increase its effective recursion depth by making previously calculated values terminate instead of increasing the stack size (see the sketch below) - Cyoce 2016-01-11 18:47
Just out of curiosity, why did you name this fib? Were you intending to use this as a helper to calculate the Fibonacci sequence? If so, what's your approach? - yoniLavi 2018-03-10 21:46
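
A rough sketch of the memoization idea from the comment above, using the classic two-branch Fibonacci recurrence rather than the running-sum function in the question (so treat it purely as an illustration): once earlier values are cached, a deep call terminates as soon as it reaches a cached argument instead of recursing all the way down.

import sys
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Warm the cache in steps well below the recursion limit; the step is kept
# conservative because each cached call may cost more than one stack frame.
step = sys.getrecursionlimit() // 4
for i in range(step, 5000, step):
    fib(i)

print(fib(4999))  # a cold call this deep would exceed the default limit of 1000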


324

It is a guard against a stack overflow, yes. Python (or rather, the CPython implementation) doesn't optimize tail recursion, and unbridled recursion causes stack overflows. You can change the recursion limit with sys.setrecursionlimit, but doing so is dangerous -- the standard limit is a little conservative, but Python stackframes can be quite big.

Python isn't a functional language and tail recursion is not a particularly efficient technique. Rewriting the algorithm iteratively, if possible, is generally a better idea.
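
For the function in the question (which just accumulates a running sum), a minimal iterative sketch, with the sum parameter renamed to total to avoid shadowing the built-in:

def fib(n, total=0):
    # Same result as the recursive version in the question:
    # total + n + (n-1) + ... + 1, but using a loop instead of the call stack.
    while n >= 1:
        total += n
        n -= 1
    return total

print(fib(998, 0))  # no recursion limit to worry about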

2010-07-23 23:08
by Thomas Wouters
From my experience, you need to increase the limit both in the sys and the resource modules: http://stackoverflow.com/a/16248113/20552 - Thomas Ahle 2014-04-28 19:10
as a tactic to convert it to an iterative version, a tail call optimization decorator could be used - jfs 2014-10-14 18:28
you can use http://svn.python.org/projects/python/trunk/Tools/scripts/find_recursionlimit.py to find out your OS upper limit - Ullullu 2015-09-16 13:55
For those interested in the source, the default recursion limit is set to 1000 https://hg.python.org/cpython/file/tip/Python/ceval.c#l691 and it can be changed using the API at https://hg.python.org/cpython/file/tip/Python/sysmodule.c#l643 which in turn sets the limit to the new value at https://hg.python.org/cpython/file/tip/Python/ceval.c#l70 - Pramod 2015-10-07 18:51
@ThomasAhle Why is it not enough to set sys.setrecursionlimit? What exactly would happen if I only set sys.setrecursionlimit, but not touch resource? - max 2016-10-08 07:07
Tail recursion is a perfectly efficient technique in a programming language optimized for it. For the right sort of problem, it may be considerably more expressive than an iterative implementation.

The answer probably means "in Python specifically", but that isn't what it says - Peter R 2017-03-06 15:04

I got the same error but have no idea how I could make this script iterative instead of recursive... any ideas? The script had only crawled 34/54 pages... - Anthony 2018-10-10 22:27
I guess I could first retrieve all the diff page links and then iterate through those - Anthony 2018-10-10 22:31


87

Looks like you just need to set a higher recursion limit:

import sys
sys.setrecursionlimit(1500)
2010-07-23 23:07
by David Young


38

It's to avoid a stack overflow. The Python interpreter limits the depth of recursion to help you avoid infinite recursion, which would result in a stack overflow. Try increasing the recursion limit (sys.setrecursionlimit) or re-writing your code without recursion.

From the Python documentation:

sys.getrecursionlimit()

Return the current value of the recursion limit, the maximum depth of the Python interpreter stack. This limit prevents infinite recursion from causing an overflow of the C stack and crashing Python. It can be set by setrecursionlimit().
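
For example, a quick way to inspect the limit and (carefully) raise it:

import sys

print(sys.getrecursionlimit())   # typically 1000 by default in CPython
sys.setrecursionlimit(2000)      # raise at your own risk; deep stacks still cost memory
print(sys.getrecursionlimit())   # now 2000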

2010-07-23 23:08
by Scharron
On my Anaconda x64, 3.5 Python on Windows, the default limit is 1000 - Guillaume Chevalier 2015-12-04 21:48
My windows default limit is also 1000, but it's 2000 on my mac's Anaconda 3 x64 3.6 installation. I think the main point is the default limit can be different in different installations. However, this is unbelievably the only answer until recently that actually tells you how to check the value. The highest-voted answer doesn't even mention sys.getrecursionlimit(), and that was half of the original question - rob3c 2017-11-03 18:12


17

Use a language that guarantees tail-call optimisation. Or use iteration. Alternatively, get cute with decorators.
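
One way to "get cute with decorators" is a trampoline: the decorated function signals its tail call with a marker exception, and the decorator re-invokes it with the new arguments in a loop. This is a sketch based on a well-known recipe; it assumes CPython's sys._getframe, and _TailCall and tail_call_optimized are illustrative names:

import sys

class _TailCall(Exception):
    """Internal marker carrying the arguments of the next tail call."""
    def __init__(self, call_args, call_kwargs):
        self.call_args = call_args
        self.call_kwargs = call_kwargs

def tail_call_optimized(func):
    def wrapper(*args, **kwargs):
        frame = sys._getframe()
        # If this wrapper was reached from inside another invocation of itself,
        # unwind back to that invocation instead of growing the stack.
        if frame.f_back and frame.f_back.f_back \
                and frame.f_back.f_back.f_code is frame.f_code:
            raise _TailCall(args, kwargs)
        while True:
            try:
                return func(*args, **kwargs)
            except _TailCall as call:
                args, kwargs = call.call_args, call.call_kwargs
    return wrapper

@tail_call_optimized
def fib(n, acc):
    if n < 1:
        return acc
    return fib(n - 1, acc + n)   # the tail call is intercepted by the decorator

print(fib(998, 0))   # no "maximum recursion depth exceeded"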

2010-07-23 23:12
by Marcelo Cantos
That's rather throwing the baby out with the bathwater - Russell Borogove 2010-07-24 00:09
@Russell: Only one of the options I offered advises this - Marcelo Cantos 2010-07-24 03:22


10

I realize this is an old question but for those reading, I would recommend against using recursion for problems such as this - lists are much faster and avoid recursion entirely. I would implement this as:

def fibonacci(n):
    f = [0, 1, 1]                      # seed values for fib(0), fib(1), fib(2)
    for i in xrange(3, n):             # xrange: this answer targets Python 2
        f.append(f[i-1] + f[i-2])      # each term is the sum of the previous two
    return 'The %.0fth fibonacci number is: %.0f' % (n, f[-1])

(Use n+1 in xrange if you start counting your fibonacci sequence from 0 instead of 1.)

2013-09-06 03:17
by Daniel
why use O(n) space when you can use O(1)? - Janus Troelsen 2014-03-12 09:11
Just in case the O(n) space comment was confusing: don't use a list. List will keep all the values when all you need is the nth value. A simple algorithm would be to keep the last two fibonacci numbers and add them until you get to the one you need. There are better algorithms too - Milimetric 2014-07-14 19:12
Also for those reading: xrange does not work in Python 3 - Mathime 2016-06-18 14:42
@Mathime: xrange is called simply range, in Python 3 - Eric O Lebigot 2016-08-03 09:50
@EOL I'm aware of this - Mathime 2016-08-03 09:51
@Mathime I was making things explicit for those reading these comments - Eric O Lebigot 2016-08-03 09:54
Regarding O(1): One can get the n-th Fibonacci number without computing all the previous ones by using matrix powers via diagonalization. See pages 2 and 3 in here - Martin Ueding 2016-09-29 19:27


10

Of course Fibonacci numbers can be computed in O(n) by applying the Binet formula:

from math import floor, sqrt

def fib(n):                                                     
    return int(floor(((1+sqrt(5))**n-(1-sqrt(5))**n)/(2**n*sqrt(5))+0.5))

As the commenters note it's not O(1) but O(n) because of 2**n. Also a difference is that you only get one value, while with recursion you get all values of Fibonacci(n) up to that value.
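
A rough way to see where the floating-point version starts to drift (per the precision caveat discussed in the comments below) is to compare it against an exact iterative computation; on typical IEEE-754 doubles the first mismatch shows up around n ≈ 70:

from math import floor, sqrt

def fib_binet(n):
    return int(floor(((1 + sqrt(5))**n - (1 - sqrt(5))**n) / (2**n * sqrt(5)) + 0.5))

def fib_exact(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

n = 1
while fib_binet(n) == fib_exact(n):
    n += 1
print(n)   # first index where rounding error wins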

2015-10-08 06:19
by rwst
There is no maximum size of a long in python - ppperry 2015-11-21 18:14
It's worth noting that this fails for larger n because of floating point imprecision - the difference between (1+sqrt(5))**n and (1+sqrt(5))**(n+1) becomes less than 1 ulp, so you start getting incorrect results - Mego 2016-07-07 14:02
There are actually no big integers in NumPy - Eric O Lebigot 2016-08-03 09:52
@Mego What? It's the difference between (1+sqrt(5))**n and ((1+sqrt(5))**n)+1 that becomes less than 1 ulp! (small typo) Also, {@}rwst That's not O(1)! Calculating 2**n takes at least O(n) time - user202729 2018-01-05 01:43
@user202729 That's not true, calculating 2**n is effectively O(log(n)) using Exponentiation by squaring - Sam 2018-02-18 18:02
@Sam log(n) multiplications, but each multiplication takes more than O(n log n) (often O(n^2) in practice) time as each number is O(n) digits long. You need at least O(n) memory to store the result - user202729 2018-02-19 01:40
@user202729 Any number is O(log(n)) digits long unless it's represented in unary. For instance "1" is 1 digit long in binary, and 1,000,000 is 10 digits long in binary - Sam 2018-02-25 01:22
@Sam But fib(n) is O(n) digits long - user202729 2018-02-25 03:37
Guys, in practice, on most machines, fast exponentiation is O(log n). Since the latency of integer multiplication is usually a fixed number, the cost of each multiplication is irrelevant. The problem with the Binet formula is that it doesn't work for even slightly larger n due to rounding errors in floating-point types. The rounding error becomes a big problem much earlier than the cost of the multiplications shows up and the exponentiation becomes O(n) - mentatkgs 2018-06-09 01:45


7

I had a similar issue with the error "Max recursion depth exceeded". I discovered the error was being triggered by a corrupt file in the directory I was looping over with os.walk. If you have trouble solving this issue and you are working with file paths, be sure to narrow it down, as it might be a corrupt file.

2014-10-14 18:14
by Tyler
The OP does give his code, and his experiment is reproducible at will. It does not involve corrupt files - T. Verron 2015-03-01 19:25
You're right, but my answer isn't geared towards the OP, since this was over four years ago. My answer is aimed to help those with MRD errors indirectly caused by corrupt files - since this is one of the first search results. It helped someone, since it was up voted. Thanks for the down vote - Tyler 2015-03-02 20:36
This was the only thing I found anywhere when searching for my issue that connected a "max recursion depth" traceback to a corrupted file. Thanks - Jeff 2017-07-18 17:23


6

resource.setrlimit must also be used to increase the stack size and prevent a segfault

The Linux kernel limits the stack of processes.

Python stores local variables on the stack of the interpreter, and so recursion takes up stack space of the interpreter.

If the Python interpreter tries to go over the stack limit, the Linux kernel segfaults it.

The stack limit size is controlled with the getrlimit and setrlimit system calls.

Python offers access to those system calls through the resource module.

import resource
import sys

print resource.getrlimit(resource.RLIMIT_STACK)
print sys.getrecursionlimit()
print

# Will segfault without this line.
resource.setrlimit(resource.RLIMIT_STACK, [0x10000000, resource.RLIM_INFINITY])
sys.setrecursionlimit(0x100000)

def f(i):
    print i
    sys.stdout.flush()
    f(i + 1)
f(0)

Of course, if you keep increasing ulimit, your RAM will run out, which will either slow your computer to a halt due to swap madness, or kill Python via the OOM Killer.

From bash, you can see and set the stack limit (in kb) with:

ulimit -s
ulimit -s 10000

The default value for me is 8 MB.


Tested on Ubuntu 16.10, Python 2.7.12.

2017-01-28 23:40
by XXX
Attempting to set rlimit_stack after Stack Clash remediations may result in failure or related problems. Also see Red Hat Issue 1463241 - jww 2017-06-21 16:35
I used this (the Python resource part) to help my implementation of Kosaraju's algorithm on professor Tim Roughgarden's mean (huge) dataset. My implementation worked on small sets, certainly the issue with a large dataset was the recursion/stack limit... Or was it? Well, yes it was! Thanks - nilo 2019-01-28 08:04


6

If you often need to change the recursion limit (e.g. while solving programming puzzles) you can define a simple context manager like this:

import sys

class recursionlimit:
    def __init__(self, limit):
        self.limit = limit
        self.old_limit = sys.getrecursionlimit()

    def __enter__(self):
        sys.setrecursionlimit(self.limit)

    def __exit__(self, type, value, tb):
        sys.setrecursionlimit(self.old_limit)

Then to call a function with a custom limit you can do:

with recursionlimit(1500):
    print(fib(1000, 0))

On exit from the body of the with statement the recursion limit will be restored to the previous value.

2018-05-01 16:37
by Eugene Yarmash


4

Use generators?

def fib():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

fibs = fib()  # assign the infinite generator to a variable so each .next()
              # call below pulls the next value from the same generator

f = [fibs.next() for x in xrange(1001)]

for num in f:
    print num

The fib() function above is adapted from: http://intermediatepythonista.com/python-generators

2015-07-30 18:32
by alex
the reason for having to assign a generator to a variable is because [fib().next() for ...] would make a new generator each time - tox123 2016-08-10 19:02


4

If you want to get only a few Fibonacci numbers, you can use the matrix method.

from numpy import matrix

def fib(n):
    return (matrix('0 1; 1 1', dtype='object') ** n).item(1)

It's fast because numpy uses a fast exponentiation algorithm, so you get the answer in O(log n). And it's better than Binet's formula because it uses only integers. But if you want all Fibonacci numbers up to n, then it's better to do it by memoisation.
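
If numpy isn't available (as a comment below notes for competitive-programming judges), the same O(log n) idea can be sketched with plain Python integers and a hand-rolled 2x2 matrix power; mat_mult and the variable names here are just illustrative:

def mat_mult(a, b):
    # 2x2 integer matrix product
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def fib(n):
    result = [[1, 0], [0, 1]]      # identity matrix
    base = [[0, 1], [1, 1]]
    while n:                       # exponentiation by squaring: O(log n) multiplications
        if n & 1:
            result = mat_mult(result, base)
        base = mat_mult(base, base)
        n >>= 1
    return result[0][1]            # same entry as .item(1) in the numpy version

print(fib(1000))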

2018-02-17 15:57
by bebidek
Sadly you can't use numpy in most competitive programming judges. But yes sir, your solution is my favorite.

I've used the matrix solution for some problems. It is the best solution when you need a very large Fibonacci number and you can't use a modulus.

If you are allowed to use a modulus, the Pisano period is the better way to do it - mentatkgs 2018-06-09 01:49



2

Many recommend that increasing the recursion limit is a good solution, but it is not, because there will always be a limit. Instead, use an iterative solution.

def fib(n):
    a,b = 1,1
    for i in range(n-1):
        a,b = b,a+b
    return a
print fib(5)
2016-04-13 08:56
by Harun ERGUL


1

As @alex suggested, you could use a generator function to do this. Here's the equivalent of the code in your question:

def fib(n):
    def fibseq(n):
        """ Iteratively return the first n Fibonacci numbers, starting from 0 """
        a, b = 0, 1
        for _ in xrange(n):
            yield a
            a, b = b, a + b

    return sum(v for v in fibseq(n))

print format(fib(100000), ',d')  # -> no recursion depth error
2016-07-12 16:58
by martineau