I am implementing Kosaraju's Strongly Connected Components (SCC) graph search algorithm in Python.
The program runs fine on small data sets, but when I run it on a very large graph (more than 800,000 nodes), it reports "Segmentation fault".
What might be the cause of it? Thank you!
Additional info: I first got this error when running on the very large data set:
"RuntimeError: maximum recursion depth exceeded in cmp"
Then I raised the recursion limit with
sys.setrecursionlimit(50000)
but got a 'Segmentation fault'
Believe me, it's not an infinite loop; it runs correctly on relatively smaller data. Is it possible that the program exhausted the resources?
From the Python documentation for sys.setrecursionlimit: "The highest possible limit is platform-dependent. A user may need to set the limit higher when she has a program that requires deep recursion and a platform that supports a higher limit. This should be done with care, because a too-high limit can lead to a crash."
You didn't specify an OS. The reference to crash might mean a segmentation fault on your OS. Try a smaller stack. But IIRC the algorithm you're using puts the entire SCC on the stack, so you may run out of stack. - James Thiele 2012-04-05 21:17
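A workaround often used for exactly this situation (not something confirmed in this thread, just a minimal sketch; deep_recursion is a stand-in for the recursive DFS, and the limit and stack size are arbitrary example values) is to raise the recursion limit and run the search in a worker thread whose stack size you control with threading.stack_size:

import sys
import threading

def deep_recursion(depth):
    # stand-in for the recursive DFS pass of Kosaraju's algorithm
    if depth > 0:
        deep_recursion(depth - 1)

def main():
    deep_recursion(100000)        # far beyond the default limit of ~1000 frames

sys.setrecursionlimit(200000)     # allow deep recursion at the Python level
threading.stack_size(2 ** 27)     # request a 128 MB stack for threads created after this call
worker = threading.Thread(target=main)
worker.start()
worker.join()

The key point is that threading.stack_size only affects threads created after the call, so the deep recursion has to happen in the worker thread rather than in the main thread.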
This happens when a Python extension (written in C) tries to access memory beyond its reach.
You can trace it in the following ways:
Add sys.settrace at the very first line of the code (a minimal sketch is shown after the gdb commands below).
Use gdb as described by Mark in this answer. At the command prompt:
gdb python
(gdb) run /path/to/script.py
## wait for segfault ##
(gdb) backtrace
## stack trace of the C code
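For the sys.settrace option above, here is a minimal sketch (the function name trace_lines is arbitrary) that prints every line as it executes; the last line printed before the crash points at where the segfault happens:

import sys

def trace_lines(frame, event, arg):
    # print the file and line number of every executed line;
    # the last one printed before the crash is the culprit
    print(frame.f_code.co_filename, frame.f_lineno)
    return trace_lines

sys.settrace(trace_lines)
# ... the rest of the script goes below this point ...

Expect this to slow the program down considerably, so use it only while hunting the crash.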
I understand you've solved your issue, but for others reading this thread, here is the answer: you have to increase the stack that your operating system allocates for the Python process.
The way to do it is operating-system dependent. On Linux you can check your current value with the command ulimit -s
and you can increase it with ulimit -s <new_value>
Try doubling the previous value and continue doubling if it does not work, until you find one that does or you run out of memory.
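If you would rather do this from inside the script instead of the shell, the standard-library resource module exposes the same limit; a minimal sketch, Unix only, and note that on some platforms the raised limit only helps threads or child processes started after the call:

import resource

# raise the soft stack limit to the hard limit for this process (Unix only)
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
resource.setrlimit(resource.RLIMIT_STACK, (hard, hard))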
Run lsof and use grep or wc -l to keep track of everything. - cdated 2013-02-05 17:57
A segmentation fault is a generic error; there are many possible reasons for it.
Updating the ulimit worked for my Kosaraju's SCC implementation, fixing the segfault in both the Python (a Python segfault... who knew!) and C++ implementations.
On my Mac, I found the possible maximum via:
$ ulimit -s -H
65532
A Google search brought me to this article, and I did not see the following "personal solution" discussed.
My recent annoyance with Python 3.7 on Windows Subsystem for Linux is that, on two machines with the same Pandas library, one gives me a segmentation fault and the other reports a warning. It was not clear which one was newer, but "re-installing" pandas solves the problem.
The command that I ran on the buggy machine:
conda install pandas
More details: I was running identical scripts (synced through Git), and both are Windows 10 machines with WSL + Anaconda. Here are the screenshots to make the case. Also, on the machine where command-line python complains about "Segmentation fault (core dumped)", Jupyter Lab simply restarts the kernel every single time. Worse still, no warning is given at all.