Why Programming Environments Deliberately Set a Limit on Recursion Depth

One fact about computing that new programmers need to learn is this: however intriguing the concept of 'infinite recursion' may sound, code that actually attempts it, because it is missing the end condition that caps further recursion, will in principle keep allocating stack frames until it has consumed all available memory, often within seconds and possibly without the programmer even noticing. It will also fail to produce any result.

Then, unless something external intervenes, an out-of-memory (OOM) condition will occur, which can crash the entire session. At that point, only the programmer's peers will be laughing, while the programmer is likely to find the whole result rather frustrating.

This is why running images always have a maximum stack depth set. Even LISP needs such a limit, because it is all too easy to write code that tries to recurse forever.
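As a sketch of what this limit does in practice, the following Python snippet (function name `countdown` is my own illustration, not from the original text) defines a recursive function with no end condition. Instead of filling all memory, the interpreter's built-in depth limit stops it and raises a `RecursionError` within a fraction of a second:

```python
import sys

def countdown(n):
    # No base case: left unchecked, this would recurse forever.
    return countdown(n - 1)

try:
    countdown(10)
except RecursionError:
    # The runtime's stack-depth cap intervened before memory ran out.
    print("stopped at the recursion limit of", sys.getrecursionlimit())
```

In CPython the default limit is commonly around 1000 frames, so control returns to the programmer almost immediately rather than after the machine has been exhausted.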

Correct recursive code must test for the special case that yields a direct result, the base case, before instructing the program to recurse any further.
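A minimal example of that discipline, using the classic factorial function (my own illustration, not from the original text): the direct-result case is checked first, and only then does the function recurse.

```python
def factorial(n):
    # Base case tested first: a direct result, no further recursion.
    if n <= 1:
        return 1
    # General case: recurse on a strictly smaller argument,
    # so each call moves closer to the base case.
    return n * factorial(n - 1)

print(factorial(5))  # → 120
```

Because every recursive call shrinks the argument toward the base case, the recursion is guaranteed to terminate.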

If a programmer genuinely needs very deep recursion, the limit can be raised to some very high value, but it must remain finite, so that in the event of an innocent coding error, control is returned to the programmer within seconds.
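In Python, for instance, the limit can be raised with `sys.setrecursionlimit`. A sketch, assuming a helper `depth` of my own invention, which needs about 5000 stack frames, well past CPython's usual default of roughly 1000 (note that setting the limit absurdly high can still overflow the underlying C stack, so it should be raised only as far as actually needed):

```python
import sys

old_limit = sys.getrecursionlimit()  # commonly around 1000 in CPython
sys.setrecursionlimit(20000)         # much higher, but still finite

def depth(n):
    # Counts n recursive steps; needs n stack frames.
    return 0 if n == 0 else 1 + depth(n - 1)

print(depth(5000))  # exceeds the default limit, fine under the raised one

sys.setrecursionlimit(old_limit)     # restore the original cap
```

Restoring the old limit afterwards keeps the safety net in place for the rest of the session.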

Likewise, a fellow programmer once told me that he was writing innocent code that included hard-drive I/O. Due to a simple error, his program filled much of his hard drive with useless data within seconds, before he could realize what was happening and stop it.