Two Examples of Improper Integrals

In a recent posting I proposed to answer a question using an indefinite integral, where the question would more correctly have been solved using the corresponding, definite integral. The issue there was that, if the integral were rewritten as some arbitrary definite integral, the result could in some cases have been what’s called an ‘improper integral’. And what my reader may not realize is that improper integrals can exist with well-behaved solutions, just as some infinite series converge.
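As a one-line reminder of what a well-behaved improper integral looks like, written here as LaTeX source (my own example, not necessarily one taken from the work-sheet):

    % An infinite interval of integration, yet a finite area:
    \int_{1}^{\infty} \frac{1}{x^{2}} \, dx
      = \lim_{b \to \infty} \left( 1 - \frac{1}{b} \right)
      = 1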

And so, I have written a work-sheet below, which reminds people who may not remember their Calculus 2 exactly of what forms improper integrals can take:

Link to a Letter-Sized PDF File

Link to an EPUB File for Phones

 

(Edit 6/05/2019, 18h25 : )

I have just revised the work-sheets above, to include some plots, and to provide a clearer explanation to anybody who might be interested in them but who did not study Calculus 2. However, some readers of the EPUB version may notice wonky formatting.

When I export Math notation to regular HTML, or to anything which is based on regular HTML, such as an EPUB File which is not using MathML, I am faced with a problem every time correct Mathematical notation requires that 3 glyphs be stacked vertically, as is the case with the (definite) integral operator, and with the Sigma operator, the latter of which denotes a summation. The only way I see around this issue is to give the operator in question both a subscript and a super-script, set beside it rather than above and below it.
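To illustrate with generic LaTeX source (my illustration, not the exact markup which LyX emits), the two forms differ only in where the limits are placed:

    % Stacked form: the limits sit above and below the operator,
    % so that the operator is effectively 3 glyphs tall.
    \sum\limits_{n=1}^{\infty} \frac{1}{n^{2}}

    % Fallback form: the same limits, as a subscript and a super-script,
    % which output based on regular HTML can still display.
    \sum\nolimits_{n=1}^{\infty} \frac{1}{n^{2}}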

While the result can be read and understood, doing so requires additional concentration from the reader. I’ve written earlier postings in which I described this problem. But the advantages here are a notation which regular EPUB readers can display, as well as my ability to include the Computer Algebra, and thus the plots, of “SageMath”, using the “LyX” graphical front-end to LaTeX, which makes the typesetting easier for me.


 

This limitation does not exist when exporting the results only to a PDF File. But, in order to take advantage of the more-correct formatting of the resulting PDF File, I’d need to create two separate versions of my own document: one for export to (exact) PDF, and one for export to (messy) EPUB. While I did this amount of work for simpler work-sheets, I’m unwilling to do it for the more-complex work-sheet I just linked to.

 

(Edit 6/05/2019, 18h55 : )

There’s an added challenge to me, in the form of something my particular software is unable to do. When I’m using LyX to typeset my work-sheets, the following two possibilities emerge:

  • Those work-sheets may consist entirely of Math written in my own hand, in which case I am able to export them to an XHTML File that contains MathML. This enables me to set up a master document in which 2, but not 3, elements can be stacked. The limit (‘lim’) notation will then be correct, and the integrals will be so-so. The resulting master document can then be exported in two ways that eventually end up as a PDF File and as an EPUB3 File, the latter requiring MathML support from the EPUB reader app.
  • Those work-sheets may contain Computer Algebra and/or Plots that are essential, in which case only .tex Files of the SageTeX variety can be exported, which in turn can only be converted into plain HTML. This will result in an EPUB File that is inferior, but that all mobile EPUB reader apps can view. At the same time, through a separate master document and additional work on my part, a pristine PDF File can result, which still requires a full-sized monitor or other large output device to read.

So, unless I find a way to export SageTeX Files specifically to XHTML with MathML, I’ll be facing issues in how to create typeset documents in the near future.

Dirk

 

Computers and Floating-Point Numbers: In Layman’s Terms

There exist Wikipedia pages that do explain how single- and double-precision floating-point numbers are formatted – both of which are used by computers – but which are so heavily bogged down with tedious details, that the reader would already need to be a Computer Scientist, in order to understand them. In that case, those articles can act as a quick reference. But they do little to explain the subject to laypeople. The mere fact that single- and double-precision numbers are explained on the Wiki in two separate articles could already deter most people from trying to understand the basic concepts.

I will try to explain this subject in basic terms.

While computers store data in bits that are organized into words – those words being either 32-bit or 64-bit words on most popular architectures – the CPU interprets those words as representing numbers in several different ways. One way is as signed or unsigned ‘integers’, which is another way of saying ‘whole numbers’. Another is as 32-bit or 64-bit floating-point numbers. Obviously, the floating-point numbers are used to express fractions, as well as very large or very small values, as well as fractions which are accurate to a high number of digits ‘after the decimal point’.
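In fact, the same word of bits yields completely different numbers, depending on which interpretation is applied. A minimal sketch in Python, using the standard ‘struct’ module (my own illustration):

    import struct

    # Pack the 32-bit float 1.0 into its raw four bytes, then
    # reinterpret those same four bytes as an unsigned 32-bit integer.
    raw = struct.pack('<f', 1.0)
    as_int = struct.unpack('<I', raw)[0]
    print(hex(as_int))  # 0x3f800000 – the bit pattern of the float 1.0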

A CPU must be given an exact opcode to perform Math on the different representations of numbers, where what type of number they are is already decided at compile-time, by which opcode has been encoded to use them. So obviously, some non-trivial Math goes into defining how these different number-formats work. I’m going to focus on the two most-popular floating-point formats.

Understanding how floating-point numbers work on computers first requires understanding how Scientists use Scientific Notation. In the Engineering world, as well as in the household, what most people are used to is that the digits a number has to the left of the decimal-point be grouped in threes, and that the magnitude of the number be expressed with prefixes such as kilo-, mega-, giga-, tera-, peta-, or, going in the other direction, with milli-, micro-, nano-, pico-, femto- or atto-.

In Science, this notation is so cumbersome that Scientists try to avoid it. What Scientists will do is state a field of decimal digits, which will always begin with a single (non-zero) digit, followed by the decimal point, followed by an arbitrary number of fractional digits, followed by a multiplication-symbol, followed by the base of 10 raised to either a positive or a negative power. This power also states how many places further right, or how many places further left, the reader should visualize the decimal point. For example, Avogadro’s number is expressed as

6.022 × 10²³

if we are told to limit our precision to 3 places after the decimal point. If we were told to give 6 places after the decimal point, we would give it as

6.022141 × 10²³

What this means is that, relative to where it is written, the decimal point would need to be shifted 23 places to the right, to arrive at a number that has the correct order of magnitude.

When I went to High-School, we were drilled to use this notation ad nauseam, so that even if it seemed ridiculous, we could answer in our sleep that expressing how much ‘a dozen’ is, using Scientific Notation, yields

1.2 × 10⁺¹

More importantly, Scientists feel comfortable using the format because they can express such ideas as ‘how many atoms of regular matter are thought to exist in the known universe’, as long as they are not ashamed to write a ridiculous power of ten:

1 × 10⁸⁰

Or, how many stars are thought to exist in our galaxy:

( 2 × 10¹¹ … 4 × 10¹¹ )

The latter of which should read, ‘from 200 billion to 400 billion’.
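Incidentally, this notation survives in most programming languages as ‘e-notation’, which makes the Avogadro examples above easy to verify. A quick check in Python, assuming the currently-defined value of Avogadro’s number:

    avogadro = 6.02214076e23    # 6.02214076 × 10²³
    print(f"{avogadro:.3e}")    # prints 6.022e+23
    print(f"{avogadro:.6e}")    # prints 6.022141e+23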

When Computing started, its Scientists had the idea to adapt Scientific Notation to the Binary Number System. What they did was to break down the available word-sizes, essentially, into three fields:

  1. A so-called “Significand”, which would correspond to the Mantissa,
  2. An Exponent,
  3. A Sign-bit for the entire number.

The main difference from Scientific Notation, however, was that floating-point numbers on computers would do everything in powers of two, rather than in powers of ten.
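Python offers a direct way to see this decomposition, through the standard ‘math.frexp()’ function, which rewrites any float as a mantissa times a power of two:

    import math

    # 3.0 is rewritten as 0.75 × 2²: a mantissa, and a power of two.
    print(math.frexp(3.0))   # (0.75, 2)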

A standard, 32-bit floating-point number reserves 23 bits for the fraction, and 8 bits for the exponent of 2, while a standard, 64-bit floating-point number reserves 52 bits for the fraction, and 11 bits for the exponent of 2. This assignment is arbitrary, but sometimes necessary to know, for implementing certain types of subroutines or hardware.
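Assuming the standard IEEE 754 layout just described, those three fields can be pulled out of a 32-bit number as follows (a sketch, not production code):

    import struct

    def float32_fields(x):
        # Obtain the raw 32 bits of the single-precision encoding of x.
        bits = struct.unpack('<I', struct.pack('<f', x))[0]
        sign     = bits >> 31            # 1 bit
        exponent = (bits >> 23) & 0xFF   # 8 bits, stored with a bias of 127
        fraction = bits & 0x7FFFFF       # 23 bits of the Significand
        return sign, exponent, fraction

    # 0.75 is +1.1 (binary) × 2⁻¹, so the stored exponent is 127 − 1 = 126:
    print(float32_fields(0.75))   # (0, 126, 4194304)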

But one thing that works as well in binary as it does in decimal, is that bits can occur after a point just as easily as they can occur before one.

Hence, this would be the number 3 in binary:

11

While this would be the fraction 3/4 in binary:

0.11
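This arithmetic is easy to confirm in Python, where shifting the binary point two places to the left amounts to dividing by 2² = 4:

    print(0b11)       # 3
    print(0b11 / 4)   # 0.75 – i.e., binary 0.11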



Why Programming Environments deliberately set a Limit to Recursion-Depth

One of the facts about Computing which new programmers need to know is that, although the concept of basing their code on ‘Infinite Recursion’ sounds intriguing, if this is in fact attempted, by code which is missing the necessary end-condition that caps further recursion, then executing the code will in principle consume the entire available memory within a few seconds, possibly without the programmer even noticing. And it will fail to produce a result.

Then, unless something external intervenes, an Out-Of-Memory condition will take place, which can crash the session. In that case, only the programmer’s peers will be laughing, while the actual programmer might find the whole result rather frustrating.

This is why running images always have a maximum stack-depth set, and why even LISP needs to have a limit set, because code that tries to recurse infinitely would be easy to write.

The special case which provides a direct result – the base case – needs to be tested for, before the code is instructed to attempt recursion in general.
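A minimal sketch in Python of what that ordering looks like (my own example):

    def factorial(n):
        # Base case first: a value which has a direct answer,
        # tested before any deeper recursion is attempted.
        if n <= 1:
            return 1
        # Only then recurse, on a strictly smaller argument.
        return n * factorial(n - 1)

    print(factorial(5))   # 120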

If a programmer needs very deep recursion, he can set this limit to some ridiculously-high value, but it needs to remain finite, so that in the event of an innocent coding error, control is returned to the programmer within seconds.
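Python, as one example, exposes exactly such a cap, and allows the programmer to raise it while keeping it finite:

    import sys

    print(sys.getrecursionlimit())   # typically 1000 by default, in CPython
    sys.setrecursionlimit(100000)    # raise the cap, but keep it finite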

Likewise, I have heard from a fellow programmer that he was once writing innocent code that included hard-drive I/O. And, due to a simple error, his program filled up much of his hard-drive with useless data within seconds, before he could realize what was happening and stop it.

Dirk