An exercise in converting an arbitrary video clip into ASCII art.

One of the throwback activities in computing, which has been around since at least the 1990s, is so-called ‘ASCII art’, in which regular text characters represent an image.

When this form of art is created by a human, it can look quite nice. But, if a mere computer program is given a sequence of images to convert into characters in a batch process, the results are usually inferior, because all the program can do is translate each cell of the image into an ASCII character, whose brightness is supposed to represent the original brightness of that cell. The complex shapes of the actual text characters are not taken into account – at least, not by any programs I have access to – and those shapes also interfere with the viewer’s ability to recognize the intended image, because they amount to random ‘noise’. Plain grey-scale tiles would probably have made the image easier to recognize.
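As a sketch of what such a batch process amounts to (this is not the actual program I used; the cell sizes, the character ramp, and the function name below are all assumptions of mine), the core of the conversion can be written in a few lines of Python:

```python
# Hypothetical sketch: map each cell of a grey-scale frame to an ASCII
# character whose visual density approximates the cell's brightness.
# The frame is a plain 2-D list of 0..255 grey values; a real converter
# would first decode video frames (e.g. with 'ffmpeg') into such arrays.

# Characters ordered from darkest to lightest, as seen on a white page.
RAMP = "@%#*+=-:. "

def frame_to_ascii(frame, cell_w=2, cell_h=4):
    """Average the brightness of each cell_w x cell_h cell, and pick one character per cell."""
    h = len(frame)
    w = len(frame[0])
    lines = []
    for top in range(0, h - cell_h + 1, cell_h):
        line = []
        for left in range(0, w - cell_w + 1, cell_w):
            total = sum(frame[y][x]
                        for y in range(top, top + cell_h)
                        for x in range(left, left + cell_w))
            mean = total / (cell_w * cell_h)
            # Scale mean brightness (0..255) to an index into the ramp.
            idx = min(int(mean * len(RAMP) / 256), len(RAMP) - 1)
            line.append(RAMP[idx])
        lines.append("".join(line))
    return "\n".join(lines)

# A uniformly black frame becomes a block of '@'; a white one, of spaces.
print(frame_to_ascii([[0] * 8 for _ in range(8)]))
```

A real converter would additionally need to emit one such block of text per frame, which is exactly the point at which the character shapes begin to act as noise.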

In spite of recognizing this, I have persevered, and converted an arbitrary video clip of mine into ASCII art programmatically. It can be viewed at the following link:

(Link within my Web-site.)

And yes, the viewer would need to enable JavaScript from my site in order to obtain an actual animation, because JavaScript is what advances the actual ‘iframe’.


(Updated 6/26/2021, 14h45… )


Just Created a Working Instance of TinyMCE

One of the more interesting features of JavaScript in years gone by was the in-browser HTML editor named ‘TinyMCE’. This scriptlet runs in the browser, and allows people to edit the contents of a <textarea> in WYSIWYG style, for submission to an arbitrary Web application.

For me, this piece of JavaScript has little use. Other Web applications of mine already give me HTML editing rich enough not to depend on TinyMCE. But there has been a facet of this scriptlet which has long irked beginning users and Web designers: by default, it offers no ‘Save’ menu entry. The reason it does not is twofold:

  1. TinyMCE is meant to be integrated into some more-complex Web page, where its input is also given a defined purpose, and
  2. By itself, this scriptlet just runs in the browser, where it has no privilege to store its edited contents anywhere – neither on the server, nor on the client machine running the browser.

And so some people have wondered how they could exploit this amazing technology just to save the edited HTML locally, to the hard drive of the computer running the browser. There are many possible ways to solve that problem, of which I’ve just implemented one:

It’s possible to add a ‘Submit’ button, which sends the edited content to the server, which can in turn display it as a Web page that the user can save to his hard drive, using the “Save Page As…” menu command belonging to his browser. I cannot think of an easier solution. However, anybody using this mechanism would next also need to open the .HTML file saved to his hard drive, and edit out the parts of it that make it a Web page, trimming the saved file down to just the part that displays between the <body>…</body> tags.
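The server side of that mechanism can be sketched as a minimal CGI program. To be clear, this is not my actual script, just a hypothetical Python illustration: the field name ‘content’ and the use of a GET query string are assumptions (a real deployment would read a POST body instead).

```python
# Hypothetical sketch of a CGI script that echoes a submitted HTML fragment
# back as a complete Web page, which the browser can then "Save Page As...".
import os
from urllib.parse import parse_qs

def wrap_as_page(fragment, title="Saved HTML"):
    """Wrap an HTML fragment in a full page, so that what the user later
    trims out of the saved file is exactly the part outside <body>...</body>."""
    return ('<!DOCTYPE html>\n<html><head><meta charset="utf-8">'
            "<title>%s</title></head>\n<body>\n%s\n</body></html>"
            % (title, fragment))

def handle_request(query_string):
    # 'content' is the assumed name of the <textarea> that TinyMCE replaces.
    fields = parse_qs(query_string)
    fragment = fields.get("content", [""])[0]
    return "Content-Type: text/html; charset=utf-8\r\n\r\n" + wrap_as_page(fragment)

if __name__ == "__main__":
    # A CGI server passes GET form data through the environment.
    print(handle_request(os.environ.get("QUERY_STRING", "")))
```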


(Update 2018/08/13 : )

Because this example of JavaScript sends the text to my server, which echoes it back to the browser, I suppose that in theory I could reprogram my CGI script to keep a complete record of all the text fragments submitted. But in practice I see no point in doing so, and therefore keep no record of what has been typed.

In addition, because I’ve given the URL as an ‘https://’ URL, a Secure Sockets Layer (SSL/TLS) connection gets used, so that no user will need to worry about the communication itself, to and from my server, being monitored by any third party.



The failings of low-end consumer software, to typeset Math as (HTML) MathML.

One of the features which HTML5 has, and which many Web browsers support, is the ability to typeset Mathematical formulae, known as ‘MathML’. Actually, MathML is an XML-based markup language, which also happens to be supported when embedded in HTML.
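For readers who have not seen the syntax, here is a small hand-coded example: the quadratic formula, as MathML that can be embedded directly into an HTML5 page (browser support varies):

```xml
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
  <mi>x</mi><mo>=</mo>
  <mfrac>
    <mrow>
      <mo>-</mo><mi>b</mi><mo>&#xB1;</mo>
      <msqrt>
        <msup><mi>b</mi><mn>2</mn></msup>
        <mo>-</mo><mn>4</mn><mi>a</mi><mi>c</mi>
      </msqrt>
    </mrow>
    <mrow><mn>2</mn><mi>a</mi></mrow>
  </mfrac>
</math>
```

Even this short formula illustrates the problem: writing such markup by hand is tedious, which is exactly why one would want software to export it.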

Wikipedia uses some such solution, partially because they need their formulae to look as sharp as possible at any resolution, but also because they only have so much capacity to store many, many image files. In fact, Wikipedia uses a number of lossless techniques to store sharp images as well as formulae. ( :1 )

But from a personal perspective, I’d appreciate a GUI which allows me to export MathML. It’s fine to learn the syntax and code the HTML by hand, but in my life, the number of syntax variations I’d need to invest time in learning would be almost as great as the total number of software packages I have installed, since each package potentially uses yet another syntax.

What I find, however, is that if our software is open-source, very little of it will actually export to MathML. It would be very nice if we could get our Linux-based LaTeX engines to export to this format, in a way that specifically preserves Math well. But what I find is that, even though I possess a powerful GUI to help me manage various LaTeX renderings – a GUI named “Kile” – that GUI relies on a simple command-line tool named ‘latex2html’. Whatever that tool outputs is what Kile will output, if we tell it to render LaTeX specifically to HTML. ‘latex2html’ in turn depends on ‘netpbm’, which counts as very old, legacy software.

One reason ‘latex2html’ will fail us is that, in general, its intent is to render LaTeX, but not Math in any specific way. And so, just possessing the .TEX files will not guarantee a Linux user that his resulting HTML will be stellar. ‘latex2html’ will generally output PNG images, and will embed those images in the HTML file, on the premise that, aside from the rasterization, PNG format is lossless. Further, if the LaTeX code was generated by “wxMaxima”, using its ‘pdfLaTeX’ export format, we end up with incorrectly aligned syntax, just because that dialect of LaTeX has been optimized by wxMaxima for generating .PDF files next.

(Updated 05/27/2018 : )


Instant Article versus AMP Showdown Looms.

There seems to be a development on the horizon, which has not hit the front pages yet, but which I think will become a major topic in the near future.

Facebook has announced that it is releasing a “WordPress” plugin, which will allow the creation of “Instant Articles”, basically on a WYSIWYG basis. This could potentially become big. It should not be forgotten that Google has a competing software product named “Accelerated Mobile Pages”, or ‘AMP’. This recent news about the WP plugin made me aware for the first time that both products exist.

Apparently, the way Facebook Instant Articles work is that the XML which usually makes up an RSS feed is given extended functionality. It would still get fetched from the server via HTTP/2, but should give a better user experience to the owners of smart-phones, who have long found that regular Web browsing is awkward from any type of phone. Granted, there do exist Web browsers meant to optimize the layout of text dynamically, as well as versions of many sites optimized for phones, but apparently, this all still leaves users wanting.

And so Instant Article XML cannot really be said to be an enhanced type of XML, because extensibility is already stated in the acronym: ‘Extensible Markup Language’. In general, XML may contain definitions of custom tags, followed by actual content that uses those tags. This exists alongside a different usage of XML, in which the tags are simply defined by a specific application, which uses the format to store data.

But Instant Article intends to be XML which contains tags that other XML would not contain. And while any advanced browser capable of subscribing to an RSS feed might also be able to view Instant Articles, the main advantage of this format is supposed to be that it adapts itself to easier viewing on smart-phones specifically. AFAIK, Facebook is also going to rely on its iPhone and Android apps to display Instant Articles in ways that require platform-specific implementation of the XML. Specifically, if the navigation of content is supposed to be possible ‘by tilting the phone’, then this goes beyond what tags defined entirely in XML can do.

The Google product ‘AMP’ is supposedly based not on XML or RSS feeds, but on HTML with added tags, which the browser can interpret thanks to a JavaScript library. This could be seen as analogous to how ‘jQuery’ can be understood by most browsers, because they are also able to download JavaScript libraries and work with those. But AMP is also designed to adapt itself dynamically to the type of browser, as well as to the size of each display, and to give a better user experience than plain old HTML does.
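For illustration, a minimal AMP page looks roughly like the following. This is abbreviated: the official spec also requires a standard boilerplate <style> element, and ‘example.com’ and the image file name are of course placeholders of mine.

```html
<!doctype html>
<html amp>
<head>
  <meta charset="utf-8">
  <!-- The JavaScript library that interprets the added 'amp-*' tags: -->
  <script async src="https://cdn.ampproject.org/v0.js"></script>
  <link rel="canonical" href="https://example.com/article.html">
  <meta name="viewport" content="width=device-width,minimum-scale=1">
</head>
<body>
  <!-- <amp-img> replaces <img>, so the library can size and lazy-load it: -->
  <amp-img src="photo.jpg" width="640" height="360" layout="responsive"></amp-img>
</body>
</html>
```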

One aspect which both these products seem to sport is the intention of providing greater content by way of images and video, and less by way of text. And this is one reason for which private hosting may not play any great role in this area in the near future. When a reader fetches this blog from my server, for instance, the browser only needs to fetch a few kilobytes of data. With images, that can turn into megabytes; with HQ video, gigabytes.

I would not pretend to have the bandwidth needed, to stream video directly to the readers of my blog. And so there may also be little point, for me to look into ‘Instant Article’ or ‘AMP’ authoring for now.

And yet, the dominance of one of these platforms, or both, is likely to be determined on the basis of authoring, as well as of hosting and streaming. AFAIK, ‘AMP’ still needs to be coded by content authors in a somewhat difficult way. The fact that Facebook is releasing a WordPress plugin means that affiliated publishers will also be able to create content more fluidly than before. And the hosting service is likely ‘to have dibs’ before the open-source version is released next month, even if other users would like to get in on the game.

And so it would seem that the pressure is on Google for the moment. But I’m sure that Google will do what the competition does, which will be to offer something in response.

With Instant Article, the source is to be streamed by way of Facebook itself. With ‘AMP’, Google has already made its Cloud Platform available, to act as an additional component to the system, acting as the ‘AMP-Cache’, by which perhaps a less-restricted set of authors will be able to make content available.

And either way, I think that the usage scenario will be, that more in the spirit of how television used to work, viewers will be able to select their content, by tuning in to a specific feed they’re interested in, maybe to get up-to-date information.

For the past 20 years or so of the WWW, HTML has dominated the scene. I see this development as a potentially valid form of progress, especially since it does not seem to grant a monopoly to the providers. I welcome ‘AMP’ and ‘Instant Article’ content to my phone.

As far as my personal blog is concerned, while I cannot stream, I also have a solution. I can upload a video I’d like people to see to YouTube, and can drag-and-drop the YouTube links into my blog. There, they form URLs that seem to play as if embedded in my blog entry, while truly being streamed from the Google / YouTube servers. For my purposes this should be good enough, since I don’t produce a lot of video footage that would truly fascinate my blog readers.

And yet, in comparison, I also appreciate the fact that there is no regulatory system in place which would tell me that I cannot use HTML and PHP in this way. And therefore, I also appreciate that the extended usage of XML and JS libraries seems to be opening up new possibilities.

Only, I don’t think that many viewers are aware of this yet, since in many cases ‘Instant Article’ and ‘AMP’ content is already being provided in ways that do not announce its presence, while working on our phones.