One specific feature of the BOOX Max2 e-Reader falls short.

In a previous posting I wrote that I am having a good experience so far with my newly-acquired 13.3″ Onyx BOOX Max2 e-Reader. One detail which I mentioned was an eventual need to connect this device to a PC or Laptop via USB. As an alternative, what many users expect is some way to use Wi-Fi to transfer files. Yet, ‘Samba’ is the Linux implementation of the Windows-based ‘SMB Protocol’, and Onyx does not step outside its boundaries to offer either. However, they try to make up for this.

If the user has the firmware update from December 2018 installed, then under ‘Apps -> Transfer’ there is a modest app which will act as a minimal Web-server. In addition to displaying a QR-code, it also displays a URL with an IP-address and a port-number, which exist on the LAN, and at which the user is meant to point a PC- or Laptop-based Web-browser. There is no reason why the functionality of this URL would be limited to one O/S under which the Web-browser is running.

A Web-page then displays in the PC browser, which can use the inherent functionality of browsers to choose files on the PC, and to upload those to the server, which resides on the e-Reader for as long as the user doesn’t close this app.

  • This feature is mainly meant to allow e-Books to be transferred, and the choice of folders they appear in suggests that it wasn’t fully meant for other types of files, such as APK-Files.
  • Onyx could have tried a little harder and made it possible to transfer files in the opposite direction, let’s say from a specific folder on the e-Reader to the Web-browser of the PC… Some people might think that this is redundant for an e-Reader, because Books can be given to the e-Reader and later deleted from it. However, because the Max2 can also be used to store sketches made with its Stylus, such a feature would not truly have been redundant.

So, because of the two bulleted reasons above, the need will ultimately remain to connect this device to a PC via USB.




One statement I made early in this blog was that I would not endorse or indict any purely commercial products. I am sorry to say that right now, I feel the need to break that promise to some degree.

I just bought myself a ‘Roku’ – and when we say that word, we are supposed to roll the ‘R’. In order not to praise this little gadget too much, I will try to keep it short on this subject.

A Roku is an Internet-TV device, similar to Apple TV. We connect it to a digital TV via an HDMI cable, and also connect it to the Internet, and then we can watch movies on it in 1080p, streamed to us over the Internet. It has the option of an Ethernet connection, but also has a blazing-fast 802.11ac WiFi capability.

One advantage it has over Apple TV, is that it is multi-sourced. It allows us to subscribe to a plethora of channels, each of which is being sold by an independent group of entrepreneurs.

One channel in its list that is important – to the success of Roku – is ‘Netflix’. The first thing I did when I set it up was to activate the Netflix account I already hold. And at that point something happened which was unusual for me. I started a Netflix movie – on an account which I usually only use to set up laptops and other configurations – but then became absorbed in the movie! I mean, I have the ability to watch this same movie on numerous other devices, but never really got into the mood to do so. But then, to be able to watch the same movie on my big living-room TV put me in the mood.

And I would say that this is a major selling-point for the Roku, especially for Linux-people like me. It often happens to us that we can only watch content in reduced resolution. For example, if we watch Netflix in a Linux-based Chrome browser, I think we are limited to 720p. So to have that access to 1080p may mean something to certain audiences.

On that note, the movie I just finished watching – “Ex Machina” – was also intended as a test, to see whether my current Internet configuration can support the bit-rate. But since I also just received a new Modem, capable of 802.11ac speeds, the answer was a clear Yes for me. I was expecting a lot of troubling spots within the movie – hesitations and hangs. But in truth, there was only a single stutter, when a buffering-symbol appeared for a few seconds, because just at that moment my speed did not keep up. But for the most part the playback was good.

Another note about this device would be that one channel we can link is our ‘CinemaNow’ channel. The way this works, for the most part, is that we need to have a Netflix subscription in order to link that to Roku. Likewise, we need to have a CinemaNow subscription in order to link that as well.

CinemaNow is a service which allows us to buy movie-licenses online, to add those to a collection and view them. But unless I am mistaken, I think that our CinemaNow account will also show all our ‘Ultraviolet’ licenses. This could mean that if somebody out there has a large Ultraviolet collection, he should be able to collect those into his CinemaNow account, and then watch them on Roku in 1080p! Please do not be angry with me, if I got this wrong.

I have yet to link my own CinemaNow account, because my phone is charging right at this instant. The way it works, we control our master-account – if you will – from a PC Web-browser. But then, in order to browse the Channel Store, the recommended thing to do is to download either the Android or the iOS app, and to browse it from our mobile device. We set up a PIN number to restrict what we can buy from our mobile devices.

When my phone has finished charging, I think the next thing I will do is link my CinemaNow account.

There is one word of caution which I must add however. Much of what Roku customers get to see, is based on having paid subscriptions of one sort or another. What we get to see purely for free, is somewhat limited. We can watch “Sky News”. But I do not know how enthusiastic most people would be about Sky News. ;) Sorry. Now I have not only endorsed the product, but cast an aspersion on it as well…


(Edit : ) I have just linked my CinemaNow account to my Roku device, and doing so worked like a charm. Mind you, my own Ultraviolet collection is small, but all my movies are there!

In order not to hurt any brand-names, I should also add, that the Roku device itself can go up to 4K resolution – if the content is available. But I, personally, just do not have a 4K TV to benefit from that.

And, I hear that CinemaNow also offers some free content, but I think that the free content by itself is also not the main attraction for CinemaNow.

Further, if the reader does not think much of Sky News, there is a better alternative: ‘Reuters TV’, and it is free.

And, we can link our ‘Google Play Movies & TV’ as an additional channel, but granted, this assumes we have purchased content from that source. I have not.

(Edit 02/12/2017 : ) I have discovered that for some reason, our Ultraviolet collection on CinemaNow will not play. I get to see my full collection, but if I click on any one movie to play it, I get a message-box which says: “The selected content is currently unavailable. Please try again later.” :(


Two Hypothetical Ways, in which Push Notifications Could Work Over WiFi

The reality is that, being 52 years old and having only studied briefly in my distant past, my formal knowledge in Computing is actually lacking these days, and one subject which I know too little about is how Push Notifications work. Back in my day, if a laptop was ‘asleep’ – i.e. in Standby – it was generally unable to be woken externally via WiFi, but did have hardware clocks that could wake it at scheduled times. Yet we know that mobile devices today, including Android and iOS devices, are able to receive push notifications from various servers, which do precisely that, and that this feature even works from behind a firewall. And so I can muse over how this might work.

I can think of two ways in which this can hypothetically work:

  1. The application framework can centralize the receipt of push notifications for the client device to one UDP port number. If that port number receives a packet, the WiFi chip-set wakes up the main CPU.
  2. Each application that wants to receive them can establish a client connection in advance, to the server which is to send them.

The problem with approach (1) is that, behind a firewall, by default, a device cannot be listening on a fixed port number known to it. I.e., the same WAN IP address could be associated with two devices, and a magic packet sent to one fixed port number – even if we know that IP address – cannot be mapped to wake up the correct device. But this problem can be solved via UPnP, so that each device could open a listening port number for itself on the WAN, and know what its number is.
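As a thought-experiment, the mechanics of approach (1) can be sketched in a few lines of Python. Everything here is hypothetical – the port number, the ‘WAKE’ payload, and the use of localhost stand in for whatever a real chip-set and its firmware would actually do:

```python
# Sketch of approach (1): one fixed UDP port receives "wake" packets.
# The port number and payload are hypothetical choices, not anything
# that Android or a real base-band actually uses.
import socket
import threading

WAKE_PORT = 40000  # hypothetical fixed port, assumed reachable (e.g. via UPnP)

def listen_for_wake(ready, result):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", WAKE_PORT))
    ready.set()                       # signal that we are now listening
    data, addr = sock.recvfrom(1024)  # block until any packet arrives
    result.append(data)               # real firmware would wake the main CPU here
    sock.close()

ready = threading.Event()
result = []
listener = threading.Thread(target=listen_for_wake, args=(ready, result))
listener.start()
ready.wait()

# Any device that knows the IP address and port can send the magic packet:
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"WAKE", ("127.0.0.1", WAKE_PORT))
sender.close()

listener.join()
print(result[0])  # b'WAKE'
```

Note that nothing in the sketch authenticates the sender, which is exactly the weakness of this approach.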

We do not always know that UPnP is available for every NAT implementation.

Approach (2) requires more from the device, in that a base-band CPU needs to keep a list of which specific UDP ports on the client device will be allowed to wake up the main CPU, if that port receives a packet.

Presumably, this base-band CPU would also first verify, that the packet was received from the IP address, which the port in question is supposed to be connected to, on the other side, before waking the main CPU.

(Edit 12/19/2016 : Google can simply decide that after a certain Android API Number – i.e., Android version – the device needs to have specific features, that earlier Android APIs did not require.

Hence, starting from Android KitKat, or Lollipop, Google could have decided that it was no longer a special app permission, for the user to acknowledge, to wake the device. Likewise, starting from some Android version, possessing a base-band CPU might have become mandatory for the hardware, so that the API can offer a certain type of push notification.)

Also, approach (1) would have as a drawback a lack of authentication. Any networked device could just send this magic packet to any other networked device, provided that both the IP address and the port number it is sensitive to are known.

Approach (2) would bring as an advantage that only specific apps on the client device could be enabled to receive push notifications, and the O/S would be aware of which UDP ports those are sensitive on, so that the base-band CPU would only wake up the main CPU if a push notification was received and associated with an app authorized to wake the device.

Also, with approach (2), the mapping of WAN port numbers back to LAN port numbers would still take place passively, through port triggering, so that the WAN-based server does not need to know, what LAN-based port number the connected port is associated with on the client device.

But approach (2) has as a real drawback that a server would need to keep a socket open for every client it might want to send a push notification to. This might sound unimportant but is really not, since many, many clients could be subscribed to one service, such as Facebook. Are we to assume, then, that the Facebook server also keeps one connection open to every client device? And if that connection is ever dropped, should we assume that a sea of client devices reconnect continuously, as soon as their clocks periodically wake them?
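Again purely as a sketch, approach (2) would look something like the following: a hypothetical server keeps one open TCP connection per subscribed client, and pushes a message down it at some later time. The host, port and payload are all made up for illustration:

```python
# Sketch of approach (2): the client connects to the push server in
# advance; the server later sends a notification down that same,
# already-open connection. Host, port and payload are hypothetical.
import socket
import threading

HOST, PORT = "127.0.0.1", 40001  # stand-in for a real push server

def push_server(ready, trigger):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((HOST, PORT))
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()       # one open socket per subscribed client
    trigger.wait()               # ... time passes ...
    conn.sendall(b"NOTIFY:new-message")  # the actual push
    conn.close()
    srv.close()

ready = threading.Event()
trigger = threading.Event()
server = threading.Thread(target=push_server, args=(ready, trigger))
server.start()
ready.wait()

# The client establishes the connection ahead of time, then simply waits.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((HOST, PORT))
trigger.set()                    # in reality, the server decides when to push

payload = b""
while True:                      # read until the server closes the connection
    chunk = client.recv(1024)
    if not chunk:
        break
    payload += chunk
client.close()
server.join()
print(payload)  # b'NOTIFY:new-message'
```

The cost the text describes is visible even in this toy: the server holds `conn` open the whole time, and a real service would hold one such socket per client.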



The laptop ‘Klystron’ suspends to RAM half-decently.

One subject which I have written a lot about was that, as soon as I close the lid of the laptop I name ‘Klystron’, it seems to lose its WiFi signal, and that this can get in the way of comfortable use, because closing the lid also helps shield the keyboard from dust etc.

This Linux laptop boots decently fast, and yet it is still a hassle to reboot very often. And so I needed to come up with a different way of solving my problem on a practical level. My solution for now is to tell the laptop to Suspend To RAM as soon as I close the lid. That way, the WiFi signal is shut down more properly, and when the laptop resumes its session, the scripts that govern this behavior also re-initialize the WiFi chipset and its status on my LAN. This causes less confusion with running Samba servers etc. on my other computers.

There is a bit of terminology here which I do not think the whole population understands, but which I think people are simply using differently from how it was used in my past.

It used to be, that under Linux, we had ‘Suspend To RAM’ and ‘Suspend To Disk’. In the Windows world, these terms corresponded to ‘Standby’ and ‘Hibernate’ respectively. Well in the terminology today, they stand for ‘Sleep’ and ‘Hibernate’, borrowing those terms from mobile devices.

There are two types of Suspend working in any case.

In past days of Linux, we could not cause a laptop just to Hibernate. We needed to install special packages and modify the Grand Unified Bootloader, before we could even Suspend To Disk. Suspending To RAM used to be less reliable. Well one development with modern Linux which I welcome, among many, is the fact that Sleep and Hibernate should, in most cases, work out-of-the-box.

I just tried Sleep mode tonight, and it works 90%.

One oddity: When we Resume on this laptop, a CPU Error message is displayed on the screen numerous times. But after a few seconds of CPU errors, the session is apparently restored without corruption. Given that I have 300+ processes, I cannot be 100% sure that the Restore is perfectly without corruption. But I am reasonably sure, with one exception:

The second oddity is of greater relevance. After Waking Up, the clock of the laptop seems to be displaced 2 days and a certain number of hours into the future. This bug has been observed on some other devices, and I needed to add a script to the configuration files as a workaround, which simply sets the system clock back that many days and that many hours after Waking. Thankfully, I believe that doing so was as much of a workaround as was needed.
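For what it is worth, the arithmetic of such a workaround is trivial. The sketch below only computes the corrected timestamp, assuming a constant, hypothetical offset of 2 days and 3 hours; an actual hook script would additionally need root privileges, to write the result back with something like ‘date -s’:

```python
# Sketch of the clock workaround, assuming the post-resume displacement
# is constant (hypothetically: 2 days and 3 hours). Only the corrected
# timestamp is computed; actually setting the system clock would require
# root and a call such as `date -s` or settimeofday().
from datetime import datetime, timedelta

# Hypothetical constant offset observed after each resume:
RESUME_OFFSET = timedelta(days=2, hours=3)

def corrected_time(observed: datetime) -> datetime:
    """Subtract the known displacement from the (wrong) post-resume clock."""
    return observed - RESUME_OFFSET

# Example: the clock reads Jan 10, 15:00, but the true time is Jan 8, 12:00.
wrong = datetime(2016, 1, 10, 15, 0, 0)
print(corrected_time(wrong))  # 2016-01-08 12:00:00
```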

One side-effect of not having done so before being aware of the problem, was that the ‘KNotify’ alarms for the next two days all sounded, so that it will take another two days before personal organizer – PIM – notifications may sound for me again.

The fact that numerous CPU errors are displayed bothers me not. What that means is that the way the CPU goes to sleep and then wakes up involves power-cycling, in ways that do not guarantee the integrity of data throughout. But it would seem that good programming of the kernel does provide data integrity, with the exception of the system clock issue.

But the fact that the hardware is a bit testy when using the Linux version of Sleep, also suggests that maybe this is also the kind of laptop that powers down its VRAM. It is a good thing then, that I disabled the advanced compositing effects, that involve vertex arrays.


There is a side-note on the desktop cube animation I wanted to make.

In general, when raster-rendering a complex scene with models, each model is defined by a vertex array, an index array, one or more texture images etc., and the vertex array stores the model geometry statically, as relative to the coordinate-origin of the model. Then, a model-view-projection matrix is applied – or just a rotation matrix for the normal vectors – to position it with respect to the screen. Moving the models is then a question of the CPU updating the model-view matrix only.

Well, when a desktop cube animation is the only model in the scene, as part of compositing, I think that the way in which this is managed differs slightly. I think that what happens here is that, instead of the cube having vertex coordinates of +/- 1 all the time, the model-view matrix is kept as an identity matrix.

Instead, the actual vertex data is rewritten to the vertex array, to reposition the vertices with respect to the view.
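The difference between the two update styles can be illustrated in plain Python, with a single rotation standing in for the full model-view matrix. Both of these toy computations are my own illustration, not anything taken from an actual compositor:

```python
# Sketch of the two update styles: (a) keep the vertex data static and
# apply a transformation per frame, vs. (b) rewrite the vertex array
# itself each frame. A pure-Python stand-in for what a GPU pipeline does.
import math

def rotate_z(point, angle):
    """Rotate a 3D point around the Z axis (stand-in for a model-view matrix)."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y, z)

# (a) Static model geometry; only the "matrix" (here, an angle) changes per frame.
static_vertices = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
frame_angle = math.pi / 2
transformed = [rotate_z(v, frame_angle) for v in static_vertices]

# (b) The compositor-style alternative: overwrite the vertex array in place.
rewritten_vertices = list(static_vertices)
for i, v in enumerate(rewritten_vertices):
    rewritten_vertices[i] = rotate_z(v, frame_angle)

# Both styles produce the same on-screen geometry:
assert all(
    abs(a - b) < 1e-9
    for p, q in zip(transformed, rewritten_vertices)
    for a, b in zip(p, q)
)
```

The distinction matters below, because style (b) implies the CPU re-sends fresh vertex data every frame, whereas style (a) leaves the stored vertex data untouched.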

Why is this significant? Well, if it was true that Suspending the session To RAM also cut power to the VRAM, it would be useful to know, which types of data stored therein will seem corrupted after a resume, and which will not.

Technically, texture images can also get garbled. But if all it takes is one frame cycle for texture images to get refreshed, the net result is that the displayed desktop will look normal again, by the time the user unlocks it.

Similarly, if the vertex array of the only model is being rewritten by the CPU, doing so will also rewrite the header information in the vertex array that tells the GPU how many vertices there are, as well as rewriting the normal vectors, as when they are part of any normal vertex animation, etc. So anything resulting from the vertex array should still not look corrupted.

But one element which generally does not get rewritten, is the index array. The index array states in its header information, whether the array is a point list, a line list, a triangle list, a line strip, a triangle strip… It then states how many primitives exist, for the GPU to draw. And then it states sets of elements, each of which is a vertex number.

The only theoretical reason why the CPU would rewrite that would be if the topology of the model were to change, which as good as never happens in practice. And so, if the VRAM gets garbled, what was stored in the index array would be lost – and not refreshed.

And this can lead to the view of numerous nonsensical triangles on the screen, which many of us have learned to associate with a GPU crash.
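To illustrate in a toy, CPU-side way why a damaged index array produces exactly that effect: in the sketch below the vertex data stays intact, but once the indices are scrambled, those same vertices get connected into arbitrary triangles. The quad and the specific garbled values are made up for illustration:

```python
# Why a garbled index array yields nonsense triangles: the vertices are
# still valid, but the indices no longer select meaningful triples.
vertices = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

# A quad expressed as a triangle list: two triangles, as indices into `vertices`.
index_array = [0, 1, 2, 0, 2, 3]

def triangles(verts, indices):
    """Expand a triangle-list index array into actual triangles."""
    return [
        (verts[indices[i]], verts[indices[i + 1]], verts[indices[i + 2]])
        for i in range(0, len(indices), 3)
    ]

good = triangles(vertices, index_array)

# If VRAM garbling scrambles only the index array, the same intact
# vertices get connected arbitrarily:
garbled_indices = [2, 0, 3, 1, 3, 0]
bad = triangles(vertices, garbled_indices)
assert good != bad  # same vertices, but nonsensical geometry
```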