# I just custom-compiled Ardour 5.3.0 / 6.0pre.

I know an acquaintance, whose name I will protect, who uses “Garage Band” on his Mac, but who has a hard time imagining that many, many similar programs exist for other platforms – and that they must exist, in order for professional musicians also to have access to a great array of such tools.

Of greater relevance is the fact that such software exists under Linux as well – not just on Macs or PCs – and also under Android.

And there is one observation which I would like to add, about what form this takes if users and artists wish to do audio work using Free, Open-Source applications.

Typically, we can access applications that do most of the work that polished, commercial examples offer. But one area in which the free applications do lag behind is the availability of sample packs – aka loops – which some artists will use to construct songs.

If Linux developers were to offer those, they would probably also need to ask for money.

Further, Garage Band has a specific advantage: when such loops are simply dropped into a project, the program knows the tempo at which each loop was playing by default, in addition to which all DAWs have the tempo of the project set and available. Garage Band will automatically time-stretch the loop, to adapt it to the project tempo. Most of the DAW programs I know do not do this automatically.

A common ability the open-source applications do offer, though, is to time-stretch the sample manually after importing it, which can be as easy as shift-clicking on one of the edges of the sample and dragging it.

In order for this to be intuitive, it is helpful if the sample has first been processed with a Beat Slicer, so that the exact size of the rectangle will also snap into place with the timing marks on the project view, and the sample-tempo will match the project-tempo.

So what I felt I needed to do tonight, was install the Beat Slicer named Shuriken, as well as the sound-font converter / editor named Polyphone, on the laptop I name ‘Klystron’. Shuriken was a custom-compile, but Polyphone needed to be installed as a .DEB package, which did not come from the package manager.

Shuriken has the ability to detect tempo and also chop input samples, and then to export those samples as .WAV Files in turn, or as .SFZ Sound Fonts, the latter of which might be a bit tricky to work with. The idea is that the output sound font can then be played via a MIDI sampler. But, most applications expect Sound Fonts to be .SF2 Files, not .SFZ . What I hear about that is that SFZ is good, but poorly supported. So, the application Polyphone seemed like an important tool to add, because it allows us to open an SFZ File, and then to export the Sound Font it contains as an SF2.

When installing the Polyphone package, some anxiety came over me, because this package is actually configured to reinstall dependencies which are already installed. But I felt that this was a relatively safe thing to do, because my KDE tool for doing so will always install the dependencies from the same repositories they came from anyway. Yet, such a misconfiguration of the unsigned package was a bit unsettling.

Klystron still works fully.

My reasoning for installing the ability to turn custom recordings into loops is the fact that, under Open-Source practices, I am not buying loops, and would therefore want to be able to use custom samples as such, provided they have been formatted for this use first.

Otherwise I would have no guarantee that the exact length of an imported sample was a whole number of beats or bars of its music, which in turn would make it awkward to time-stretch the loop using the mouse.
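The arithmetic behind that concern is simple enough to sketch (with example numbers of my own, not taken from any actual project): a loop of a given number of beats, at a given tempo in BPM, should last exactly (beats × 60 ÷ BPM) seconds.

```shell
# How long should a 4-beat loop be, at 120 BPM?
# (Example numbers of my own; any POSIX shell with awk should do.)
BEATS=4
BPM=120
awk -v b="$BEATS" -v t="$BPM" \
    'BEGIN { printf "%.3f seconds\n", b * 60 / t }'
# prints: 2.000 seconds
```

Unless an imported sample's duration works out to such a round figure, snapping its edges to the project grid becomes guesswork.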

Finally, I installed yet another DAW named Ardour, which again, was a custom-compile.

The site that makes Ardour available, asks users to donate money, in order simply to receive the binary package. Yet, because Ardour is under the GPL, users like me may download the source code for free and then custom-compile it – a task which the paid-for version takes off the hands of users.

I configured, built, and installed the project with the commands

```
./waf configure --docs --with-backends=alsa,jack
./waf
su
(...)
./waf install
```

This replaces the usual sequence of commands that I am used to:

```
./configure
make
su
(...)
make install
```

This project-supplied ‘./waf‘ command actually starts a multi-threaded compile by default, on multi-core CPUs.
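If one wants to override that default – say, to keep the laptop's fan quiet – ‘waf‘ accepts the usual ‘-j‘ jobs option. A minimal sketch of my own, assuming a Linux shell with ‘nproc‘ available (and the cap of 8 being an arbitrary example limit):

```shell
# Pick an explicit job count for waf, capped at the number of CPU cores.
# './waf' itself must of course be run from the Ardour source tree.
CORES="$(nproc)"
JOBS=$(( CORES < 8 ? CORES : 8 ))   # cap at 8, an arbitrary example limit
echo "Would run: ./waf -j ${JOBS}"
```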

What I find for the moment is that everything has been a success.

When using Ardour, we invoke Tempo-Stretching of a Region / Sample by clicking on the small tool icon at the top of the application window that puts the program into “Time-Stretching Mode” (where it defaults to “Grab Mode”), and then just clicking normally on the sample in question, and optionally dragging it. Either way, a dialog box opens, which shows us the percentage already stretched by, and which allows us to enter a different percentage as a number, if we would like to.

Near the center of this thin toolbar is also a button, to set the “Snap Mode” to “Grid”.

Perhaps I should also mention that, the way I compiled it, Ardour offers support for DSSI plug-ins. These differ from LADSPA plug-ins in that DSSI plug-ins act as instruments by default, thus receiving MIDI input and outputting sound.

For their support to be compiled in, only the package ‘dssi-dev‘ really needs to be installed from the package manager, which provides the necessary header file. Source code implementing the host belongs to Ardour, and source code implementing the instrument belongs to the plug-in. Both need to include this header file.

When adding a DSSI plug-in to our Ardour project, we select that we wish to add a track, and then in the ensuing dialog, we select a MIDI track, instead of an Audio track. This enables the field which allows us to select an Instrument, where by default it says ‘a-Reasonable Synth’. Instead, detected DSSI Instruments will appear here.

Some confusion can arise, over how to get these virtual instruments to display their GUI. Within the Editor view, all we seem to see belonging to the track, is the Automation button, from which we can select parameters to be controlled by a MIDI controller.

In order to see the full GUI of the instrument, we need to click on the second button in the upper-right corner, which shows us the Mixer view. From there, each track will be shown with a possible set of plug-ins, and because we chose the instrument plug-in at the creation time of the track, the instrument plug-in will also appear logically before any effect plug-ins. By default the instrument will be shown in a different color as well.

Here, we can double-click on the instrument plug-in widget, thus displaying the full GUI for that plug-in.

And, I did not compile my version of Ardour, to have Wine support, for the purpose of loading Windows-VST plug-ins. The closest I have to that are the native LV2 plug-ins.

As an alternative to letting the user create a MIDI track, to be controlled with a MIDI keyboard, the application offers an Import command in the Session Menu, which displays a large dialog box, from which the user can select either an Audio File or a MIDI File, and which will again allow him to associate the MIDI File with an Instrument selection. When this has succeeded, Ardour 5.3.0 will act as a sequencer.

However, some conflict can be expected from certain MIDI Files, which try to sequence multiple instruments…

In my limited experience, those types of MIDI Files are best Imported at the beginning, before most of the project is set up, so that they can result in multiple tracks being created, from which point on the project can be modified.

Further, because in the Mixer view the sequence in which plug-ins are applied to each track can be changed by dragging them, it is also possible to add instrument plug-ins to each track from there; such plug-ins must appear before any effect plug-ins in order to work.

(Update 12/05/2020, 23h50: )

When I glanced back at this posting ‘to remind myself of what I once knew’, I discovered that, as I had left it, the description of how to load instrument plug-ins with Ardour was somewhat confusing:

• First of all, an issue which I’ve always had with Ardour was, that even though I compiled it with ‘ALSA’ support, ALSA support never seems to work – perhaps, because I’m using ‘PulseAudio’. ALSA clients will only work under PulseAudio, if they don’t ask for a specific device, which Ardour does. But oh, well, the ‘JACK’ back-end does work…
• Secondly, even Ardour v6 does not seem to have native ‘DSSI’ support under Linux, only ‘LV2’ and ‘LXVST’ support, the latter for people who want to use their ‘Linux-VST’ Plug-Ins specifically, which are generally not open-source, and which therefore do not install with the package manager. The reason I was obtaining a display of ‘DSSI’ plug-ins was that I additionally tend to install a package called ‘naspro-bridges‘. What this little package does is make instruments that would have been provided as ‘DSSI’, visible to LV2 hosts, such as Ardour.
• Thirdly, many people like the ‘ZynAddSubFX’ synth, but discover that, at least under Debian / Stretch, its ‘LV2’ implementation is broken. Therefore, if one has ‘naspro-bridges‘ installed, then one can install the package ‘zynaddsubfx-dssi‘, and the synth will appear (within Ardour, without crashing it)…
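On a Debian-based system, the combination described in the last two bullets boils down to something like the following (package names as they existed under Stretch; worth re-checking against one's own repositories):

```shell
# Bridge DSSI instruments so that LV2 hosts, such as Ardour, can see them:
sudo apt-get install naspro-bridges
# Then install ZynAddSubFX in its DSSI form, since its LV2 build was broken:
sudo apt-get install zynaddsubfx-dssi
```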

Finally, as I was exploring all this, this evening, I ran into a slight panic, because something obscure that I had done was crashing my ‘PulseAudio’ server – the server I seemed able to switch back to successfully, after having used ‘JACK’. And upon closer observation I found that what was causing my PulseAudio server to crash was that somehow, when I was retesting the available ‘ZynAddSubFX’ synth under the more convenient application ‘LMMS’, which runs natively under ‘PulseAudio’, its configuration had gotten switched to using a 256-sample buffer, which would have been grotesquely short for ‘PulseAudio’.

Somehow, after my experiments with ‘Ardour’ had failed to launch ‘ALSA’-based sessions, every attempt by LMMS to output actual audio with 256-sample buffers would make a horrible noise, and then crash ‘PulseAudio’ (entirely from user-space, without it turning out that I had mis-installed any packages as root). This last effect is understandable, and just changing the settings of ‘LMMS’ back to a 2048-sample buffer resolved the problem.

One fact which I’ve noticed when running ‘ZynAddSubFX’ from within ‘Ardour’ is that I could only open the generic settings window, not the native settings window that the synth would normally have…

(Update 12/06/2020, 0h50: )

As a follow-up, I discovered two additional ways in which the Debian / Stretch computer named ‘Phosphene’, would need to be set up differently from how the Debian / Jessie computer named ‘Klystron’ had been set up in the past:

• Because I was compiling the pre-release of version 6 of Ardour, one available back-end was, in fact, ‘pulseaudio’. Therefore, I was able to recompile, including this back-end, and it works. And
• Under Debian / Stretch, the ‘calf’ plug-ins seem to be broken, specifically, in that they either crash when trying to bring up their GUIs, or just refuse. For that reason, and, to improve the stability of that computer, I uninstalled the ‘calf-plugins‘ package again.
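Concretely, that recompile just meant re-running the configure step with one more back-end named, and then rebuilding (a sketch, mirroring the commands quoted earlier in this posting):

```shell
./waf configure --docs --with-backends=alsa,jack,pulseaudio
./waf
su
(...)
./waf install
```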

(Update 12/07/2020, 14h00: )

My initial reason to explore Ardour again this morning was to find out whether, when Exporting a session to an audio file, it has as a standard feature the option to encode its (linear) 16-bit Pulse-Code-Modulated files with dithering. And the following screen-shot shows that, in fact, it does:

This is a technique which adds a small amount of noise to the 16-bit signal, so that a non-zero probability exists that its least significant bit could change value, due to a virtual, appended 17th or 18th bit (overlapping with the signal before being mixed down to a 16-bit format). The way to read the last setting would be that ‘rectangular dithering’ only offers to add 1 bit of precision, but also only adds small noise, directly at the Nyquist Frequency. If ‘triangular dithering’ is chosen, then 2 virtual bits are being added, but low-amplitude noise is also being added at half the Nyquist Frequency. If ‘shaped noise’ is chosen for the dithering, then 3 virtual bits are being added probabilistically, but the low-amplitude noise is also spread over the spectrum – mainly at the high end.
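In my own notation (not anything quoted from Ardour's documentation), what dithering before truncation amounts to can be sketched as follows:

```latex
% A sketch of dithered quantization, in my own notation:
%   x      : high-precision input sample
%   \Delta : 16-bit quantization step
%   d      : the dither variable, in units of \Delta
y = \Delta \cdot \operatorname{round}\!\left( \frac{x}{\Delta} + d \right)
% Rectangular: d \sim \mathrm{Uniform}[-\tfrac{1}{2}, \tfrac{1}{2})
% Triangular:  d = r_1 + r_2, with independent
%              r_i \sim \mathrm{Uniform}[-\tfrac{1}{2}, \tfrac{1}{2})
%              (giving a triangular density on (-1, 1))
% Shaped:      d additionally high-pass filtered, pushing its power
%              toward the top of the audible spectrum
```

The point of adding d is that the rounding error becomes decorrelated from x, at the cost of slightly more total noise.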

Some official explanations of how shaped noise is formed suggest one thing. But the way I tend to visualize it is that it gets computed similarly to how triangular noise was – namely, by stepping through the 4 possible combinations of 2 added bits in a linear sequence. But next, in order to make sure that the actual, added noise is focussed at the high end of the audible spectrum, the bit-order of this shaped, 3-bit noise, which can have 8 values, can simply be reversed.

The question of how many virtual bits of precision are really being added to the exported signal is not as straightforward to estimate as simply counting the bits that belong to the dithering pattern. And the reason for that is the possibility that the dithering pattern will sometimes overlap with the LSBs belonging to the real, exported signal format. I.e., the power of two at which the dithering pattern is added to the original signal, plus its length in bits, can exceed that of the LSB of the exported signal.

What I have been musing about this feature is that it will be totally lost on an audience that only has low-quality playback technology, because their playback devices cannot guarantee the stability with which the full 16 bits are being played back. But then again, if somebody has been listening to the music with low-quality equipment, then the failure to make out 16, 17 or 18 bits of precision may also be a much weaker defect in their sound, eclipsed by stronger phenomena, such as ultrasound being distorted into audible frequencies, or regular background noise greatly exceeding -96dB, or other problems with the D/A converter turning what should be quantization noise into quantization distortion, etc.. Actually, if the playback device is prone to turning quantization noise from the LSB into quantization distortion, then dithering will actually reduce the degree to which this happens.

(Update 12/07/2020, 14h15: )

If the Artist is having a hard time choosing which dithering pattern to apply, assuming he or she is converting from a 24-bit format to 16-bit, I would suggest that the following two intents are possible:

• The assumption could be that the listener will have high-quality audio equipment, in which case ‘triangular dithering’ might actually be best. It will have a better chance to make the virtual, least significant bit ‘happen’, evoking that at the Nyquist Frequency. Or
• The assumption could be that the listener’s audio equipment is far-gone, in which case ‘shaped dithering’ might be best. It will probably introduce noise at half the Nyquist Frequency with the same amplitude as triangular dithering did, will have greater amplitude in total, will focus the added energy in the higher regions of the spectrum (where a listener is least likely to hear it), but most importantly, is more likely to average out whatever other errors the listener’s playback equipment is introducing. And then, any ability of shaped dithering to make an additional, virtual bit of precision happen is less probable, because it will only be evoked at ¼ the Nyquist Frequency.

(The shaped, 3-bit noise sequence which I visualize, with its 8 values, would then be:)

{0, +4, -3, +1, -1, +3, -2, +2}

Dirk