I know an acquaintance, whose name I will keep private, who uses “Garage Band” on his Mac, but who has a hard time imagining that many, many similar programs exist for other platforms, and that such programs must exist, in order for professional musicians also to have access to a great array of such tools.
Of greater relevance is the fact that such software exists under Linux as well – not just on Macs or PCs – and even under Android.
And there is one observation I would like to add, about what form this takes when users and artists wish to do audio work using Free, Open-Source applications.
Typically, we can access applications that do most of the work that polished, commercial examples offer. But one area in which the free applications do lag behind is the availability of sample packs – aka loops – which some artists use to construct songs.
If Linux developers were to offer those, they would probably also need to ask for money.
Garage Band has a specific advantage: when such loops are simply dropped into a project, the program already has the tempo stored at which each loop plays by default, in addition to which all DAWs have the tempo of the project set and available. Garage Band will automatically time-stretch the loop to adapt it to the project tempo. Most of the DAW programs I know do not do this automatically.
A common ability the open-source applications do offer, though, is to time-stretch the sample manually after importing it, which can be as easy as shift-clicking on one of the edges of the sample and dragging it.
In order for this to be intuitive, it is helpful if the sample has first been processed with a Beat Slicer, so that the exact size of its rectangle will snap into place with the timing marks in the project view, and the sample tempo will match the project tempo.
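The arithmetic behind that snapping is simple enough to sketch. The following is a minimal, hypothetical illustration – the function names, tempos, and durations are my own inventions, not part of any DAW’s actual interface:

```python
# Minimal sketch of the tempo-matching arithmetic described above.
# All names and numbers here are illustrative, not part of any real DAW.

def beats_in_sample(duration_sec: float, sample_bpm: float) -> float:
    """How many beats a sample spans at its own, native tempo.
    If this is not (close to) a whole number, the loop was not
    sliced cleanly, and snapping it to the grid becomes awkward."""
    return duration_sec * sample_bpm / 60.0

def stretch_factor(sample_bpm: float, project_bpm: float) -> float:
    """Factor by which the sample's duration must be multiplied,
    so that material recorded at sample_bpm plays at project_bpm."""
    return sample_bpm / project_bpm

# A 2-second loop recorded at 120 BPM spans exactly 4 beats:
print(beats_in_sample(2.0, 120.0))
# To fit a 140 BPM project, it must be shortened to 120/140 of its length:
print(stretch_factor(120.0, 140.0))
```

Once a loop is known to span a whole number of beats, dragging its edge to the grid amounts to applying exactly this factor.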
So what I felt I needed to do tonight was to install the Beat Slicer named “Shuriken”, as well as the sound-font converter / editor named “Polyphone”, on the laptop I name ‘Klystron’.
Shuriken was a custom-compile, but Polyphone needed to be installed as a .DEB package, which did not come from the package manager.
Shuriken has the ability to detect tempo and also to chop input samples, and then to export those samples in turn as .WAV Files, or as .SFZ Sound Fonts, the latter of which can be a bit tricky to work with. The idea is that the output sound font can then be played via a MIDI sampler. But most applications expect Sound Fonts to be .SF2 Files, not .SFZ . What I hear about that is that SFZ is good, but poorly supported. So the application Polyphone seemed like an important tool to add, because it allows us to open an SFZ File and then export the Sound Font it contains as an SF2.
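For what it’s worth, an .SFZ Sound Font is just a plain-text file that references external .WAV Files. A minimal, hypothetical example of the sort of file a beat slicer might emit could look like this (the file names are invented for illustration):

```
// One region per slice of the original loop,
// each mapped to its own MIDI key:
<region> sample=slice_01.wav key=36
<region> sample=slice_02.wav key=37
<region> sample=slice_03.wav key=38
```

This plain-text nature is part of why SFZ is praised, yet also why support is spotty: every host must parse it itself.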
When installing the Polyphone package, some anxiety came over me, because this package is actually configured to reinstall dependencies which are already installed. But I still felt that this was a relatively safe thing to do, because my KDE tool for doing so will always install the dependencies from the same repositories they came from anyway. Yet such a misconfiguration of the unsigned package was a bit unsettling.
Klystron still works fully.
My reasoning for installing the ability to turn custom recordings into loops is the fact that, under Open-Source practices, I am not buying loops, and would therefore want to be able to use custom samples as such, provided they have been formatted for this use first. Otherwise I would have no guarantee that the exact length of an imported sample was a whole number of beats or bars of its music, which in turn would make it awkward to time-stretch the loop using the mouse.
Finally, I installed yet another DAW named “Ardour”, which again was a custom-compile. The site that makes Ardour available asks users to donate money in order simply to receive the binary package. Yet, because Ardour is under the GPL, users like me may download the source code for free and then custom-compile it – a task which the paid-for version takes off the hands of users.
I configured and built the project with the commands:

./waf configure --docs --with-backends=alsa,jack
./waf
su (...)
./waf install

This replaces the usual sequence:

./configure
make
su (...)
make install

that I am used to.
This project-supplied ‘./waf’ command actually starts a multi-threaded compile by default, on multi-core CPUs.
What I find for the moment is that the whole undertaking was a success.
When using Ardour, we invoke Tempo-Stretching of a Region / Sample by clicking on the small tool icon at the top of the application window that puts the program into “Time-Stretching Mode” (it defaults to “Grab Mode”), and then just normally clicking on the sample in question, optionally dragging it. Either way, a dialog box opens, which shows us the percentage already stretched by, and which allows us to enter a different percentage as a number, if we would like to.
Near the center of this thin toolbar is also a button to set the “Snap Mode” to “Grid”.
Perhaps I should also mention that, the way I compiled it, Ardour offers support for DSSI plug-ins. These differ from LADSPA plug-ins in that DSSI plug-ins act as instruments by default, thus receiving MIDI input and outputting sound.
To allow for their support to be compiled, only the package ‘dssi-dev’ really needs to be installed from the package manager, which includes the necessary header file. Source code implementing the host belongs to Ardour, and source code implementing the instrument belongs to the plug-in. Both need to include this header file.
When adding a DSSI plug-in to our Ardour project, we select that we wish to add a track, and then, in the ensuing dialog, we select a MIDI track instead of an Audio track. This enables the field which allows us to select an Instrument, where by default it says ‘a-Reasonable Synth’. Instead, detected DSSI Instruments will appear here.
Some confusion can arise over how to get these virtual instruments to display their GUIs. Within the Editor view, all we seem to see belonging to the track is the Automation button, from which we can select parameters to be controlled by a MIDI controller.
In order to see the full GUI of the instrument, we need to click on the second button in the upper-right corner, which shows us the Mixer view. There, each track is shown with its possible set of plug-ins, and because we chose the instrument plug-in at the creation of the track, the instrument plug-in will appear logically before any effect plug-ins. By default, the instrument is also shown in a different color. Here, we can double-click on the instrument plug-in widget, thus displaying the full GUI for that plug-in.
And, I did not compile my version of Ardour to have Wine support, for the purpose of loading Windows-VST plug-ins. The closest I have to that are the native…
As an alternative to letting the user create a MIDI track to be controlled with a MIDI keyboard, the application offers an Import command in the Session Menu, which displays a large dialog box from which the user can select either an Audio File or a MIDI File, and which will again allow him to associate the MIDI File with an Instrument selection. When this has succeeded, Ardour 5.3.0 will act as a sequencer.
However, some conflict can be expected from certain MIDI Files, which try to sequence multiple instruments…
In my limited experience, those types of MIDI Files are best Imported at the beginning, before most of the project is set up, so that they can result in multiple tracks being created, from which point on the project can be modified.
Further, because in the Mixer view the sequence in which plug-ins are applied to each track can be changed again by dragging them, it is also possible to add instrument plug-ins to each track from there; these must appear before any effect plug-ins in order to work.
(Update 12/05/2020, 23h50: )
When I glanced back at this posting ‘to remind myself of what I once knew’, I discovered that, as I had left it, the description of how to load instrument plug-ins with Ardour was somewhat confusing:
- First of all, an issue which I’ve always had with Ardour is that, even though I compiled it with ‘ALSA’ support, ALSA support never seems to work – perhaps because I’m using ‘PulseAudio’. ALSA clients will only work under PulseAudio if they don’t ask for a specific device, which Ardour does. But oh well, the ‘JACK’ back-end does work…
- Secondly, even Ardour v6 does not seem to have native ‘DSSI’ support under Linux, only ‘LV2’ and ‘LXVST’ support, the latter being for people who want to use their ‘Linux-VST’ plug-ins specifically, which are generally not open-source, and which therefore do not install via the package manager. The reason I was obtaining a display of ‘DSSI’ plug-ins was that I additionally tend to install a package called ‘naspro-bridges’. What this little package does is make instruments that would have been provided as ‘DSSI’ visible to LV2 hosts, such as Ardour.
- Thirdly, many people like the ‘ZynAddSubFX’ synth, but discover that, at least under Debian / Stretch, its ‘LV2’ implementation is broken. Therefore, if one has ‘naspro-bridges’ installed, then one can install the package ‘zynaddsubfx-dssi’, and the synth will appear (within Ardour, without crashing it)…
Finally, as I was exploring all this this evening, I ran into a slight panic, because something obscure I had done was crashing my ‘PulseAudio’ server, which I seemed to be able to switch back to successfully after having used ‘JACK’. On closer observation I found that what was causing my PulseAudio server to crash was that, somehow, while I was retesting the ‘ZynAddSubFX’ synth under the more convenient application ‘LMMS’, which runs natively under ‘PulseAudio’, its configuration had gotten switched to a 256-sample buffer, which is grotesquely short for ‘PulseAudio’.
Somehow, after my experiments with ‘Ardour’ had failed to launch ‘ALSA’-based sessions, every attempt by LMMS to output actual audio with 256-sample buffers would make a horrible noise and then crash ‘PulseAudio’ (entirely from user-space, without finally revealing that I had mis-installed any packages as root). This last effect is understandable, and just changing the settings of ‘LMMS’ back to a 2048-sample buffer resolved the problem.
One fact which I’ve noticed when running ‘ZynAddSubFX’ from within ‘Ardour’ is that I could only open the generic settings window, not the native settings window that the synth would normally have…
(Update 12/06/2020, 0h50: )
As a follow-up, I discovered two additional ways in which the Debian / Stretch computer named ‘Phosphene’, would need to be set up differently from how the Debian / Jessie computer named ‘Klystron’ had been set up in the past:
- Because I was compiling the pre-release of version 6 of Ardour, one available back-end was, in fact, ‘pulseaudio’. Therefore, I was able to recompile, including this back-end, and it works. And
- Under Debian / Stretch, the ‘calf’ plug-ins seem to be broken, specifically in that they either crash when trying to bring up their GUIs, or just refuse to. For that reason, and to improve the stability of that computer, I uninstalled the ‘calf-plugins’ package again.
(Update 12/07/2020, 14h00: )
My initial reason to explore Ardour again this morning was to find out whether, when Exporting a session to an audio file, it has as a standard feature the ability to encode its (linear) 16-bit Pulse-Code-Modulated files with dithering. And the following screen-shot shows that, in fact, it does:
This is a technology which adds some amount of noise to the 16-bit signal, so that a non-zero probability exists that its least significant bit could change value, due to a virtual, appended 17th or 18th bit (overlapping with the signal before being mixed down to a 16-bit format).
(Update 2/24/2021, 20h20… )
The way to read the last setting would be that ‘rectangular dithering’ adds a linearly distributed, floating-point value, the maximum value of which is just less than the quantization step of the selected output format. If ‘triangular dithering’ is chosen, then a noise value is added which is really just the average between two linearly distributed, pseudo-random values. This results in a well-controlled maximum positive or negative value, but with a triangular probability distribution. If ‘shaped noise’ is chosen for the dithering, then the low-amplitude noise is also redistributed over the spectrum – mainly to the high end – resulting in a Gaussian Distribution.
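As a sketch of the first two settings, and with the quantization step normalized to 1.0, the two noise types could be generated as follows. This is an illustration of the distributions just described, not Ardour’s actual code:

```python
# Illustrative generators for 'rectangular' and 'triangular' dithering,
# with the quantization step normalized to 1.0. Not Ardour's code.
import random

def rectangular_dither(step: float = 1.0) -> float:
    """Uniformly ('linearly') distributed noise, whose peak amplitude
    remains just less than one quantization step."""
    return random.uniform(-step / 2.0, step / 2.0)

def triangular_dither(step: float = 1.0) -> float:
    """The average of two uniform values: the same controlled maximum,
    but with a triangular probability distribution, denser near zero."""
    a = random.uniform(-step / 2.0, step / 2.0)
    b = random.uniform(-step / 2.0, step / 2.0)
    return (a + b) / 2.0
```

The triangular variant has the same worst-case amplitude as the rectangular one, but a smaller variance, since values near zero are more likely.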
I just needed to delete an earlier update, because an initial concept I had of how Noise-Shaped Dithering works differed too much from how it truly works for that initial concept to remain valid. What the true concept of Shaped Dithering is based on is a feedback loop, in which an error value is computed between the quantized output value and the intended input value. This error is added back in to the next tentative output value.
Some amount of actual dithering needs to be added per iteration, to override the degree to which the algorithm would otherwise respond entirely to quantization distortion. And under the assumption that the D/A Converter used to decode the audio again is working correctly, the amount which gets added per iteration is still only supposed to remain just less than one quantization step.
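The feedback loop just described can be sketched in a few lines. What follows is a toy, first-order version of the principle, under the stated assumption that the dither amplitude stays just under one quantization step; it is not Ardour’s actual implementation:

```python
# Toy, first-order noise-shaping quantizer, illustrating the feedback
# loop described above. Not Ardour's actual implementation.
import random

def noise_shaped_quantize(samples, step=1.0):
    out = []
    error = 0.0  # quantization error carried over from the previous sample
    for x in samples:
        # Feed the previous error back in, then add dither whose peak
        # amplitude remains just less than one quantization step.
        tentative = x - error + random.uniform(-step / 2.0, step / 2.0)
        q = step * round(tentative / step)  # snap to the output grid
        error = q - (x - error)  # this sample's error, fed forward
        out.append(q)
    return out
```

Averaged over many samples, the output tracks the input closely, because each sample’s quantization error is pushed into the next one; what remains is noise concentrated toward the high end of the spectrum.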
One possible outcome of my own thought process which I generally don’t like is when my equations differ from official solutions. When this happens, I try to find any non-trivial reasons. The following WiKiPedia article explains Noise-Shaped Dithering perfectly:
And so, a conclusion which I need to reach is that, in order for what the WiKiPedia article describes to become possible – that being, the addition of a considerable number of virtual bits past the LSB of the output format – the dithering noise which is added must consist only of bits less significant than the LSB of the output format. In that situation, their equation is also correct.
According to my recent communication with Ardour devs, their application applies its Dithering according to the same, somewhat standard assumption of properly-working D/A Converters. But the idea persists in my mind that other devs, such as those behind “Audacity”, may apply it at a peak amplitude which overlaps with the LSBs of the output format. If developers do that, the equation which I suggested above seems more correct. And in that case, the results which one obtains from the feedback loop are less spectacular: one then merely obtains from the Noise Shaping that, at frequencies below half the Nyquist Frequency, the per-frequency amplitudes are reduced.
Of course, if some Sound-Editing Application did that, it would be for a reason, that perhaps being pessimism over whether cheap D/A Converters stated to conform to a 16-bit norm actually decode all 16 bits accurately. If they do not, then such dithering might exist as an attempt to average out the distortions which those D/A Converters introduce.