VOGONS


Soundfont unknown formulas?

First post, by superfury

Rank: l33t++

Currently my soundfont-based renderer uses the following formula to calculate any factor (Hz, speed multipliers for specified frequencies etc.) in Soundfont rendering:
2^(cents/1200)

Is that correct for both absolute and relative cents, as the 2.0 specification calls them?

What is even the difference between absolute and relative cents? Aren't they both applied to the same result in the same way, with relative cents simply added to a static absolute-cents value? So 500 absolute + 200 relative = 700 absolute cents? Or does relative change the absolute result by a cent-based multiplication of the absolute-cents result, i.e. absolute speed * relative speed?
The modulators are always simply added to the generators, aren't they?
What about linked modulators? How do they work?

Author of the UniPCemu emulator.
UniPCemu Git repository
UniPCemu for Android, Windows, PSP, Vita and Switch on itch.io

Reply 1 of 20, by superfury


I'm just wondering. Are the splits of an instrument layer (containing the ibag range for said layer) capable of rendering multiple splits at the same time for the same note?
Say:
IBag 1 filters the requested note
IBag 2 also filters the requested note
Both belonging to the same Instrument, PBag and Preset.

Does the synth simply play both at the same time, on two voices?


Reply 2 of 20, by superfury


Currently my MIDI synthesizer's formulas are based on:
https://basicsynth.com/uploads/SF2-DLS.pdf

Is that correct to use for the normal floating-point-sample MIDI soundfont-based synthesizer I'm emulating?

The concave/convex curves etc. are implemented as follows.
Main call using the concave/convex/linear etc. algorithms:

	//Now, apply type, polarity and direction!
	type = ((oper >> 10) & 0x3F); //Type!
	polarity = ((oper >> 9) & 1); //Polarity!
	direction = ((oper >> 8) & 1); //Direction!

	if (direction) //Direction is reversed?
	{
		i = 1.0f - i; //Reverse the direction!
	}

	switch (type)
	{
	default: //Not supported?
	case 0: //Linear?
		if (polarity) //Bipolar?
		{
			i = (i * 2.0f) - 1.0f; //Convert to a range of -1 to 1 for the proper input value!
		}
		//Unipolar is left alone (already done)!
		break;
	case 1: //Concave?
		if (polarity) //Bipolar?
		{
			if (i >= 0.5f) //Past half? Positive half!
			{
				i = MIDIconcave((i - 0.5f) * 2.0f); //Positive half!
			}
			else //First half? Negative half!
			{
				i = -MIDIconcave((0.5f - i) * 2.0f); //Negative half!
			}
		}
		else //Unipolar?
		{
			i = MIDIconcave(i); //Concave normally!
		}
		break;
	case 2: //Convex?
		if (polarity) //Bipolar?
		{
			if (i >= 0.5f) //Past half? Positive half!
			{
				i = MIDIconvex((i - 0.5f) * 2.0f); //Positive half!
			}
			else //First half? Negative half!
			{
				i = -MIDIconvex((0.5f - i) * 2.0f); //Negative half!
			}
		}
		else //Unipolar?
		{
			i = MIDIconvex(i); //Convex normally!
		}
		break;
	case 3: //Switch?
		if (i >= 0.5f) //Past half?
		{
			i = 1.0f; //Full!
		}
		else //Less than half?
		{
			i = 0.0f; //Empty!
		}
		if (polarity) //Bipolar?
		{
			i = (i * 2.0f) - 1.0f; //Convert to a range of -1 to 1 for the proper input value!
		}
		//Unipolar is left alone (already done)!
		break;
	}
	return i; //Give the result!
}

Concave/convex support:

//val needs to be a normalized input! Performs a concave from 1 to 0!
float MIDIconcave(float val)
{
float result;
if (val <= 0.0f) //Invalid?
{
return 0.0f; //Nothing!
}
if (val >= 1.0f) //Invalid?
{
return 1.0f; //Full!
}
result = 1.0f-val; //Linear!
result = (result * result); //Squared!
result = (-20.0f / 96.0f) * log10f(result); //Scale so that 96 dB corresponds to 1.0!
return result; //Give the result!
}

//val needs to be a normalized input! Performs a convex from 0 to 1!
float MIDIconvex(float val)
{
return 1.0f - (MIDIconcave(1.0f - val)); //Convex is concave mirrored horizontally on the input, while also mirrored on the output!
}

Is that correct? The input i is a normalized 0.0-1.0 value from the source that's being converted.


Reply 3 of 20, by superfury


What modulator is used for scaling the ADSR's times by key number, like with keynumToVolEnvDecay/Hold and keynumToModEnvDecay/Hold? The DLS document mentions it, but which destination does the modulator use?


Reply 4 of 20, by superfury


OK. After some testing with Viena, modifying the soundfont to use specific values to check whether my app calculates and uses them correctly, I can now at least say that the attack is indeed correct behaviour (even though it sounds delayed for some odd reason). I don't know if the curve is correct, though. It's using a convex curve (actually the same curve generated for modulators, via the same function call, just called from different locations).

Now, looking further, it looks like the volume controls are somehow off? Maybe the formula is incorrect, or the attenuation is calculated incorrectly?


Reply 5 of 20, by superfury


After fixing the volume vs modulation envelope scaling (one uses 100% (0.1 steps) and the other 1440 cB scaling; it was set to 960 cB), the volume seems better.

Though I seem to hear (reverb?) echoes on the opposite side of the audio's source? Or does the original music have some weird echo? It seems to happen to both left- and right-panned audio.


Reply 6 of 20, by superfury


Does anyone know how the volume envelope works exactly, in the context of the 1440 cB range, the DAHDSR conversions and the sustain level? How is the sustain attenuation (in cB) calculated from the other parts (the AHD/R top and bottom levels) and the sustain level, which is supplied in dB of attenuation (Soundfont documentation) or inverted (according to Viena)?


Reply 7 of 20, by superfury


Hmmm... I've fixed modulator recursion, made preset/instrument generators and modulators properly additive instead of last-only (except when matching the first found modulator/generator), given priority to global over local split generators/modulators (instead of local over global), and added support for multiple modulator outputs affecting the same split simultaneously (for example, LFO to volume and velocity to volume are now added together instead of one replacing the other). After all that, I notice something weird: some instruments seem to play with reversed note tones. That would mean their modulators/generators to pitch are somehow reaching negative values, giving "key * -x cents" instead of the proper "key * x cents"? So the summed modulators become so low that the cents go too negative?

How do global and local splits interact with default modulators?
Right now, they're basically added together, with global splits overruling local splits.

Edit: Handling it as documented seems to fix some of the volume issues.
Some sounds still don't play somehow? Or are muted somehow?


Reply 8 of 20, by superfury


Hmmm... After some testing I found that the LFO-to-pitch handling is somehow not behaving as intended?
It seems to distort the pitch incorrectly (producing a weird sound) instead of bending it up and down?

It simply takes the value of the applied LFO (or a stored copy of it for chorus duplicates) and adds the pitch amount (which is in cents) to the sample's new playback cents (from which the current pitch speed is calculated and converted to a speedup for the samples to render).


Reply 9 of 20, by superfury


Does anyone know the exact behaviour of the soundfont ADSR generators and how the ADSR (AHDSR actually) functions with regard to note-on and note-off messages?

I notice that sounds with a too-long decay, for example, keep playing too loud when a note-off is received?


Reply 10 of 20, by RetroGamer4Ever

Rank: Oldbie

If anyone knows, the FluidSynth gearheads probably would, since the SF community and userbase is pretty well anchored around them.

Reply 11 of 20, by superfury


Just a quick question, though.

How are the volume envelopes scaled with attenuation? Are they simply added to the attenuation from the initial attenuation generator?

Because if the sustain level is below the volume envelope's maximum (which is 1.0), it lowers the volume of the note. Is that supposed to happen? Or is the volume envelope itself scaled up so that sustain matches the initial attenuation's volume level (with attack/decay going above it by the inverse of the sustain level, i.e. 1/sustain)?


Reply 12 of 20, by superfury


Another related question: how do cents (for example frequencies, where the factor is multiplied by a base frequency) and timecents (where the factor is multiplied by 1 second to obtain seconds) differ?
Right now, I calculate both using the very same formula (2 to the power of x/1200). The only difference in my emulation is that one result is multiplied by a base frequency (pitch, LFO frequency and the like) and the other by the sample rate (to obtain the time it takes in samples, like for the envelopes).

Edit: Just improved it a bit.
The key number to hold time scale is properly applied to the hold time instead of using the decay scale incorrectly.

Also, I've adjusted the key number input to the hold/decay scaling to properly be the amount of notes below 60 (as documented), instead of the amount of notes above 60 (which is what "Notes on Implementing SF2/DLS Sound Synthesis" by Daniel R. Mitchell uses). His documentation says:

Scaling can also be made relative to Middle C:
decay = decay + ((key – 60) * scale);

The key - 60 is clearly incorrect, as the documentation on the scaling in the Soundfont 2.04 specification clearly states:


This is the degree, in timecents per KeyNumber units, to which the hold time of the
Volume Envelope is decreased by increasing MIDI key number. The hold time at
key number 60 is always unchanged. The unit scaling is such that a value of 100
provides a hold time which tracks the keyboard; that is, an upward octave causes the
hold time to halve. For example, if the Volume Envelope Hold Time were -7973 =
10 msec and the Key Number to Vol Env Hold were 50 when key number 36 was
played, the hold time would be 20 msec.

So that means the key input is the amount the key is below 60, not above (thus "60 - key" instead of "key - 60"). I quickly put the example values into a calculator and did the math: the soundfont documentation is indeed correct on that one (otherwise the math wouldn't check out with the resulting 20 ms (actually 19 ms when rounded down to whole ms, though)).

When I look at it, interestingly enough, the text in the Soundfont 2.04 generator descriptions is copy-pasted from the hold description to the decay description for the modulator envelope (keynumToModEnvHold and keynumToModEnvDecay). The only wording that changes is "which tracks the keyboard" becoming "that tracks the keyboard". Thus it mentions the incorrect sources and destinations for the decay version.
Then, looking at the volume envelope versions, I see the exact same errors (the which vs that difference, and the hold parameter being mentioned in the EnvDecay explanation), although the people that wrote the documentation at least properly replaced the mod/Modulator vs vol/Volume references there.


Reply 13 of 20, by superfury


OK. Did some testing with the Viena Soundfont editor, using the wood block (instrument 115 of the AWE ROM) in loop mode as a testing instrument.

Apparently, the ADSR switches to the release phase immediately when a note is released. It doesn't matter if it's in the delay, attack, hold, decay or sustain phase at that moment: when a note is released, the release phase of the envelope immediately kicks in. And yes, that's even during the attack phase (even when it's still very quiet), or even during the delay phase, if there is one (in that case it will simply terminate the note).

Edit: Managed to get the envelopes themselves working correctly.
One issue remaining is that the concave/convex function (convex is used here, which is based on the concave function) is returning an invalid value (it's returning negative values when it shouldn't).


Reply 14 of 20, by superfury


OK. Managed to improve it somewhat.
Fixed some issues with the envelopes themselves.

Then I changed the concave/convex functions to be as follows (convex is just illustrated as it was; concave is changed in the latest version):

//val needs to be a normalized input! Performs a concave from 1 to 0!
float MIDIconcave(float val)
{
	double result;
	if (val <= 0.0f) //Invalid?
	{
		return 0.0f; //Nothing!
	}
	if (val >= 1.0f) //Invalid?
	{
		return 1.0f; //Full!
	}
	result = 1.0 - (double)val; //Reversed!
	result *= result; //Squared!
	result = (20.0 / 96.0) * log10(1.0 / result); //Scale so that 96 dB corresponds to 1.0!
	result = LIMITRANGE(result, 0.0, 1.0); //Limit the range!
	return (float)result; //Give the result!
}

//val needs to be a normalized input! Performs a convex from 0 to 1!
float MIDIconvex(float val)
{
	return 1.0f - (MIDIconcave(1.0f - val)); //Convex is concave mirrored horizontally on the input, while also mirrored on the output!
}

I still seem to get a value larger than 1 when numbers very close to 0 are input to MIDIconvex (or very close to 1 for MIDIconcave), which is why the LIMITRANGE is there. Any idea why this happens?


Reply 15 of 20, by superfury


I'm just wondering now...

In what way do the default, global and local modulators combine? Are they all added together? Do default modulators only exist at the local level? What about global modulators: are they overruled by local modulators, or added instead? What about modulators with the same destination, source etc. but a different amount value? How do global vs local vs default modulators get stacked?


Reply 16 of 20, by superfury


OK. Adjusted the modulator priorities and overrides a bit.
Now local modulators override defaults (not using them; the defaults are disabled by reporting themselves as skipping to the default modulator). A local modulator (without checking whether defaults exist) will also override global modulators of the same type that match. Basically, local disables global, and both disable the default modulators when matched against the three identity fields.
Also fixed the default field comparisons after applying a default field (if any), so out-of-order fields now check the default override properly (previously the default context was lost).

That should fix the modulator-based issues in theory.

I've also modified the Note-On Velocity to Filter Cutoff modulator to have an sfModAmtSrcOper of D02h, which should match the AWE32 soundfont handling of its overrides (which requires that value to override it). The official documentation says either 0 (2.04) or C02h (2.01), so apparently both are wrong, at least according to fluidsynth.

So now modulators overriding that modulator will properly override it to 0, instead of adding 0 to it.


Reply 18 of 20, by superfury


Improved the MIDI player too. Changed the tempo setting to be per-channel instead of a shared setting.
All settings that apply to a channel (speed, counters etc.) are no longer shared between channels.
Some detected settings are copied over to the next channel by default with a type 2 MIDI file, though (multiple type 1 files combined, performed at the end of a channel's stream).


Reply 19 of 20, by superfury


Hmmm... When playing a fast song I get 160 BPM in my MIDI player, but 129 BPM using Windows Media Player?

Am I using the default BPM formulas correctly?

DOUBLE calcfreq(uint_32 tempo, HEADER_CHNK *header, MIDCHANNEL *channel)
{
	DOUBLE speed;
	DOUBLE frames;
	DOUBLE PPQN;
	byte subframes; //Ticks per frame (SMPTE)!
	word division;
	division = header->timedivision; //Time division from the header!

	if (division & 0x8000) //SMPTE?
	{
		frames = (byte)((~division >> 8) + 1); //Frames! 29=29.97. Stored negated!
		if (frames == 29) //Special case?
		{
			frames = 29.97f; //Special cased!
		}
		subframes = (byte)(division & 0xFF); //Subframes! Ticks per frame!
		subframes = subframes ? subframes : 1; //Use subframes, if set!
		channel->frames = frames;
		channel->subframes = subframes; //Apply subframes too (already made non-zero)!
		PPQN = (frames * (DOUBLE)subframes); //Use (sub)frames per quarter note! Pulses per beat.
	}
	else
	{
		PPQN = (DOUBLE)(MAX(division, 1)); //Divide by the PPQN (Pulses Per Quarter Note) to get the amount of us/pulse!
		//Speed is now the amount of pulses per beat!
	}

	//tempo = us per quarter note
	speed = (DOUBLE)tempo; //Length of a quarter note in us!
	if (!speed) speed = 1.0f; //Something at least!
	speed /= PPQN; //us per tick!
	speed = 1000000.0 / speed; //1,000,000 us per second divided by us per tick = ticks per second!

	//We're counting in ticks!
	return speed; //Ticks per second!
}

OPTINLINE void updateMIDTimer(HEADER_CHNK *header, MIDCHANNEL *channel) //Request an update of our timer!
{
	if (calcfreq(channel->activetempo, header, channel)) //Valid frequency?
	{
#ifdef IS_LONGDOUBLE
		channel->timing_pos_step = 1000000000.0L / (DOUBLE)calcfreq(channel->activetempo, header, channel); //Set the counter timer!
#else
		channel->timing_pos_step = 1000000000.0 / (DOUBLE)calcfreq(channel->activetempo, header, channel); //Set the counter timer!
#endif
	}
	else
	{
		channel->timing_pos_step = 0.0; //No step to use?
	}
}

timing_pos_step is the time (in nanoseconds) that each tick is supposed to take (the delta time between events is multiplied by this).
Are the formulas correct?
