author    Daniel Silverstone <email@example.com>  2019-05-12 16:56:03 +0100
committer Daniel Silverstone <firstname.lastname@example.org>  2019-05-12 16:56:03 +0100
Diffstat (limited to 'firmware')
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/firmware/INTERNALS.md b/firmware/INTERNALS.md
index d9bbfe2..80f1314 100644
@@ -72,16 +72,17 @@ xoring the data together. The running average of those estimations provides us
with an idea of whether or not the generators are tending toward an extreme
or correlating with each other.
-If the total number of shannons estimated by this process is less than the hash
-size of the mixing function then we designate that block as failed and skip the
-following stages, instead waiting for the next interrupt.
+Next we perform debiasing on each of the two generators independently. This
+means that we turn a bit pair of 0b00 or 0b11 into nothing, 0b01 into 1 and
+0b10 into 0. Those bits are then aggregated back into bytes as and when they
+are available. Whenever a byte is available, its entropy is estimated and
+the byte is mixed into the pool, which is credited with the entropy estimate.
-We then mix the full 128 bytes of data from the two generators into our mixing
-function, crediting it with the entropy estimates generated from the first
-stage processing. The maximum amount of entropy which could be credited is
-therefore 1024 shannons during this stage of processing. Providing that the
-total is greater than the hash size, we're OK and we will claim the hash size
-of shannons as we move on.
+When *each* of the generators has contributed *at least* half of the hash
+function's size in entropy, we finalise the mix and move on to the next instance.
+If any of the above entropy estimates begins to falter, we lock the key out
+and shut down the generators.
## Flowing data into the FIPS checks
@@ -89,7 +90,8 @@ Once the data leaves the hashing function attached to the SPI DMA buffers, we
have to gather it together into a FIPS 140-2 sized buffer for validation. This
process requires that we acquire 20,000 bits of data which, with a 128 bit
hashing function, will effectively be represented by a 2512 byte buffer (157 hashes)
-of which we will use about 156 and a half.
+of which we will use about 156 and a quarter. FIPS 140-2 operates on 2500 byte
+buffers so that should all be perfectly good.
Ideally we'll fit a pair of these buffers into RAM, though we accept that it's
possible we won't be able to. Initial estimates suggest we'll manage it. The