From 68eb14fdf6debf1e26921a1b2dddf34dbd031471 Mon Sep 17 00:00:00 2001
From: Joey Hess
Date: Tue, 13 Sep 2016 22:15:18 -0400
Subject: use less expensive hash for proof of work

The server has to run the hash once to verify a request, so a hash that
took 4 seconds could make the server do too much work if it's being
flooded with requests. So, made the hash much less expensive.

This required keeping track of fractional seconds. Actually, I used
Rational for them, to avoid most rounding problems. That turned out
nicely.

I've only tuned the proofOfWorkHashTunable on my fanless, overheating
laptop so far. It seems to be fairly reasonably tuned, though.
---
 Types.hs | 3 ---
 1 file changed, 3 deletions(-)

(limited to 'Types.hs')

diff --git a/Types.hs b/Types.hs
index e129ea3..2ab5d6c 100644
--- a/Types.hs
+++ b/Types.hs
@@ -61,6 +61,3 @@ data SecretKeySource = GpgKey KeyId | KeyFile FilePath
 -- A gpg keyid is the obvious example.
 data KeyId = KeyId B.ByteString
 	deriving (Show)
-
-data BenchmarkResult t = BenchmarkResult { expectedBenchmark :: t, actualBenchmark :: t }
-	deriving (Show)
--
cgit v1.2.3
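The commit message mentions tracking fractional seconds with Rational to avoid rounding problems. Below is a minimal sketch of that idea, not the actual keysafe code: the `Seconds` newtype and `halve` function are hypothetical illustrations of why an exact Rational representation behaves better than a floating-point one under repeated scaling.

```haskell
module Main where

import Data.Ratio ((%))

-- Hypothetical type wrapping Rational; the real module's
-- representation may differ.
newtype Seconds = Seconds Rational
	deriving (Show, Eq, Ord)

-- Scaling a duration stays exact with Rational, unlike Double,
-- which can accumulate rounding error over repeated operations.
halve :: Seconds -> Seconds
halve (Seconds r) = Seconds (r / 2)

main :: IO ()
main = print (halve (Seconds (1 % 10)))
```

Because Rational stores an exact numerator and denominator, halving one tenth of a second yields exactly 1/20, with no binary floating-point approximation involved.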