We'd like to thank the following two people for helping with this blog post by answering our questions:

Benjamin Wesolowski, who laid a lot of the theoretical groundwork for verifiable delay functions.

Bram Cohen, co-founder and CEO of Chia Network (and also known for being the author of the BitTorrent protocol). The Chia blockchain is currently the only major project to employ verifiable delay functions in practice.
Intro
This is the second post in a two-part series on different approaches to on-chain randomness.
For many different purposes, the blockchain space needs a trustless, general-purpose source of randomness. In the first post, we saw that almost all sources of general on-chain randomness that are currently in use have fundamental flaws (of varying severity). So how can we fix this? In this post, we focus on employing Verifiable Delay Functions (VDFs) for on-chain randomness. We describe what VDFs are and why it's not clear they even exist. We also discuss one of the most promising candidates for a VDF and examine what difficulties an on-chain implementation of a scheme using that VDF candidate would face. Finally, we also spend some time examining a VDF implementation that is already being used in practice, namely in the consensus algorithm of the Chia blockchain.
Be warned: This post is a technical deep-dive into the theory and implementation of VDFs. It dissects concepts down to the nasty details that are inherent in cutting-edge research. However, we are convinced it's worth going all the way.
WTF is a VDF?
Let's begin by defining what a Verifiable Delay Function (VDF) is.
Informally, a VDF is a conjectured cryptographic primitive that allows anyone to quickly verify that a so-called prover has spent a certain amount of time computing something from an input they were given.
Slightly more formally, a VDF is a function whose evaluation involves a large number of sequential steps, but whose result can be verified in a far smaller number of steps.
Very formally, using a (slightly modified) definition from this paper, a VDF scheme for a given security parameter and delay parameter $t$ consists of an evaluation algorithm $Eval$, which maps an input $x$ to an output $y$ together with a proof $π$, and a verification algorithm $Verify(x,y,π)$, which accepts or rejects.
A VDF scheme must satisfy the following properties. It should be:

- sequential (basically: $Eval$ is hard to compute): Anyone can compute $y=Eval(x)$ in $t$ sequential steps, but not even an adversary that has a polynomial number of processors available can distinguish $y$ from an element picked randomly from the output range in significantly fewer steps. (This is a paranoid cryptographer's way of saying "Nobody can compute the output of $Eval$ much faster than with the obvious approach".)
- efficiently verifiable (basically: $Verify$ is very easy to compute): $Verify$ runs significantly faster than $Eval$.
- unique (basically: Fooling the verifier is computationally hard): For any input $x$, it is computationally hard to find values $y$ and $π$ such that $Verify(x,y,π)=’true’$ but $y\neq Eval(x)$.
A VDF scheme should maintain these properties even if an attacker is granted a polynomial amount of time for precomputation.
Now that's quite the definition. What's the main takeaway from this section? For a VDF, when we give someone an input, they will need some time to compute the corresponding function output, but we will be able to verify that it's the correct output very quickly.
There are candidates for VDF schemes which have not been broken yet (we'll see one of them in detail later), but much like it's unknown whether strong public key cryptography exists in general, it's not known whether a VDF scheme satisfying all the properties defined above exists. The difference is that there has been significant research into breaking the most common public key cryptography schemes, while VDFs are a relatively new field.
Supplementing Our Commit-and-Reveal Scheme With a VDF
Now, assuming a VDF exists, let's see how we could employ it.
In the last post, we saw a commit-and-reveal scheme. It worked as follows: In phase one, all $k$ participants commit to a random input $a_{i}$ of their choosing (e.g., by submitting its hash on-chain); in phase two, they reveal their inputs, and the final result is computed from all revealed inputs.
We saw that this scheme is almost perfect and guarantees true randomness as long as at least one participant is honest and inputs truly random data.
Unfortunately, it had the fundamental flaw that participants can refuse to reveal their inputs and, by doing so, influence the result. We're now in a position to fix this flaw by employing VDFs.
The Commit-and-Reveal Scheme Augmented with a VDF
The scheme we construct has three phases. For simplicity, we denote the VDF output for an input $r$ by $F(r)$ and the corresponding proof by $π$.
(Note that the following scheme is an example we cooked up ourselves. It has not been subjected to due scrutiny, so be very careful when employing this in practice. We'll see a different scheme that is currently being used in practice later.)

- In the first phase, each of the $k$ participants chooses a random VDF starting value $r_{i}$ for themselves and computes $F(r_{i})$. They then take their random input $a_{i}$ and encrypt it using a key $K(F(r_{i}))$ derived from $F(r_{i})$, obtaining $Enc_{K(F(r_{i}))}(a_{i})$.
- In the second phase, each participant $i$ commits $r_{i}$ and $Enc_{K(F(r_{i}))}(a_{i})$ to the on-chain contract. These values are now public.
- In the third phase, anyone can input $(i,F(r_{i}),π_{i})$ to the on-chain contract. This essentially says "I know the VDF result for participant $i$: it's $F(r_{i})$, and the proof for that is $π_{i}$". The contract can then verify that $Verify(r_{i},F(r_{i}),π_{i})=’true’$ and, if so, calculate $Dec_{K(F(r_{i}))}(Enc_{K(F(r_{i}))}(a_{i}))=a_{i}$.
As with the old commit-and-reveal scheme, the result is then computed from all decrypted inputs $a_{i}$, e.g., by hashing their concatenation.
Some notes on each of the phases:
Phase 1: The length of this phase can be extended at will, and indeed the time period must be long enough such that each participant can compute $F(r_{i})$ for their chosen $r_{i}$ before the phase ends.
Phase 2: As soon as participant $i$ commits $r_{i}$, anyone can start computing $F(r_{i})$. This phase must therefore be short relative to the delay parameter $t$, so that nobody can compute another participant's $F(r_{j})$, decrypt their input, and adapt their own commitment before the phase ends.
Phase 3: Ideally, participant $i$ submits $F(r_{i})$ and $π_{i}$ themselves, since they already computed these values in phase one. But if they refuse, anyone else can compute and submit $F(r_{i})$ in their place, which is exactly what eliminates the refuse-to-reveal attack.
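To make the flow of the three phases concrete, here is a toy end-to-end simulation. Everything here is illustrative: the "VDF" $F$ is just repeated squaring modulo a small composite whose factorization is public (so it is not actually secure), and the keystream cipher is a stand-in for a real symmetric scheme.

```python
import hashlib

# Toy walk-through of the three phases. The "VDF" below is repeated squaring
# modulo a small composite -- NOT secure, since its order is easy to compute --
# and the XOR keystream cipher is purely illustrative.
N = 1000003 * 1000033   # stand-in modulus; a real scheme needs unknown order
T = 1 << 10             # delay parameter: number of sequential squarings

def F(r: int) -> int:
    y = r % N
    for _ in range(T):                 # the slow, sequential part
        y = y * y % N
    return y

def K(y: int) -> bytes:                # key derivation from the VDF output
    return hashlib.sha256(y.to_bytes(16, "big")).digest()

def xor_cipher(key: bytes, msg: bytes) -> bytes:
    # Toy cipher (XOR is its own inverse); only for len(msg) <= 32.
    return bytes(m ^ k for m, k in zip(msg, key))

# Phases 1+2: each participant i picks r_i, computes F(r_i), and commits
# (r_i, Enc_{K(F(r_i))}(a_i)) to the contract.
inputs = [b"input-a", b"input-b", b"input-c"]
commitments = [(7 + i, xor_cipher(K(F(7 + i)), a))   # r_i random in practice
               for i, a in enumerate(inputs)]

# Phase 3: anyone can recompute F(r_i) from the public r_i and decrypt, so
# no participant can withhold their input.
revealed = [xor_cipher(K(F(r)), ct) for r, ct in commitments]
result = hashlib.sha256(b"".join(revealed)).hexdigest()
```

Note how phase three needs no cooperation from the committer: the ciphertext and $r_{i}$ are public, so the decryption key is merely a (slow) computation away for everyone.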
Analyzing the Scheme
Assuming certain security assumptions (that we will discuss later in this post), it should be true that no matter what input $a_{i}$ a malicious participant commits, they cannot predict or influence the final result.
Note that the participant does not need to have committed the correct value: whatever the decryption $Dec_{K(F(r_{i}))}$ of their committed ciphertext yields is simply used as their input $a_{i}$, so committing garbage neither stalls the protocol nor lets them bias the result.
Indeed, note that at the end of phase two, when all participants have committed their inputs, the output should be entirely deterministic. As long as there exists at least one person that bothers to reveal any unrevealed inputs, nobody has any control over the set of inputs that are used to compute the result. Hence, as we saw in the discussion of the commit-and-reveal scheme in the last post, if there exists some participant who chose their input honestly at random, the final result is truly random.
Other Schemes
It should be noted that there are other protocols based on VDFs. The one we described above is simply the natural extension of the commit-and-reveal scheme we saw in the last post. Later in this post, we'll discuss a completely different architecture, currently used by the Chia blockchain, that needs significantly fewer VDF evaluations.
A Promising VDF Candidate: The Wesolowski Construction with Class Groups
Now that we've seen how to employ Verifiable Delay Functions to construct what seems like the perfect scheme for on-chain randomness, let's look at a promising candidate for such a VDF. Keep in mind that most research into VDFs is very recent, and hence this construction has likely not been subjected to the amount of scrutiny one might wish for.
The General Construction for Groups of Unknown Order
The following is a construction proposed in a paper by Benjamin Wesolowski (see also this paper for a more concise overview of the construction).
The scheme uses a group $G$ of unknown order together with a hash function $H$ mapping input strings to group elements, and a delay parameter $t$.
The $Eval$ function maps an input $x$ to $y=g^{2^{t}}$, where $g=H(x)$, computed by $t$ sequential squarings of $g$. From $(x,y)$, a random prime $ℓ$ is then derived via the Fiat-Shamir heuristic, and the prover computes the proof $π=g^{\lfloor 2^{t}/ℓ\rfloor}$.
The $Verify$ function recomputes $g=H(x)$ and $ℓ$, calculates $r=2^{t} \bmod ℓ$ (which takes only about $\log t$ multiplications modulo $ℓ$), and accepts if and only if $π^{ℓ}g^{r}=y$.
Note that the verifier obviously accepts outputs that the evaluator algorithm produces, since we can rewrite $π^{ℓ}g^{r}=g^{ℓ\lfloor 2^{t}/ℓ\rfloor+(2^{t} \bmod ℓ)}=g^{2^{t}}=y$.
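As a concrete (and deliberately insecure) illustration, here is a toy transcription of the evaluate/verify pair. It uses a small composite modulus whose factorization is public, so it provides no actual delay guarantee; a real deployment needs a group of genuinely unknown order. All parameter sizes and helper names are our own toy choices.

```python
import hashlib

# Toy Wesolowski evaluate/verify over (Z/NZ)* with a public modulus.
# Insecure by construction; the point is only to show the mechanics.
N = 1000003 * 1000033    # toy modulus; order easy to compute, hence insecure
T = 64                    # delay parameter: number of sequential squarings

def hash_to_group(x: bytes) -> int:
    return int.from_bytes(hashlib.sha256(x).digest(), "big") % N

def hash_to_prime(x: bytes, y: int) -> int:
    # Fiat-Shamir: derive a challenge prime from (x, y). Real implementations
    # sample a large prime; we take the first odd prime >= a small hash value.
    h = int.from_bytes(hashlib.sha256(x + y.to_bytes(32, "big")).digest(),
                       "big") % 10**6
    def is_prime(n):
        return n > 1 and all(n % i for i in range(2, int(n**0.5) + 1))
    l = h | 1
    while not is_prime(l):
        l += 2
    return l

def evaluate(x: bytes):
    g = hash_to_group(x)
    y = g
    for _ in range(T):               # the sequential part: T squarings
        y = y * y % N
    l = hash_to_prime(x, y)
    pi = pow(g, (1 << T) // l, N)    # proof: g^floor(2^T / l)
    return y, pi

def verify(x: bytes, y: int, pi: int) -> bool:
    g = hash_to_group(x)
    l = hash_to_prime(x, y)
    r = pow(2, T, l)                 # 2^T mod l, cheap even for huge T
    return pow(pi, l, N) * pow(g, r, N) % N == y
```

The verifier's work is dominated by two exponentiations with exponents of size roughly $ℓ$, independent of $t$, which is exactly the efficient-verifiability property.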
The above shows that this scheme satisfies the efficient verifiability property. For it to be a full VDF scheme, we would also have to show that it has the sequentiality and uniqueness property. However, both of these are only conjectured to hold.
For sequentiality ("$Eval$ is hard to compute"), we must assume that in the chosen group, there is no way to compute $g^{2^{t}}$ significantly faster than by performing the $t$ squarings one after the other.
More formally, one can define a time-lock game whose hardness is sufficient for sequentiality to hold for an implementation in a certain group. For a more formal discussion of this, see section 5, Assumption 2 of Wesolowski's original paper.
For uniqueness ("Fooling the verifier is computationally hard"), let's assume that the attacker even gets the value $y=g^{2^{t}}$ for free; we then need that they still cannot produce a pair $(y',π)$ with $y'\neq y$ that the verifier accepts.
Wesolowski gives a nice informal intuition in his paper for why fooling the verifier comes down to breaking the root-finding game. To do this, he analyzes the interactive version of the protocol, where the prime $ℓ$ is not derived from a hash but chosen at random by the verifier only after the prover has announced their output.
We want that the verifier accepts, meaning we want $π^{ℓ}g^{r}=y'$ for the claimed output $y'$, where $r=2^{t} \bmod ℓ$.
For a malicious prover to fool the verifier with a fake proof $π$ for some $y'\neq g^{2^{t}}$, they would thus have to compute $π=(y'g^{-r})^{1/ℓ}$, i.e., extract an $ℓ$-th root of a group element for a random prime $ℓ$ that they learn only after committing to $y'$. Doing this efficiently is precisely winning the root-finding game.
Of course, the above is not a formal proof that fooling the verifier requires breaking the root-finding game. The real proof first formally argues this fact for the interactive version of the protocol in the random oracle model, after which one can apply a general argument that holds for any interactive protocol made non-interactive by the Fiat-Shamir heuristic. If you're interested in the details of the formal proof, see section 6 of Wesolowski's original paper.
The takeaway from this section is that employing this Wesolowski construction only makes sense if the time-lock game and root-finding game are conjectured to be hard in the group that we use as a basis for our scheme.
The Class Group of an Imaginary Quadratic Field
We've now seen a protocol that relies on a group of unknown order in which computing repeated squares must be done in a sequential fashion, and in which finding arbitrary roots is hard. Ideally, we'd also like this group to have fairly efficient group operations. So are there any good candidates for such a group?
Wesolowski, in his original paper, mentions two such types of groups: RSA groups (i.e., invertible integers modulo a product of two large primes under multiplication) and class groups of an imaginary quadratic field.
The first type is suited for our purposes only in a very limited way, because we require that nobody can reconstruct the order of the group, even if many parties collaborate. This is hard to achieve, because the two primes defining the RSA group have to come from somewhere. Wesolowski mentions that there are approaches to generating large integers that are divisible by two unknown primes with large probability, but those have significant drawbacks as well. There are also ways to generate two unknown primes via secure multiparty computation, but that is very complicated to implement, let alone to implement on-chain in a verifiable manner. Hence, we will focus on the second type of group, the class group.
This is where some abstract algebra comes into play. Feel free to skip some of the more technical parts of the following paragraphs, they are not essential for the randomness protocol itself.
The class group we will use is constructed as follows. Take a negative prime $d$ with $d\equiv 1\pmod 4$ and consider the imaginary quadratic field $\mathbb{Q}(\sqrt{d})$. The class group is then the group of fractional ideals of the field's ring of integers, modulo the subgroup of principal ideals.
Now of course, that's a very convoluted definition, and it's certainly not something we'd want to represent computationally as it stands. Fortunately, this group is isomorphic to a group consisting of equivalence classes of binary quadratic forms, along with a well-known and efficiently computable group operation. All quadratic forms across all equivalence classes of that group have the same discriminant, namely the negative prime $d$ from above, which is therefore called the discriminant of the group.
In fact, the group is identified by its negative prime discriminant $d$, and a group element can be represented by a binary quadratic form $ax^{2}+bxy+cy^{2}$, i.e., by an integer triple $(a,b,c)$, satisfying $b^{2}-4ac=d$.
The group operation on two group representatives is now defined as follows. Let two representatives $f_{1}=(a,b,c)$ and $f_{2}=(α,β,γ)$ with the same discriminant be given.
1. Set $g=\frac{1}{2}(b+β)$, $h=-\frac{1}{2}(b-β)$, $w=\gcd(a,α,g)$.
2. Set $j=w$, $s=\frac{a}{w}$, $t=\frac{α}{w}$, $u=\frac{g}{w}$.
3. Solve $(tu)k\equiv hu+sc\pmod{st}$ for $k$, obtaining solutions $k=μ+νZ$. This is a standard problem and can be done efficiently (see e.g. section 7.4 of the Chia paper on class groups).
4. Solve $(tν)n\equiv h-tμ\pmod{s}$, obtaining solutions $n=λ+σZ$.
5. Set $k=μ+νλ$, $l=\frac{kt-h}{s}$, $m=\frac{tuk-hu-cs}{st}$.
6. Return $f_{3}=(st,\,ju-(kt+ls),\,kl-jm)$.
It is not hard to prove that if both inputs have the same discriminant, this operation preserves it. The proofs that this forms a group, or indeed that it is isomorphic to the fractional ideal definition mentioned above, are rather more involved. See e.g. this blog post for a concise introduction and argument.
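The six steps above can be transcribed almost verbatim into code. The following sketch is our own transcription (not taken from any reference implementation); it pairs the formulas with a standard extended-Euclid solver for the two linear congruences, and variable names mirror the symbols in the steps.

```python
from math import gcd

def egcd(x, y):
    """Extended Euclid: returns (g, u, v) with u*x + v*y = g = gcd(x, y)."""
    if y == 0:
        return x, 1, 0
    g, u, v = egcd(y, x % y)
    return g, v, u - (x // y) * v

def solve_congruence(a, b, m):
    """Solve a*k = b (mod m), m > 0; return (mu, nu) with k = mu + nu*Z."""
    g, inv, _ = egcd(a % m, m)
    assert b % g == 0, "congruence unsolvable"
    return (inv * (b // g)) % (m // g), m // g

def compose(f1, f2):
    """Compose two positive definite forms of equal discriminant."""
    a, b, c = f1
    alpha, beta, gamma = f2
    g = (b + beta) // 2                   # step 1 (b + beta is always even)
    h = -(b - beta) // 2
    w = gcd(gcd(a, alpha), g)
    j, s, t, u = w, a // w, alpha // w, g // w          # step 2
    mu, nu = solve_congruence(t * u, h * u + s * c, s * t)   # step 3
    lam, _sigma = solve_congruence(t * nu, h - t * mu, s)    # step 4
    k = mu + nu * lam                                        # step 5
    l = (k * t - h) // s
    m = (t * u * k - h * u - c * s) // (s * t)
    return (s * t, j * u - (k * t + l * s), k * l - j * m)   # step 6
```

For example, in the class group of discriminant $-23$, composing the inverse classes $(2,1,3)$ and $(2,-1,3)$ yields $(1,-1,6)$, which is equivalent to the identity form $(1,1,6)$.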
In practice, there are several crucial methods to make group operations even faster. One method involves always operating on a specific group representative of each equivalence class, the so-called reduced form, whose coefficients are as small as possible; this keeps the numbers involved in each operation small.
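One such canonical representative is the reduced form of each class. A minimal sketch of the standard reduction procedure (again our own transcription): repeatedly shift $b$ into the range $(-a, a]$ and swap the outer coefficients until $a\le c$, with $b\ge 0$ if $a=c$. Both steps preserve the discriminant.

```python
def normalize(a, b, c):
    # Shift b into the range (-a, a]; preserves b^2 - 4ac.
    r = (a - b) // (2 * a)
    return a, b + 2 * r * a, a * r * r + b * r + c

def reduce_form(a, b, c):
    """Reduce a positive definite form (a > 0, discriminant < 0)."""
    a, b, c = normalize(a, b, c)
    # A normalized form is reduced once a <= c, with b >= 0 if a == c.
    while a > c or (a == c and b < 0):
        s = (c + b) // (2 * c)
        a, b, c = c, -b + 2 * s * c, c * s * s - b * s + a
        a, b, c = normalize(a, b, c)
    return a, b, c
```

For discriminant $-23$, for instance, $(6,1,1)$ reduces to the identity form $(1,1,6)$.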
This concludes the description of the class group. Crucially, for a randomly chosen negative prime discriminant $d$ of sufficient size, no efficient algorithm is known for computing the order of the corresponding class group. We therefore obtain a group of unknown order without any trusted setup: the discriminant can be generated publicly from a random seed.
All of this makes this type of class group the best currently known contender for a group to use in our VDF scheme.
Other VDF constructions
We've now seen the Wesolowski construction for VDFs, as well as a candidate for a group of unknown order with which it can be used. It should be mentioned that this is not the only viable construction for VDFs: Pietrzak describes a similar VDF based on repeated squaring in a group of unknown order, differing from the Wesolowski construction in how verification is performed. Two more constructions are described by De Feo, Masson, Petit and Sanso, using supersingular isogenies and pairings. However, these latter constructions require a trusted setup, much like the instantiation of the Wesolowski protocol using an RSA group. Again, this is something we would ideally like to avoid.
It is also an open problem to find a simple VDF that is secure against attackers with access to a quantum computer. Both the Wesolowski and Pietrzak schemes can be broken by computing the order of the underlying group via Shor's algorithm.
Considerations for Practical Implementation: Towards an On-Chain Contract
We now have a promising candidate for a VDF. Great! Irrespective of any theoretical concerns over hardness assumptions, what's stopping us from just running with this and implementing our scheme as an on-chain contract?
Not much. In fact, the Chia blockchain already uses VDFs (notably as part of their consensus algorithm and not as an on-chain contract). They use a variant of the Wesolowski protocol that splits the generation of the proof into several smaller proofs, which can be computed in parallel.
Chia's protocol is very different from the protocol above with which we motivated VDFs. It does not directly take participant input, and consequently does not need one VDF execution per participant. Instead, they use three interacting VDF chains. When simplified, their method boils down to the following. They generate a continuous stream of challenges, each of which is meant to be fully random. To generate a new challenge, the VDF chains are run on the previous challenge for a fixed number of iterations, and their outputs (together with data from the blocks infused in the meantime) are hashed to obtain the next challenge.
Ideally, we want to implement either Chia's protocol or ours on-chain. What obstacles are there to making that happen?
Generating Random Numbers
One of the main concerns regarding implementing the Wesolowski scheme is that it relies on large (pseudo)random^{[1]} primes: We need a random prime discriminant that defines our group, and every run of the verifier additionally needs to derive a random prime $ℓ$.
Generating a prime takes a non-negligible amount of computation time. The best method currently known (and indeed what almost all cryptographic applications essentially do to obtain random primes) is to generate and test the primality of many candidates.
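The generate-and-test loop can be sketched as follows. This is a simplified stand-in, not any particular library's code: cheap trial division by small primes first, then Miller-Rabin rounds; the round count and the small-prime list are illustrative choices.

```python
import random

SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]

def miller_rabin(n: int, rounds: int = 40) -> bool:
    """Probabilistic primality test; False means definitely composite."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False   # a is a witness of compositeness
    return True

def random_prime(bits: int) -> int:
    """Sample odd candidates of the given bit length until one passes."""
    while True:
        cand = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        # Cheap pre-test: divisibility by small primes.
        if any(cand % p == 0 for p in SMALL_PRIMES):
            continue
        if miller_rabin(cand):
            return cand
```

The cost is dominated by the modular exponentiations inside Miller-Rabin, and by the fact that only roughly one in $\ln(2^{bits})$ candidates is prime, which is why the numbers below are so large.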
Let's take the num_bigint_dig crate, which implements pure-Rust bigints along with cryptographic functions, including random prime generation. For each candidate, it does a lot of cheap pre-tests, like testing divisibility by small primes, followed by 100 rounds of Miller-Rabin, and finally the Lucas primality test. If a candidate fails at any point in these tests, the search immediately continues with the next candidate. If it passes all tests, the search is terminated and the candidate is output as the result.
In practice, using the --release flag, this needs something like 100-200ms single-core for generating a 1024-bit prime on an average laptop. That's a lot!
The openssl prime generation (using openssl prime -generate -bits 1024) does a bit better, completing in about 30-60ms, but that's still not great for our purposes. It also does heavy low-level optimization, which isn't accounted for in most gas cost / computation limit models, which is important if we want to implement this on-chain.
We did some very quick-and-dirty tests of how well prime generation using the num_bigint_dig crate works on Solana, which currently has a transaction-wide limit of 1.4 million compute units (which are configured to approximate the computation time needed to execute the on-chain contract). The problem was that remainder operations, which are essential for any primality test, were extremely inefficient: up to 11,000 compute units simply for computing the remainder of a 1024-bit number when dividing by 5. This makes it necessary to split a full run of an on-chain primality test over several hundred or even thousands of transactions. And keep in mind that you typically need to check hundreds of candidates, though most of those checks will usually abort much sooner.
Even though this can almost certainly be optimized significantly, it would be surprising to see the entire prime generation process complete in only a couple of transactions, or even a few hundred, in the near future.
Choice of Discriminant
Apart from prime generation, verification also involves calculating $π^{ℓ}g^{r}$, i.e., exponentiations in the class group, and the cost of each group operation grows with the size of the discriminant.
So in practice, discriminants should be chosen small enough to make verification relatively quick and cheap, but large enough to make breaking the discriminant infeasible in practice. So what's the smallest discriminant we can get away with? Research on this is very sparse, but there is one data point of interest.
One way an adversary could beat the time-lock game and root-finding game is by finding a significantly faster way to compute powers of the group element $g$. Most directly, they could try to compute the order of the group: knowing it, they can reduce the exponent $2^{t}$ modulo that order and bypass the sequential squaring entirely.
So how long does it take to compute the order of a group element, i.e., to "break" its prime discriminant? Fortunately, we can again look to Chia: they ran a discriminant-breaking contest as part of the first iteration of their VDF challenge. The winning entrant was able to break a 240-bit discriminant in slightly less than 10 days on unspecified consumer hardware. However, note that all they did was run PARI/GP, which includes a fast implementation of the Buchmann-McCurley algorithm that computes an algebraic representation of the class group, from which its order is easy to read off. It might be possible to improve on this significantly.
That being said, Bram Cohen pointed us to a paper by Belabas, Kleinjung, Sanso and Wesolowski that showed how to compute the order of a specific element in any group with a Mersenne prime as a discriminant, with some pointers for how it might be possible to extend this technique to other types of primes. This implies that such discriminants should never be used for a run of Wesolowski's VDF.
He also confirmed that Chia currently uses 1024bit discriminants. This strikes a balance between ease of prime generation and safety of the protocol. Note that in their protocol, the discriminant is rotated every 10 minutes.
Faster Squaring (Code vs. Hardware)
Research into accelerating iterative squaring, i.e., accelerating the computation of $g^{2^{t}}$ in class groups, both in software and in hardware, is still in its early days.
Let's start with what's known for general-purpose hardware. Fortunately, Chia ran a VDF competition in 2018 and 2019, asking for fast implementations of squaring in class groups. They used 2048-bit discriminants. This resulted in a significant improvement in the computation time needed for squaring, from ~170
There are several optimizations that were employed in the winning entries, such as an implementation of the NUDUPL algorithm (described in this paper), which reduces the size of operands during squaring operations, and a very neat trick discovered by Akashnil Dutta to speed up the reduce operation using u64 approximations of the transformation matrix entries, avoiding costly BigInt computations. Doubtless further optimizations are still possible.
A paper from 2020 even evaluates the speedup of using ASICs for squaring forms with 2048bit discriminants. They achieve ~6.3
Chia itself only uses 1024-bit discriminants for its VDF implementation. The fastest implementation they have observed on their network runs at 360k iterations per second, implying about ~2.7μs per squaring.
In any case, the length of phase two should always be chosen such that there is an ample buffer for implementations faster than any implementation known publicly, especially since research into ASIC implementations is still so sparse. The faster the implementations get, the larger the delay parameter $t$ must be chosen to guarantee a given real-time delay.
Incentivization for Participants
One matter that should be mentioned is what the incentives should be for participants to submit inputs. If it becomes necessary to employ special-purpose hardware (which is definitely plausible, seeing as we've already seen a 2x speedup of ASICs over software in the paper mentioned above, a gap that will presumably widen as more research in this direction is initiated), incentives would need to offset the costs of purchasing and running that hardware.
In the end, VDF-based solutions might be restricted to applications with a large amount of money at stake with each generated random string. Applications with smaller amounts at stake might not warrant the use of VDFs, relying instead on the normal commit-and-reveal scheme, VRF-based schemes or staking-based schemes, and simply trusting that all participants reveal their inputs.
Conclusion
In this post, we've seen what we believe is the only known solution for a general-purpose, trustless scheme for generating randomness on-chain. However, we've also seen that it relies on cryptographic assumptions that have received comparatively little attention, specifically the hardness of the root-finding game in the class groups we explored.
We also ran some quick experiments on what implementing a VDF scheme in an on-chain contract would take, and found that generating random primes on-chain is a notable obstacle. Nonetheless, the Chia blockchain already uses this construction in its consensus algorithm.
Generally available on-chain randomness remains a major problem in the blockchain space. More research into the theory of VDFs, fast implementations of form squaring and verification algorithms, as well as possible alternative protocols, is desperately needed.
This concludes our two-part deep-dive into on-chain randomness. Make sure to check our blog regularly, as we will be releasing more deep-dive posts about Solana bugs, topics related to smart contract audits, and interesting ideas in the blockchain space.
For brevity and readability, I'll use random and pseudorandom interchangeably in this post. ↩︎