Before reviewing, please be aware of the following system limitations that we may want to consider addressing:
There's no way to increase the number of tokens you've Stacked once you register your PoX address. This means that if the total liquid STX increases while you are Stacking, and your Stacked total falls beneath the stacking minimum for the reward cycle, your tokens remain locked but you will not receive a PoX reward. This could be changed without too much effort, but I don't know yet if it's necessary -- a Stacker (or delegate) can determine a priori how many uSTX will need to be locked up by looking at the coinbase release schedule and the lockup schedule.
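To make the a-priori calculation concrete, here is a minimal Python sketch. It is illustrative only, not the node's actual code: the function names are made up, the 1/20,000th and 1/5,000th minimums are taken from the thresholds discussed below, and it assumes the total locked uSTX is held fixed across the projection.

```python
# Illustrative sketch: estimate how many uSTX a Stacker must lock so the
# lock stays above the stacking minimum for every projected block.
# `projected_liquid_ustx` would be derived from the coinbase release
# schedule and the lockup (unlock) schedule.

def stacking_minimum(liquid_ustx: int, locked_ustx: int) -> int:
    """Minimum lock per PoX address, given total liquid and locked uSTX."""
    if locked_ustx * 4 < liquid_ustx:        # less than 25% of liquid is locked
        return liquid_ustx // 20_000
    return liquid_ustx // 5_000              # 25% or more is locked

def required_lock(projected_liquid_ustx: list[int], locked_ustx: int) -> int:
    """Smallest lock that meets the minimum at every projected block."""
    return max(stacking_minimum(liq, locked_ustx)
               for liq in projected_liquid_ustx)
```

For example, if the liquid supply is projected to grow from 20M to 40M uSTX over the cycles of interest, a Stacker would size their lock against the 40M figure rather than the 20M one.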
Related to the above, this PR contains a stub `lockup` contract that contains some of the machinery for processing STX unlocks from token sales. This code will eventually be able to help Stackers determine the maximum number of liquid STX per future block, so they can Stack accordingly. Perhaps it would be preferable to build this out, instead of making it possible to adjust the amount of locked STX once the user has Stacked?
`reward-cycle-pox-address-list` will include PoX addresses that had at least the minimum uSTX locked to them at the time they were locked, but NOT necessarily the current minimum. This discrepancy manifests in two ways:
- If Alice locks up STX before 25% of the total liquid STX are locked, and later on Bob locks up STX and pushes the total locked STX to over 25% of the total liquid STX, then Alice's PoX address will no longer have enough STX locked on it (and per the above, she can't increase it). However, her address will still show up in this map.
- As more blocks are mined, and as more tokens unlock, the total liquid STX steadily increases. Because PoX addresses are registered to multiple reward cycles in this map when the STX are locked, it is possible that the total liquid STX will increase to the point that a PoX address no longer has the minimum required STX locked on it.
As a result, the Stacks node will need to determine the current minimum STX lock requirement on each reward cycle, and filter out addresses pulled from this map that do not meet the minimum lockup.
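The filtering step described above can be sketched as follows. This is a hypothetical illustration of the idea, not the node's implementation; the `RewardEntry` type and function names are invented for the example.

```python
# Hypothetical sketch of the per-reward-cycle filtering step: drop PoX
# addresses whose locked uSTX has fallen below the *current* minimum,
# even though they met the minimum at the time they were registered.

from dataclasses import dataclass

@dataclass
class RewardEntry:
    pox_address: str
    locked_ustx: int

def eligible_reward_set(entries: list[RewardEntry],
                        current_minimum_ustx: int) -> list[RewardEntry]:
    """Filter the reward-cycle-pox-address-list by the current minimum."""
    return [e for e in entries if e.locked_ustx >= current_minimum_ustx]
```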
- The number of addresses the node will iterate through per reward cycle is bound by the protocol, although the current design does not exploit this bound. Specifically, a reward set can contain at most 5,000 * 0.75 + 20,000 * 0.25 = 8,750 PoX addresses: only 20,000 * 0.25 = 5,000 addresses can be registered at the less-than-25%-lockup minimum of 1/20,000th of the total liquid STX before the lockup minimum is bumped to the greater-than-or-equal-to-25%-lockup minimum, at which point there is room for at most 5,000 * 0.75 = 3,750 addresses at the 1/5,000th-of-liquid-STX minimum.

  Bound or not, the PoX lockup code needs to keep the number of registered addresses small enough that the reward set can be loaded efficiently (it currently does so by making sure a PoX address has enough locked STX on it at the time of the call before storing it). Without this enforcement, someone could DoS the node by locking 1 uSTX per PoX address until loading all of the addresses in this map takes a prohibitively long time.

  Right now, the code does not use this bound to cap the size of the reward set explicitly. Instead, the node treats each PoX address and its lock-up as a single key/value pair, so obtaining each one costs 1 MARF read + 1 SQLite read, or 8,750 MARF'ed key lookups to obtain the full reward set. The reason for this is that it keeps writes cheap when registering a PoX address, which is important from a UX perspective because the user pays for them in transaction fees.

  An alternative design would be for the node to pre-allocate a massive 8,750-entry list per reward cycle, and have PoX address registration insert an entry into this list. This might be preferable to the current design because the node would be able to read the entire PoX reward set in a single (big) MARF'ed read. But I don't know if this is acceptable given the write expense it imposes on users.
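As a quick sanity check, here is the arithmetic behind the 8,750-address bound, written out in Python:

```python
# Below 25% lockup the minimum is 1/20,000th of liquid STX, so at most
# 20,000 * 0.25 = 5,000 addresses can register before 25% is locked.
# The remaining 75% of the lockup space, at the 1/5,000th minimum,
# admits at most 5,000 * 0.75 = 3,750 more addresses.

below_25_pct = int(20_000 * 0.25)   # addresses at the 1/20,000 minimum
above_25_pct = int(5_000 * 0.75)    # addresses at the 1/5,000 minimum
max_reward_set = below_25_pct + above_25_pct
print(max_reward_set)               # 8750
```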
Would love some feedback on which design would be better overall for users.