A 3D lattice of individual cross-confirming transactions that can be mined offline, as a replacement for a blockchain (1D) or blockDAG (2D). It scales toward unlimited free and private transactions per second, is immune to traditional 51% attacks, lets miners earn steady rewards without mining pools, gives transactions initial confirmation within a few seconds, and pays rewards proportionally. It is CPU-dominated: ASICs will never dominate, and it resists GPU and quantum-computer takeover while still allowing them to mine. Consensus is reached by an army of ants: an atomic, democratic vote of factoring power.
Published: 9/28/2022
Update: 11/28/2022
Author: GiverofMemory
Maintainer: GiverofMemory
License: Site License
Categories: See Footer
See Also: none
Discussion: Actionlattice-Talk
Download: URL,PDF from URL

See also Blocklattice, CollectBit, Digital Collectible Network, BlockDAG, Solvum, Quantum, Dustyplasma, transep

Bitcoin was the rough draft. Actionlattice is the fulfilment of the technology. Having a non-reusable address for each and every satoshi would not have been feasible in 2008, but now with software and hardware wallets we know we can have nearly infinite addresses at our fingertips - making actionlattice a reality.

Actionlattice was designed from scratch to be the most secure protocol in the cryptocurrency space, including fixing the security flaws of bitcoin. It is designed to last hundreds, if not thousands of years.

An actionlattice is a new open source method to create, order, and store transactions, prove work, confirm transaction validity, prevent double spending, maintain privacy, encode tokens and NFT's (with no smart contract code needed), and issue rewards (cips).

It replaces the blockchain. You can think of it like a 3D blockchain, whereas a blockchain is 1D. We have come a long way from clay tablets. The way we achieve 3D is by getting rid of blocks and having each transaction stand on its own and be mined separately.

It can be used for any purpose, free and open source. One interesting use-case is metaverse or other games using actionlattice to craft items - or even using actionlattice as a decentralized game server. It can also be used as an accounting ledger for transferring real or fictional assets like NFT's and tokens. It can even be used for voting, much more democratically than blockchain. AI devices will be able to mine this, as will your gaming PC, laptop, and even smartphone or Raspberry Pi. It can also be used for social media, since the only bottleneck to transaction speed is the sync speed of verifying nodes. In theory the network can seamlessly scale to process unlimited transactions per second.

The smallest (and only) unit of account is the cip (dust - like a satoshi). The name is a blend of "bit" and "semiprime", maybe with "cipher" or "sip". Depending on your chosen number to factor, you can produce cips in different denominations.

Every transaction is free to propose. A miner (or you, if you mine your own tx) gets paid one reward cip if they "activate" the proposed transaction using proof of semiprime (aka proof of sieve, or Posi) by factoring a large number and pointing (connecting) it to three other transactions that they validate and vouch for - preferably three transactions near the "surface" but whose cip genesis is deep below it. Next, other newly activated transactions will point (connect) to yours, which "confirms" yours. Factors that help decide whether they confirm yours include: whether it fits the majority of nodes' "max transaction size", whether it is double spent or double mined, whether the transaction is valid, and whether it is near the surface with a cip genesis deep below it. The surface is just the layer of unconfirmed transactions on the edge of the lattice.
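As a sketch, an activated transaction might carry the fields described above. The Python shape below is purely illustrative - the field names are assumptions, not a spec - but it shows the three bonds and the factoring proof that together constitute activation:

```python
from dataclasses import dataclass

# Illustrative shape only - field names are assumptions, not a spec.
@dataclass
class Transaction:
    sender_sigs: tuple   # signatures over each cip being sent
    recipients: tuple    # one fresh address per cip received
    bonds: tuple         # hashes of the 3 transactions vouched for
    proof: int = 0       # factored semiprime (proof of sieve); 0 = proposed only

    def is_activated(self) -> bool:
        """A transaction is activated once it carries proof and three bonds."""
        return self.proof != 0 and len(self.bonds) == 3

tx = Transaction((), ("addr1",), ("h1", "h2", "h3"), proof=15)
print(tx.is_activated())  # True
```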

The mining reward for activating a transaction matures as your transaction is confirmed by others linking their transactions to yours. Some consensus is needed on how "well connected" your transaction must be (how deep from the surface) for your mining reward to be valid (see also hashsnaps). You can try to spend reward cips immediately, and it is up to each miner whether they build on a transaction whose genesis isn't deep from the surface; it would likely take a while for such a spending transaction to be confirmed before miners feel it is valid.

This is for a 2 bond version; a 3 bond version (called xlattice) would look a bit more complex.

A transaction is mined by hashing the transaction into a number to factor and then factoring it. More proof can be added to the transaction later: any transaction can be re-mined as long as it has a higher bitlevel proof than the original, probably with a new recognized cip every 5 decimal digits (17-18 bits) of proof level. The original transaction and cip are kept and not deleted, which works because cips cannot be double spent by definition: addresses are throwaway, cannot hold more than 1 cip, and once they send it they can never receive another cip again. The more proof the miner provides, the more valuable the "cip" they receive as a reward (and potentially the more data they can store). Some merchants may accept only cip140's, while others may accept cip120's as well but assign them a lesser value. There can be 24 different bitlevels and 4 different categories: alpha (red), beta (blue), gamma (green), and delta (purple).

All a crypto network would need is an actionlattice. A UTXO set may also be used, and will be helpful for pruning. The UTXO set would simply carry a list of addresses with a balance of 1 cip; used addresses would only need to be kept by the nodes for 3 days (to make sure they are not reused in the time they are valid) but can never be reused due to our address generation requirement.

Each transaction is its very own "block" and stands alone. It can be broadcast with or without proof of semiprime (Posi). If it is broadcast to the network with no proof, then a miner would need to activate it and connect it wherever they want in the lattice (they will probably selfishly connect it to their own transactions to add confirmations). You can also designate cips to go to the miner of the transaction (the activator) to incentivize quick confirmation. You can even designate cips for miners who connect new transactions to your transaction. Be aware, however, that if your transaction is found to be invalid, then all the cips you designated for miners, the cip created from mining your transaction, and all transactions that connected to yours will be null and void. We could even make it so cips you designated to miners become invalid and can't be respent (typically, if a transaction is found invalid or connected to invalid txn's, the cips revert back to the owner and can be respent later).

Each transaction can send cips from one public key (address) to another. Each public key can hold only 1 cip. Think of each cip as a "bit". Each bit is immutable and cannot be split or added to. Each bit (cip) has one public and private key.

Mining any transaction will require a certain amount of proof of semiprime, which may depend on the size of the transaction or other factors. There could be multiple lattices, perhaps alpha and beta. The alpha fork could have few-second confirmations for point of sale (POS) applications, and then within 20 minutes or so the transaction would be confirmed on the beta network for more confidence. Perhaps there are even more networks at even higher proof levels the transactions could be added to. There will probably be a new cross-compatible genesis fork every time another level of difficulty is reached by at least 1 miner.

Each transaction contains public addresses and signatures for each of the cips being sent, the public addresses each cip is to be received to, a cipbase address where a new cip is created and sent, and a message field that can be used for iterating nonces and including messages like love letters, or for encoding tokens or NFT's like Ravencoin or Bitcoin's colored coins.

Number of bonds

In the examples here we use 2 (or preferably 3) bonds per transaction, but this can be any number set by the network. The more bonds you require, the faster consensus/convergence is reached, but also the more time it takes each miner to activate a transaction. The number of levels down that the miner verifies also adds to the time. Somewhere between 2-10 bonds would probably be ideal, and over time this number can even be increased as computers get faster at verifying transactions (due to Moore's law). I think I would pick 3 bonds.

For some context: 2 bonds checking 8 layers/levels down would be 256 transactions to verify, and 4 bonds checking 4 levels down would also be 256 transactions. Three bonds checking 5 levels would be 243 transactions. Three bonds checking 7 levels would be 2187 transactions to check, which seems reasonable. A reorganization (reorg) affecting over 2000 transactions would be rare.
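The fan-out arithmetic above can be checked directly. Note this counts the full fan-out, ignoring branches that converge on shared ancestors, so it is an upper bound on distinct transactions:

```python
def txs_to_verify(bonds: int, levels: int) -> int:
    # Full fan-out: each transaction points to `bonds` others,
    # checked `levels` deep.
    return bonds ** levels

print(txs_to_verify(2, 8))  # 256
print(txs_to_verify(4, 4))  # 256
print(txs_to_verify(3, 5))  # 243
print(txs_to_verify(3, 7))  # 2187
```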


Consensus looks slightly different than Bitcoin's, but it is even more rigorous. Bitcoin, for one, has to prevent double spending by using arbitrary timestamps. Actionlattice is more rigorous at preventing double spending (and preserves privacy at the same time) by dictating that an address can only hold 1 bit and can never be reused. Bitcoin miners/nodes still have to verify that the longest chain is valid, and if not, accept a shorter but valid chain. In the same way, nodes/miners in actionlattice need to verify the transactions and then accept the "heaviest" version (number of transactions x their bitlength weight = greatest).

That said, someone could attempt a double spend on two different "sides" of the lattice at once, and the "each transaction confirms 3 others and 7 levels down" rule wouldn't necessarily catch it immediately - but it would eventually. Actionlattice basically asks for a continuous rolling vote. Let's look at two duplicate transactions sending to different addresses (a double spend; they are trying to duplicate their cips). The other miners vote on which is correct by vouching for only one of them - obviously, since they can only vote for 3 transactions and it would be invalid to vouch for a double spend. Whichever of the two double spends gets the most "confirmations" is the winner. The one with fewer confirmations gets deleted by the nodes once they feel it has no chance of pulling ahead. Every later transaction that vouched for the "bad" one also gets removed (reversed), and its cipbase is nullified.

So this incentivizes every miner to validate all transactions to make sure they aren't vouching for a double spend or otherwise invalid tx, and to go several layers down from the tx's they are vouching for to check there is no double spend there either. It doesn't hurt the person who made the nullified-but-valid transaction that was linked to the "bad" one, as they can just rebroadcast it with no problem, but the original miner of the tx will have lost their reward. So I think the idea is sound. I think this works because only 1 of the double spends will win the most confirmations, and the vote will keep getting more and more polarized because miners don't want to attach to what seems to be the loser. It would be a "rolling consensus".

Hashsnaps (snaps/snapshots/epochs)

This is similar to transep, which is the 1D blockchain equivalent.

Every certain amount of time the actionlattice should be hashed. This is done so that nodes can ping other nodes and, by comparing just one number, see if their actionlattices are identical. If the hashsnaps don't match, the nodes can reconcile what is different. The hashsnaps with the most voting weight are the accepted hashsnaps. The reason there are 5 levels is that nodes can tell if they are correct with more granularity and can compensate and contest other hashsnaps faster. For example, if it were done only every hour and lots of nodes had different hourly hashsnaps, they would have a lot of transactions to compare. If there is a discrepancy at the 7-second hashsnaps, they can compare a much smaller number of transactions to see which lattice has a higher weight and accept that.

This might look like you get a confirmation every hashsnap. Yes and no: yes in the sense that you now know the majority of nodes have your transaction saved, but you can also watch the nodes in real time as connections - and thus confirmations - happen to your transaction(s) constantly. So a good node would have high certainty a transaction has gone through even before the 7-second hashsnaps circulate the network.

For every number of seconds shown below (epoch), a node would broadcast the hash of the last hashsnap hashed with all the current transaction hashes XOR'd together so that the order the transactions are in doesn't matter [1] [2] (or even just adding all the hash values up! [3]). Obviously if there were double spending (duplicates) the later one(s) each node would just delete on their own.
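The order-independent construction above can be sketched in a few lines. This is a minimal sketch assuming 256-bit transaction hashes and SHA-256 as the snapshot hash (the text does not fix a specific hash function):

```python
import hashlib

def hashsnap(prev_snap: bytes, tx_hashes: list) -> bytes:
    """XOR all transaction hashes together (so their order doesn't
    matter), then hash the result with the previous epoch's hashsnap."""
    acc = bytes(32)  # 256-bit zero accumulator
    for h in tx_hashes:
        acc = bytes(a ^ b for a, b in zip(acc, h))
    return hashlib.sha256(prev_snap + acc).digest()

txs = [hashlib.sha256(m).digest() for m in (b"tx1", b"tx2", b"tx3")]
# Same transactions in any order give the same snapshot:
print(hashsnap(bytes(32), txs) == hashsnap(bytes(32), txs[::-1]))  # True
```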

Each level of epoch uses the hash of the last epoch plus all the hashes of the transactions in the current epoch. So L1H's are only concerned with previous L1H's, whereas L4H's only use the hashsnap of the last L4H plus all the transactions since. The levels do not rely on each other. So a light node, for example, could compare its actionlattice with L4H's only, so it doesn't have to constantly perform lots of calculations.

Epochs should be published 1 behind. So let's say we are currently sitting in epoch L1E2; at the end of L1E2 your node should publish L1E1. This is so discrepancies can be fixed by each node and consensus isn't constantly lagging. Hashsnaps from the current epoch could be published, but more as a preliminary comparison; 1 epoch behind is binding, in the sense that the highest-weighted actionlattice is the "winner" and shall be accepted by all the nodes.

L1H's and L2H's are not multiples of each other, to prevent nodes from having to calculate both at the same time, which would be difficult for nodes.

Level 1 hashsnaps (L1H)

These are done every 7 seconds

Every 7 seconds would be a new L1H epoch. L1HE1, L1HE2, etc every 7 seconds

Level 2 hashsnaps (L2H)

These are done every minute (60 seconds)

Every 60 seconds would be a new L2H epoch. L2HE1, L2HE2, etc every 60 seconds

Level 3 hashsnaps (L3H)

These are done every 10 mins (600 seconds)

Level 4 hashsnaps (L4H)

These are done every hour (3600 seconds)

Level 5 hashsnaps (L5H)

These are done every day (86400 seconds)
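The five levels above can be tabulated, with a small helper for working out which epoch a given network time falls in (the 1-based numbering, e.g. L1HE1, L1HE2, follows the text):

```python
# Epoch lengths in seconds for the five hashsnap levels listed above.
# 7 and 60 are deliberately not multiples of each other (see text).
EPOCH_SECONDS = {"L1H": 7, "L2H": 60, "L3H": 600, "L4H": 3600, "L5H": 86400}

def epoch_index(level: str, t_seconds: int) -> int:
    """1-based epoch number that network time t falls into."""
    return t_seconds // EPOCH_SECONDS[level] + 1

print(epoch_index("L1H", 20))  # 3 (the third 7-second epoch)
```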

Three day rule

For a couple of reasons I am going to set it so that at least 1 of the 3 transactions you validate must be within 3 days (3 daily hashsnaps away). The good part is that you still get 2 connections to validate transactions of any age you want. The bad part is that if the network is inactive for 3 days it is dead. It's OK though: if that were to happen, a new genesis could be made and then connected back to the main lattice.

The main reason why I am implementing a 3 day rule is so that we can maintain a free-floating difficulty adjustment. What we can do now is say that when someone factors a number that is 5 decimal digits above the current highest (it must be a multiple of 5), the difficulty for every level rises by 5 decimal digits. For example, let's say we start the network accepting 55 to 170 digit numbers to factor. When someone factors a 175 digit number, the whole network moves up, so the acceptable range would become 60-175 digits.

The reason this requires a 3 day rule is to avoid an attack where someone takes months to factor a number and keeps moving the difficulty of the whole network up. Since we have a 3 day rule, anyone only gets 3 days to factor a number. This is also nice because we now know every number is factored within 3 days, so we can calibrate estimates of how long it would take someone to break (factor) a given RSA key by extrapolating from our actionlattice data.
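The windowed adjustment described above can be sketched as follows, using the 55-170 digit starting range from the text:

```python
# Free-floating difficulty: when a number exactly 5 decimal digits
# above the current ceiling is factored, the whole acceptable
# window shifts up by 5 digits. Window values are from the text.
def shift_window(low: int, high: int, factored_digits: int):
    if factored_digits == high + 5:  # must be the next multiple-of-5 step
        return low + 5, high + 5
    return low, high

print(shift_window(55, 170, 175))  # (60, 175)
print(shift_window(55, 170, 160))  # (55, 170) - within range, no shift
```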

Address generation

We will use base64 address encoding since powers of 2 are best [4]. Base58 was used in bitcoin to make addresses easily human-recordable, but actionlattice addresses are not intended to be human-usable: each piece of dust has its own address, so addresses will need a software wallet to manage them. That said, it is not very difficult for a human to read/record base64.

We will use RSA (renamed radial simulacrum accumulator) keys since they are widely supported, use basic math - factoring instead of elliptic curves (which matters if society collapses) - and are more quantum safe than ECDSA [5]. We will also have a good gauge of how safe our addresses are, since our miners will be showing us what they can do with large-number factorization, providing an early warning sign. The process of mining will also generate large primes that can be used to make RSA addresses.

We only want addresses to be used once and never reused. The most obvious way to enforce this is to keep a database of all previously used addresses and their balances. But that means either keeping a huge database of all the empty addresses forever, or searching the entire actionlattice to see if an address is there, which would take far too much time.

We considered merkle tree proofs, schnorr signatures, 3 factor RSA, and moving address requirements, and moving address requirements was the only viable solution to this problem. Newly proposed addresses (recipients of cips in transactions) must contain a specific suffix to be valid, and these acceptable suffixes are only valid for 72 hours. As long as an address is the recipient of a transaction within that 72 hour window, it will be able to send forever, but can never receive again. Nodes will either keep a list of all new addresses generated in the last 72 hours and compare each newly proposed address to this list to ensure there are no matches, or simply search the last 72 hours of the lattice to ensure no newly proposed addresses match.
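A hypothetical sketch of the first node strategy (a rolling 72-hour list of newly seen addresses). The class name, 4-character suffix check, and timestamps are illustrative assumptions:

```python
from collections import deque

WINDOW = 72 * 3600  # 72 hours, in seconds

class RecentAddresses:
    """Node-side check: a proposed recipient must carry a currently
    valid suffix and must not appear in the rolling 72-hour window."""
    def __init__(self):
        self.seen = deque()  # (timestamp, address) pairs, oldest first

    def accept(self, addr: str, valid_suffixes: set, now: float) -> bool:
        while self.seen and now - self.seen[0][0] > WINDOW:
            self.seen.popleft()          # drop entries older than 72 h
        if addr[-4:] not in valid_suffixes:
            return False                 # suffix no longer (or never) valid
        if any(a == addr for _, a in self.seen):
            return False                 # address reuse within the window
        self.seen.append((now, addr))
        return True
```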

Address from public key

So we are using RSA with specified parameters for public keys.

Just like bitcoin [6], the public will not know the public key of an address until after the cip is sent. So just as you need to give your address(es) to the sender off-chain in blockchain, you will have to do that in actionlattice too. If you want to send an encrypted message to someone, you will need to either send it to a spent address (so you can see the revealed public key) or get their public key off-lattice.

Generating an address in bitcoin [7]

do not allow split key address generation [8]

Our proposed method

  1. perform 2x sha256 on public key
  2. perform 2x blake on previous hash
  3. perform 2x skein on previous hash
  4. perform 2x sha3 on previous hash
  5. perform 2x shake256 on previous hash
  6. check if the last 6 digits (hexadecimal) match currently acceptable suffixes (a 1 in ~5.6 million chance, since there are always 3 valid suffixes: 16^6/3 = 64^4/3)
    1. if not, generate a new public key and start from step 1.
  7. append version bits to the beginning of the final shake256 result; this is the hex address
  8. convert the 256 bit address to base64. This is the final address.
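The steps above can be sketched in Python's hashlib. Skein is not available in the standard library, so blake2s stands in for it here (an assumption, not the spec), and the shake step uses SHAKE-256 with a 32-byte output:

```python
import base64, hashlib

def _x2(fn, data: bytes) -> bytes:
    """Apply a hash function twice (each step is a double hash)."""
    return fn(fn(data))

def candidate_address(pubkey: bytes, version: bytes = b"\x01"):
    # Steps 1-5: the chained double hashes. blake2s stands in for
    # skein, which is not in Python's hashlib (an assumption).
    h = _x2(lambda d: hashlib.sha256(d).digest(), pubkey)
    h = _x2(lambda d: hashlib.blake2b(d, digest_size=32).digest(), h)
    h = _x2(lambda d: hashlib.blake2s(d).digest(), h)
    h = _x2(lambda d: hashlib.sha3_256(d).digest(), h)
    h = _x2(lambda d: hashlib.shake_256(d).digest(32), h)
    # Step 7: prepend version bits; step 8: encode as base64.
    addr = base64.b64encode(version + h).decode()
    return h, addr

def suffix_ok(digest: bytes, valid_suffixes: set) -> bool:
    # Step 6: the last 6 hex digits must match a valid suffix.
    return digest.hex()[-6:] in valid_suffixes

digest, addr = candidate_address(b"example RSA public key bytes")
print(len(digest), addr[:8])  # 32-byte digest, base64 address prefix
```

A wallet would loop: generate an RSA keypair, run `candidate_address`, and keep the key only if `suffix_ok` passes.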

RSA considerations

RSA signatures are malleable so messages being encrypted by them should be padded.

RSA calculation examples [9]

e = 1073741827 (roughly 1 billion)

e should be: [10] [11]

Prime. e can't share any factors with lambda(n) or be a factor of lambda(n), so a prime is ideal.

Small Hamming weight [12]. The common e is 65537 (1+2^16), which has a Hamming weight of 2. 2^16 (and every power of 2) has a Hamming weight of 1, but probably isn't used because it has many factors, whereas 1+2^16 is prime.

Be greater than 50,000 [13] and less than 32 bits [14] [15]. It also needs to be smaller than lambda(n).

262147 (2^18 + 3), 8388617 (2^23 + 9), 268435459 (2^28 + 3), 1073741827 (2^30 + 3), and 9007199254740997 (2^53 + 5) all have a Hamming weight (the count of 1 bits in binary) of 3 (1 worse than 65537) and are prime.

2147483659 (2^31 + 11) has a Hamming weight of 4.

I am selecting e to be 1073741827 (2^30 + 3) because it is below the 32 bit limit imposed by some software, is prime, has a low Hamming weight of 3, and is as large as possible to help avoid insufficient-padding attacks.
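The three stated criteria for this e can be verified directly (trial division is plenty fast for 32-bit candidates):

```python
def hamming_weight(n: int) -> int:
    """Number of 1 bits in the binary representation."""
    return bin(n).count("1")

def is_prime(n: int) -> bool:
    """Trial division - fine for 32-bit candidates like these."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

e = 2**30 + 3  # 1073741827
print(is_prime(e), hamming_weight(e), e < 2**32)  # True 3 True
```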


Safe and strong primes might have a benefit and avoid Pollard's p-1 algorithm [16], but may not be worth it at high bitlevels [17]; better safe than sorry.

Accumulators require rigid numbers: the product of 2 safe primes [18] [19].

Moving address requirements

We will have the last 4 digits of base64 addresses (the suffix) determined by the trimmed hash of 3 daily hashsnaps concatenated. So you concatenate 3 daily hashsnaps ordered from oldest to newest, hash them together, and trim off all but the last 4 digits. This gives about 17 million addresses to search to find one with the correct last 4 digits.

Each suffix is good for 72 hours for newly proposed addresses (even though a new suffix is generated daily). This means the 3 hashsnaps you concatenate and hash are the three newest daily hashsnaps you are aware of, and if you are doing a long proof you may want to start your attempts as soon as the daily hashsnap is published. However, if you just want to factor a large challenge so you can upload a lot of data in your transaction (say a picture or video), then you don't need any recipient cips, so the 72 hour window doesn't apply.
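Deriving the current suffix can be sketched as below; SHA-256 as the combining hash is an assumption, since the text does not fix one:

```python
import base64, hashlib

def current_suffix(daily_snaps: list) -> str:
    """Concatenate the 3 newest daily hashsnaps (oldest to newest),
    hash them, and keep the last 4 base64 characters of the result.
    SHA-256 as the combining hash is an assumption, not the spec."""
    digest = hashlib.sha256(b"".join(daily_snaps[-3:])).digest()
    return base64.b64encode(digest).decode().rstrip("=")[-4:]

snaps = [hashlib.sha256(bytes([d])).digest() for d in range(5)]
suffix = current_suffix(snaps)
print(len(suffix))  # 4 - one of ~64^4 (about 17 million) possibilities
```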


We could require that public key addresses have a new prefix or suffix every certain amount of time. So addresses could be required to start with xdgh..., for example; a month from now they could be required to start with kdne... These leading strings could be generated randomly via hashsnaps. One problem is that it would take a lot longer to generate addresses - though that shouldn't be too big a deal, since vanity address generators that can do this already exist. The big problem is collisions: the algorithm could easily generate the same prefix twice, so old addresses made with a certain prefix could be reused at a future date. This could be fixed by making the prefix longer, but then address generation time also increases significantly.

To implement this we would need to specify how often the address prefix changes. Let's say it's daily. Then, to test transaction validity, nodes would only need to scan back a day to make sure an address wasn't used before.

Can we tolerate address reuse that is limited to very long time periods apart? Is this good enough to really disincentivize address reuse? It might be, especially if we can combine it with another technique. Another thing we could do is keep a list of all prefixes and the hashsnaps they correspond to, so if we do get a repeat it could be rehashed. But then we are stuck maintaining a list of all prefixes - certainly a much smaller dataset than all addresses ever used. The problem with that, though, is you would eventually run out of prefixes. Let's say the addresses were numbers and the first 4 digits were the prefix. That means 10*10*10*10 prefixes would be possible, which would only last 10,000 days before all were used - about 30 years.

Example: I created a vanity address in several minutes, I believe, that had a prefix of 6 hexadecimal characters, which is 16^6 attempts (16.8 million attempts). If we reduced that to 5 characters it should take just a few seconds (about 1 million attempts); that would give us enough unique daily prefixes for about 2,800 years, which should be sufficient - and if we are still around we could increase the requirement to 6 digits at that time (or at any time, really).

A very fast CPU can do 1/2 million attempts per second; a bank of 8 fast GPU's can do 2.5 million attempts per second [20].

3 prime factor RSA

It exists [21], and this would allow us to use one of the primes as a timestamp. The problem is: how do we know that the other 2 primes weren't reused? Basically the same problem as the merkle tree proof.

Merkle tree

A merkle tree cannot help us create a public key that is provably new, because you can't sign for the top-level hash.

Notes (deprecated)

Merkle tree proofs should work to allow addresses to be generated using a recent hashsnap, so that we know an address you are sending a cip to is newly generated [22] [23]. You would take the most recent daily hashsnap and hash it with a brand new public key (that you have the private key for). Now you have a new cip address that did not exist before. In reality this proves the address is new, but it does not prove the public key is new, so it does not work for us.


Schnorr signatures should not be allowed because they can make fake public keys that can be signed for with old private keys. We want only new keys to be allowed with no fakes.

Notes (deprecated)

Schnorr signatures might allow timestamping keys [24]. 'Pay to contract' allows embedding an old public key with data to make a new public key that can be signed with the old private key. This is good for making cips linkable to a user but unlinkable to external observers of the lattice. However, by revealing the old public key, cips could be linked. Optional privacy doesn't really work, because it ruins fungibility for everyone when someone chooses to reveal a link on chain.

Schnorr signature lecture [25]

Good explanation of how exactly signatures work including schnorr [26]

Bitcoin wiki on schnorr [27]

Bip340 [28]

Signatures are 64 bytes and public keys are 32 bytes [29]


Nano (blocklattice)

Nano uses a block-lattice and DAG and since the words are similar it is good to compare and contrast it with actionlattice.

Transactions validate 2 others

This is about the only similarity between block-lattice and actionlattice. I wasn't aware this is what nano did prior to designing actionlattice, so this confirms it is a good choice and resembles what is done in nature. Actionlattice, however, will probably have each transaction verify 3 others while also checking 7 levels down - 3^7 transactions checked, which is around 2,000. In Nano, you really only check 2.

How it differs: in nano you only have to verify that those transactions are valid. In actionlattice you need to verify around 7 levels down (128 transactions with 2 bonds, 2187 with 3). If any transaction below you that you connected to is invalid, you risk your transaction reverting as well. So the miner of the transaction (which can be you) is incentivized to validate as far down as possible, because the validity of their reward depends on it. The person who made the transaction does not need to care very much, because if it does get removed it can just be added back, and there is no loss, theft, or double spending due to the design of actionlattice.

Two party transactions

In nano both parties have to sign a transaction. In actionlattice only the sender has to sign, just like standard blockchain. This is important because signing gives up security. Actionlattice goes above standard blockchain security and ensures an address "breaks" after it signs and cannot be reused, in order to preserve maximum security.

Individual blockchains

In Nano they achieve double spending protection by each address having its own blockchain. In actionlattice there is a global "blockchain", and we achieve double spending protection by having each address only able to hold 1 bit; once it is sent, the address can no longer be used (it is destroyed), and nodes simply reject the later duplicate transaction(s).

Double spending

Nano has representative staked node voting [30]





Transactor side - create "transaction block"

  1. "Sending Public Addresses" (SPA's) of each bit (cip) being sent.
  2. Signatures (proves private key ownership) for the addresses (SPAs) for each bit (cip) being sent.
  3. "Receiving Public Addresses" (RPA's) - One brand new address for each bit (cip) being received.
  4. Internal Message (IM) which can be anything, including encoding tokens and NFT's.
  5. Completed transaction signature (CTS) - Signs over the whole transaction using one (or more) of the SPA's done by the sender. This prevents man in the middle (MITM) attack.

Miner's side - "activation" of transaction block

  1. Proof of Proven Sieve (Props) - the completed transaction signature (CTS), the number to factor (NTF), two equal digit-length factors, and the nonce (which was concatenated with the CTS, peer blocks' NTFs, cipbase address, hashsnaps, and miner's message, then hashed to create the new NTF).

Or just 1 factor, to save space? To verify validity, multiplication might be faster than division, so listing both factors might be wise, since lots of nodes are going to be spending lots of time verifying. If two factors are given, the NTF would not need to be declared and recorded, because it can be derived by multiplying the factors together - but it might be worth the space to save the NTF, to make it more human-understandable and to avoid requiring caching.

  • Peer blocks - 3 peer block NTFs (PNTFs, pronounced 'pontiffs') within the last epoch (nodes may have set their own requirement for how old a transaction can be for new transactions to connect to it) that you confirm are correct (if you are wrong about their validity, your transaction may become voided). These three NTFs need to be part of the hash generating the NTF, in order to prevent a man in the middle attack where someone steals your NTF and factors and uses their own PMsigs and their own Msig to steal the reward.
  • Latest hashsnaps of each of the five levels, which act as a vote for the correct network snapshots. These votes are not needed for nodes to select the fully valid hashsnaps with the most weight, but they should help clear up any supposed discrepancies. They also act as a pretty accurate timestamp for the transactions this one links to (but not necessarily for itself, as it could have taken days to factor the NTF).
  • Bitbase address (aka cipbase address), which is the miner's public address (MPA) where the bitbase is sent.
  • Miner message (MM) - anything the miner wants to write, within say 100 characters (this needs to be limited because there are no other checks and balances on its size).

The reason why all of this data was hashed into the Number to Factor (NTF) is because the factor acts as a seal preventing a Man in the middle (MITM) attack that changes the cipbase address, miner message, or peer blocks.
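The sealing idea can be sketched as below. SHA-512 as the hashing step, the fixed-width field encodings, and the function names are all assumptions for illustration:

```python
import hashlib

def make_ntf(cts: bytes, peer_ntfs: list, cipbase: bytes,
             hashsnaps: list, miner_msg: bytes, nonce: int) -> int:
    """Hash the miner's fields plus a nonce into the number to factor,
    so the eventual factors seal them all against tampering.
    SHA-512 and the field encodings here are assumptions."""
    payload = (cts + b"".join(n.to_bytes(64, "big") for n in peer_ntfs)
               + cipbase + b"".join(hashsnaps) + miner_msg
               + nonce.to_bytes(8, "big"))
    return int.from_bytes(hashlib.sha512(payload).digest(), "big")

def verify_props(ntf: int, p: int, q: int) -> bool:
    """Cheap check by multiplication (no re-factoring needed), plus
    the equal digit-length requirement on the two factors."""
    return p * q == ntf and len(str(p)) == len(str(q))

print(verify_props(15, 3, 5), verify_props(15, 1, 15))  # True False
```

Changing any sealed field (say the cipbase address) changes the NTF, so the published factors no longer multiply to it and the tampering is detected.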

This miner can provide a proof of semiprime, which can be done on any transaction of any size for the same factoring cost. Solve a 396 bit (or higher) challenge and you get to pick where the 3 bonds (prior-art transactions) are directed.

Now another miner can also do the same exact challenge and add the same transaction to the lattice, except the challenge has to be harder - say a 400 bit number instead of 396. This means multiple miners can mine the same transaction and all copies sit in the lattice, but each transaction is a non-reversible change: by definition each address can only hold 1 bit, no more no less, and once it sends its bit (cip) it can never receive another - it is worn out. This solves the double spending problem. Each address is one and done: it either doesn't exist, holds a bit, or is spent. So a transaction from one specific address to another can only ever happen once and can never be reversed. We solve double spending by not allowing any address to make more than one transaction, ever. And we achieve this by allowing infinite addresses - which also gives privacy.

Node settings

  1. Max transaction size of transactions you save (not everyone has to save every transaction). Set to 0 for no pruning.
  2. Minimum factored-number bitlength you will accept (I would suggest 120 digits (396 bits) as of now). This should be increased over time, but not so fast as to orphan old transactions.

Node considerations

The actionlattice can be thought of like a database where each transaction is a new line. Just like a blockchain, an actionlattice is append-only.

However, nodes can prune their lattice to only include transactions under a certain size or that have a sufficient proof of semiprime. Say the Posi starts at 120 digit number factorization; when nodes move to require 130 digits, all the transactions proven at 120 digits need to be re-mined with a 130 digit proof. Every re-mining to a higher proof level produces a new cip, and in fact a higher value cip.

Since you don't want the transactions your transaction is connected to to disappear due to node pruning, you are going to want your transaction to be connected to small transactions that have a high proof level. Transactions closest to the surface will have the most up-to-date proof level, so they are good candidates to connect to. One nice thing is that if the transaction size is acceptable to most nodes now, chances are it will continue to be in the future: as hard drives grow with Moore's law, storage capacity increases, and nodes will likely allow bigger transactions over time because they can and it supports the network.


Number to Factor (NTF)

Proof of Semiprime/sieve (Posi)

See Digital Collectible Network#mine

Data standard bitlevels

See also data

These are the initial levels; each bitlevel within each of these groups can hold double the last. For example a 55 digit NTF can hold 64 bytes of transaction data, whereas a 60 digit number can hold 128 bytes. Each new bitlevel doubles the allowed size of the former, and every 6 bitlevels (a bepoch) it goes up 64x. The numbers to factor to achieve these storage capacities will go up by 5 digits every 2 years, but the amount of storage space allowed will not go up without a vote. For example, right now factoring a 145 digit number allows you to store 17 mb; in 2 years, storing the same 17 mb would require factoring a 150 digit number.

  1. Alpha 64 bytes
  2. Beta 4096 bytes (4.1 kb)
  3. Gamma 262144 bytes (262 kb)
  4. Delta 16777216 bytes (17 mb) - max 536870912 bytes (537 mb)
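The capacity rule above can be sketched as a small function (a sketch under the stated assumptions: 64 bytes at 55 digits, doubling every 5-digit bitlevel; the function name is illustrative):

```python
def ntf_capacity_bytes(digitlength, base_digits=55, base_bytes=64):
    """Transaction-data capacity for an NTF of the given decimal digit
    length: 64 bytes at 55 digits, doubling every 5-digit bitlevel,
    so every 6 bitlevels (a bepoch) multiplies capacity by 64."""
    bitlevels = (digitlength - base_digits) // 5
    return base_bytes * 2 ** bitlevels
```

By this rule a 145 digit NTF holds 64 * 2^18 = 16777216 bytes (17 mb), matching the Delta level above.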

32 bytes is the smallest known signature size [31]

Bitcoin public keys are 33 bytes compressed and public key hashes are 20 bytes [32]

How 20 byte bitcoin address is made [33], 25 byte total after checksum [34]

This max datasize does not automatically go up every 2 years like the digitlength, but only increases if a vote decides it should, perhaps every 7 years.

Time standard bitlevels (based on year 2022)

See also difficulty

  • 170 digits: 4 months
  • 165 digits: 2 months
  • 160 digits: 1 month
  • 155 digits: 2 weeks
  • 150 digits: 1 week
  • Delta: 145 digits: 60 hours
  • 140 digits: 30 hours
  • 135 digits: 15 hours
  • 130 digits: 8 hours
  • 125 digits: 4 hours
  • 120 digits: 2 hours
  • Gamma: 115 digits: 1 hour
  • 110 digits: 30 minutes
  • 105 digits: 15 minutes
  • 100 digits: 8 minutes
  • 95 digits: 4 minutes
  • 90 digits: 2 minutes
  • Beta: 85 digits: 1 minute
  • 80 digits: 30 seconds
  • 75 digits: 15 seconds
  • 70 digits: 8 seconds
  • 65 digits: 4 seconds
  • 60 digits: 2 seconds
  • Alpha: 55 digits: 1 second
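The table above follows a simple doubling pattern, roughly 1 second at 55 digits and twice as long for every 5 additional digits, with the table rounding to convenient units. A hypothetical helper:

```python
def approx_factor_seconds(digitlength):
    """Rough 2022 time to factor at a given digitlength: ~1 second at
    55 digits, doubling every 5 digits (the table rounds these values
    to friendly units such as minutes, hours, and weeks)."""
    return 2.0 ** ((digitlength - 55) / 5)
```

For example, 115 digits gives 2^12 = 4096 seconds, about 1 hour, matching the Gamma row above.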

This can all be on one lattice. [Figure: the four difficulty levels on one lattice; purple is the Delta difficulty, green is Gamma, yellow is Beta, red is Alpha.]


Targeting the 72 hour mark (and higher) is ideal, as it fairly safely makes the CPU the fastest hardware (while still benefitting from an auxiliary GPU). It also makes you wait 3 days before sending the payment, and waiting 3 days to think about a purchase helps weed out unnecessary spending.

The digitlength for each level goes up one rung every 2 years to compensate for Moore's law; the datasize limit, however, does not change without a vote, which could be held every 7 years or so.

These are not hard and fast rules, just rough categories of difficulty. Light nodes would probably save delta and gamma; fast, strong nodes would save all transactions including alpha. As the times to complete a given challenge become faster, lower bitlevel proofs can be pruned (and would need to be re-mined at a higher bitlevel to not disappear) and/or future proofs will need to be a higher bitlevel.

If it is in the lattice and is spent, it cannot be reused. If the network as a whole forgets it then it can be reused.

These bitlevels chosen by the miner do affect consensus, because transactions ruled invalid, or connected to invalid transactions, will be deleted. They would need to be reactivated (re-mined) to be added back to the lattice, losing all the proof they provided, so miners are incentivized to be especially careful where they connect. The digit-length of the NTF determines each transaction's voting power.


This is how long it takes to factor a base-2 brilliant number at each bitlevel. In reality it would take many attempts at factoring nearly base-2 brilliant numbers so it will probably take 10x as long as this to find one that fulfills the challenge, and the higher the bitlevel, likely the more attempts needed.

Prime factorization on optical (photonic) computers [35]

Quantum computers would need to use shor's algorithm.

CPU's are favored for challenges with over roughly 140 digits.

GPU's can ECM up to around 140 digits.

SNFS numbers should not be allowed [36]

Proof of proven sieve/semiprime (Props)

See also transaction activation


Use case

Cip's don't need to have value; they can simply be placeholders for tokens whose value is based on something else.

However we could (and probably should) allow a proposed transaction to use a little smart contract that pledges an amount of cips to the successful miner of the transaction. This could buy a fast and strong initial confirmation, which leads to faster subsequent confirmations and thus a faster completed transaction.


TKN shall be put at the beginning of the message section of the transaction


NFT shall be put at the beginning of the message section of the transaction


Each cip will have a proof level (like 70 digit or 155 digit or whatever) and a hashsnap it was included in, which is basically a timestamp. You can value them whatever you want to. None of them will be perfectly identical, and people can design schemes to value them. I am guessing that for example any 155 digit proof performed in the same year will likely have identical or similar value.

A cip is a digital collectible, not a currency per se. That said, collectibles are often used like currencies, such as coins or even dollar bills. Each coin or bill is unique.

Smart contracts

The first contract type is pledging coins to the miner of a transaction.


See also datasize

A certain amount of data can be stored in a transaction. Since as many transactions as desired can be added to the actionlattice in parallel, a size limitation does not affect the number of transactions per second the network can handle, which is in theory infinite, limited only by bandwidth and processing speed.

Maybe 1024 bits would be reasonable which is enough for 4x 256 bit signatures.

It also depends on how difficult the number to factor is. A number that takes 1 minute to factor on an average computer might allow less data to be stored than one that takes 1 hour on an average computer.


Actionlattice is very censorship resistant. However, let's say you live in a jurisdiction where holding or serving certain types of data is illegal. The only part of a transaction that can hold arbitrary data is the message portion. So you can prune the message out of the transactions you save and replace it with the hash of the message; this way no transactions are censored, but the message can be censored at a node's discretion. Pruning out the message and replacing it with the message hash can also be standard practice for light nodes.

As long as some node somewhere keeps the actual message, it can easily be appended into someone else's private actionlattice by matching the message with the message hash.
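Message pruning and restoration can be sketched like this (a sketch; the field names `message` and `message_hash` are hypothetical, and SHA-256 is assumed as the hash function):

```python
import hashlib

def prune_message(tx):
    """Replace a transaction's message with its hash, so the node
    stores no censorable content but the transaction stays intact."""
    pruned = dict(tx)
    pruned["message_hash"] = hashlib.sha256(tx["message"]).hexdigest()
    del pruned["message"]
    return pruned

def restore_message(pruned, message):
    """Re-attach a message obtained from another node, accepting it
    only if it matches the stored hash."""
    if hashlib.sha256(message).hexdigest() != pruned["message_hash"]:
        raise ValueError("message does not match stored hash")
    restored = dict(pruned)
    restored["message"] = message
    del restored["message_hash"]
    return restored
```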


The genesis, called the lattigenesis, requires number of bonds + 1 transactions (in the case of 2 bonds, a trinity) that cross reference (connect) to each other. Three transactions is the minimum genesis for a 2 bond system; more could be used, which would enlarge the surface that would need to be attacked, so a larger genesis is more resistant to attack. But using the minimum viable genesis (MVG) of a trinity for 2 bonds is preferable to keep "fake" transactions to a minimum. Genesis transactions should not have a cipbase (no new coins should be generated). Later transactions can simply be blank, with no cip movement, in order to collect cipbase cips.

The genesis seeds the crystallization of the lattice.

2 bond version gives a 3 transaction genesis

In the 3 bond case there would be a 4 transaction genesis.


See also time standard

Moore's law shows that every 5 more decimal digits doubles the difficulty [37] [38], or 17-18 bits [39], which may happen every 3 years. So the ease of factoring a 110 digit number today will be what it's like to mine a 115 digit number 3 years from now.
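That equivalence can be written as a one-liner (illustrative; assumes a steady 5 digits per 3 years):

```python
def equivalent_digitlength(digits_today, years_from_now):
    """Moore's-law equivalence: factoring reach gains ~5 decimal
    digits every ~3 years, so a challenge N years out feels like
    a smaller one does today."""
    return digits_today + 5 * years_from_now / 3
```

So factoring 110 digits today is like factoring 115 digits will be in 3 years.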

The nodes set the minimum difficulty they will accept, which should probably take at least 1 second to complete. This should be raised based on the minimum difficulty proof present in the network. Since miners get rewarded to re-mine old transactions at higher difficulty, the network auto-compensates for Moore's law. The result is that old rewards slowly lose value over time, at about the rate that a PC depreciates. That could be quite fast; value might decline with Moore's law for old cips. However you can keep mining fresh new cips with high value constantly. Digital Collectible Network doesn't have the value loss problem though, and neither does Fact0rn.

Value depreciation could be prevented by raising the digit-length of the NTF by 5 every 2.6 years (koomey's law) but then the network would no longer auto regulate itself and this central planning will eventually cause the network to fail when koomey's law deviates in doubling time and the network fails to adapt properly. I would rather trade value proposition for long term health of the network.

Previously I had thought of requiring the smaller of the two factors of the semiprime to be 40% (0.4x) of the bitlength, to mostly preclude GPU's and ECM asics while only requiring 1 GNFS sieve to activate the transaction. Now however (after contemplating Fact0rn) I think I will require both factors to have the same number of bits. This means a couple dozen GNFS runs will have to be done in order to activate the transaction. The reason for doing this is that if multiple people are working on the same transaction, there is a more random chance of who wins it. Also, we don't have to worry about GPU's being able to ECM farther or the like, and I don't ever want to have to change this setting.

Voting weight

voting weight = y = (2^(digitlength/15)/(12.7))*((2^(digitlength/5))/2048)

GPU ECM is 10x faster than CPU, which is compensated by the weighting constant.

There should also be weight based on how new the peer transactions are (the transactions your transaction is connecting to) in order to incentivize people to stay on the chain-tip as much as possible adding confirmations to new transactions instead of old ones. Perhaps a compounding 5% reduction for every extra day old the oldest peer transaction is that you are connecting to. So confirming a 1 day old transaction would yield you 95% of the reward, a 2 day old transaction would be 95%*95%=90.25% reward, 3 day old would be 85.74% etc. This also creates a price stabilization effect because if transactions slow down to a crawl and there are only a few transactions per month, reward for transactions would be smaller due to this factor and thus emission reduces so price can trend upwards.
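The compounding age discount described above can be sketched as follows (the 5% rate and one-day granularity are the text's example numbers):

```python
def aged_reward(base_reward, oldest_peer_age_days, daily_discount=0.05):
    """Reward after a compounding 5% reduction for each extra day of
    age of the oldest peer transaction being confirmed."""
    return base_reward * (1 - daily_discount) ** oldest_peer_age_days
```

For example, aged_reward(1.0, 2) gives 0.9025, matching the 90.25% figure above.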

Weighting function


Voting weight = y


digitlength = x

Initial Equation

f(x) = a (1 + r)^x

y = initial (1+proportion growth per digit added)^(number of digits added)

y = (2)^(x/5)


55 digitlength

y = (2)^(55/5) = 2048

56 digitlength

y = (2)^(56/5) = 2352.5

60 digitlength

y = 2^(60/5) = 4096

100 digitlength

y = 2^(100/5) = 1048576

Intermediate equation

Now we want to normalize based on the current smallest accepted digitlength.

So currently we accept 55 digit length minimum, which has a difficulty rating of 2048. Thus we divide everything by 2048 to find relative difficulty

Relative difficulty = difficulty/min difficulty

Relative voting weight = (2^(digitlength/5))/(2^(mindigitlength/5))

currently with 55 digitlength min, equation would be:

y = (2^(x/5))/2048

This shows that for every ~17 digitlength increase, we have 10x the voting weight.


55 digitlength

y = (2^(55/5))/2048 => 1

56 digitlength

y = (2^(56/5))/2048 => 1.15

60 digitlength

y = (2^(60/5))/2048 => 2

63 digitlength

y = (2^(63/5))/2048 => 3

65 digitlength

y = (2^(65/5))/2048 => 4

70 digitlength

y = (2^(70/5))/2048 => 8

72 digitlength

y = (2^(72/5))/2048 => 10.6

80 digitlength

y = (2^(80/5))/2048 => 32

100 digitlength

y = (2^(100/5))/2048 => 512

105 digitlength

y = (2^(105/5))/2048 => 1024

140 digitlength

y = (2^(140/5))/2048 => 131072

Final equation

For the final equation we need to calculate the weighting constant. This constant is created to compensate for more parallelizable methods of factoring for smaller numbers, like GPU's using ECM. Basically we want to double the weight every 15 digits (*), which is about 10x extra voting weight for every additional 50 digits, so using CPU's to factor larger numbers would be a better option (*) than using GPU's for lots of small numbers.

(*)This is subject to testing to make sure that cpu's get more voting weight than gpu's, but perhaps the combo can be fastest. When we graph the total voting weight on the actionlattice with respect to digitlength, we should see a reasonably flat, smooth, bell curve with its maximum centered around the 72 hour mark (currently around 146-147 digitlength) and incentives (voting weight) should be tuned to achieve this bell curve.

voting weight = (weighting constant)(2^(digitlength/5))/2048

weighting constant (c) = 2^(digitlength/15)/(2^(55/15))

weighting constant (c) = 2^(digitlength/15)/(12.6992084157)

voting weight = y = (2^(digitlength/15)/(12.7))*((2^(digitlength/5))/2048)



55 digitlength

y = (2^(55/15)/(12.7))*((2^(55/5))/2048) = 1

60 digitlength

y = (2^(60/15)/(12.7))*((2^(60/5))/2048) = 2.52

65 digitlength

y = (2^(65/15)/(12.7))*((2^(65/5))/2048) = 6.35

70 digitlength

y = (2^(70/15)/(12.7))*((2^(70/5))/2048) = 16

100 digitlength

y = (2^(100/15)/(12.7))*((2^(100/5))/2048) = 4096

105 digitlength

y = (2^(105/15)/(12.7))*((2^(105/5))/2048) = 10321

140 digitlength

y = (2^(140/15)/(12.7))*((2^(140/5))/2048) = 6658043
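The final equation and its worked examples above can be expressed in code (a sketch using the exact 2^(55/15) normalization instead of the rounded 12.7):

```python
def voting_weight(digitlength, min_digits=55):
    """Final voting weight: the base weight doubles every 5 digits
    (normalized to the minimum accepted digitlength), multiplied by
    a weighting constant that doubles every 15 digits."""
    constant = 2 ** (digitlength / 15) / 2 ** (min_digits / 15)
    relative = 2 ** (digitlength / 5) / 2 ** (min_digits / 5)
    return constant * relative
```

voting_weight(70) = 16 and voting_weight(100) = 4096, matching the worked examples.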


Each transaction's vote is weighted by the digit-length of the NTF.

Every 5 digits of length doubles the base weight, so a 105 digit NTF has twice the base weight of a 100 digit NTF.


Software design


Miner software will need:

  1. Find an unconfirmed transaction
  2. Create an activation block attached to the transaction
  3. Iterate a nonce and hash the block to generate random numbers
  4. Run ECM on candidate numbers and eliminate non-acceptable ones
  5. Run GNFS on the remaining candidates to find a base-2 brilliant number
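The nonce-and-hash steps might look like this (a sketch; a real miner would then feed each candidate into ECM and GNFS rather than stopping here, and the function name is illustrative):

```python
import hashlib

def candidate_ntfs(block_bytes, digits, count=5):
    """Iterate a nonce, hash the activation block, and map each hash
    into the desired decimal-digit range to get candidate numbers
    to factor."""
    lo, hi = 10 ** (digits - 1), 10 ** digits
    out = []
    nonce = 0
    while len(out) < count:
        h = hashlib.sha256(block_bytes + nonce.to_bytes(8, "big")).digest()
        n = lo + int.from_bytes(h, "big") % (hi - lo)
        if n % 2 == 1:  # even numbers can never be strong semiprimes
            out.append(n)
        nonce += 1
    return out
```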



Opsonization [40] AKA 99.9% attack hard fork

A fully successful opsonization attack causes a destructive hardfork in the network. But new genesi can be created and rebuild the network if it happens. These new genesi can later be reconnected together as the threat subsides.

Opsonization in this context means cover the entire surface of the actionlattice with fraudulent transactions in order to force new transactions to confirm fraudulent ones.

The surface area to volume ratio of a sphere is 3/R [41], so the surface area grows (as R^2) as the lattice grows. This means that as the actionlattice grows, so does the difficulty of performing the opsonization attack, since there is ever more surface to cover.
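The sphere model can be checked numerically (a sketch; it treats the lattice as a ball of unit-volume transactions):

```python
import math

def surface_transactions(total_transactions):
    """Model the lattice as a sphere of unit-volume transactions:
    SA/V = 3/R, so the exposed surface an opsonization attacker must
    cover grows as total^(2/3)."""
    r = (3 * total_transactions / (4 * math.pi)) ** (1 / 3)
    return 4 * math.pi * r ** 2
```

Growing the lattice 1000x grows the surface to be covered 100x, so the attack gets steadily more expensive as the lattice grows.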

The reason this is nicknamed the "99.9%" attack is that you would need to flood the lattice with enough transactions to cover the entire surface except for 1 single transaction. Since a new transaction needs to connect to 2 old transactions, you would have to connect to all other transactions in the lattice except one in order to force new transactions to confirm the fraudulent ones.

Now practically speaking, you might get less rigorous miners to confirm some of your fraudulent transactions if they aren't willing to check the validity of everything. A miner should always check the validity of the transactions it is connecting to, but some may decide not to; miners that aren't rigorous are effectively "bad" and part of the attack. "Good" miners always check validity, and only a few good miners are needed to nullify an attack and let the network recover.

If the 99.9% attack only achieves, say, 99.8% (fudging numbers for illustration), then new transactions can find the "hole" and connect only to the good transactions. This creates what amounts to a "genetic bottleneck": the lattice grows out through this opening and slowly moves out of the prison created by the opsonization attack.

Even in the case of a successful opsonization attack that totally nullifies the surface with fraudulent transactions, someone can create a new genesis (hardfork) by creating 3 crosslinking transactions and all the old good transactions can be re-mined onto this new genesis. Preferably many of these new genesi would be created so that the attacker would have to attack them all at once.

Tumor attack soft fork

A tumor attack causes a benign softfork in the network.

A tumor attack is similar to a traditional 51% attack, but it does not cause a hardfork like it does in a blockchain. A bad miner starts confirming their own transactions with more fraudulent transactions. Nodes would need to figure this out and excise the tumor of fraudulent transactions (and any transactions connected above them) from their ledger; this way the fraudulent confirmations don't matter. However, if this attack is done quickly after a transaction, some nodes might be fooled into saying the transaction was confirmed if they were not checking validity. Nodes must check validity before adding transactions to their ledger.


An ASIC has been proposed, called a SHARK [42]. The SHARK is a hardware arrangement that works for only one specific bitlength of number, so if one were created it could occupy only a small and temporary niche in our network. The estimated cost is 200 million per unit (this isn't just a one-time development cost), and it has never been attempted. At current consumer technology levels the SHARK would achieve only a 2-3x speedup over consumer CPU's, and it works for one specific bitlength without physical reconfiguration; in the SHARK paper it was designed for 1024 bit numbers. More discussion here: [43]

There is a proposed ECM ASIC, which means we probably need to go up to 0.47x of the bitlength for the smallest factor to avoid these [44], but currently it only works up to 200 bits, which is around a 60 digit number.


GPU's can ECM a number looking for factors up to around 0.37x of the bitlength of the number in the same time it takes a CPU to use GNFS to find all factors. What this means is that the factor-strength requirement should be that there are no factors smaller than 0.37x of the bitlength, preferably 0.4x. The farther we go above the roughly 0.32x requirement, the more "useless" sieves we will have to do, from finding decently strong semiprimes that are not quite strong enough. Due to the risk of ECM asics, though, my current thought is to require semistrong semiprimes with no factor smaller than 0.47x of the bitlength. If you put the requirement at around 0.37x you can balance GPU's with CPU's, if you want something like that; otherwise, at around 0.4x and above, a GPU can ECM up to 0.34-0.37x and then pass the number to the CPU to do the GNFS sieve and find the factors.
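The factor-balance check is simple to state in code (a sketch; 0.47 is the ratio discussed above, and the function name is illustrative):

```python
def passes_factor_check(p, q, min_ratio=0.47):
    """Accept a semiprime n = p*q only if the smaller factor has at
    least min_ratio of n's bitlength, pushing the work out of ECM
    range and into GNFS territory."""
    n = p * q
    return min(p, q).bit_length() >= min_ratio * n.bit_length()
```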

Quantum computers

Quantum computers can do 2 things against something like actionlattice. First they can potentially crack keys, reversing the private key from the public key. Such a process would likely use Grover's algorithm, which isn't exceptionally fast, so that provides some security. Also we can use quantum resistant addresses, which is no problem (although it would bloat transaction size, especially for us).

The next attack is unique to GNFS crypto: Shor's algorithm, which is very fast at factoring large numbers. The main defense we have against it is the democratization of mining. One quantum computer cannot factor hundreds or thousands of numbers in parallel, so it couldn't attack the network while other small miners are also confirming transactions. Miners can even mine their transactions offline before they propose them to the network, further preventing quantum computers from being the only ones to win cips. The best part about actionlattice is that it allows quantum computers to mine but by design prevents them from taking over. For a quantum computer to factor a 150 digit (496 bit) number would take 994 clean logical qubits. Shor's algorithm as of 2020 could still only factor 2 digit numbers [45]. Quantum computers have been around for 54 years now [46] and certainly don't have over 54 clean logical qubits (qubits that can run Shor's algorithm). TEEF's law states that at best 1 logical qubit will be added per year, since adding one clean logical qubit is probably twice as hard as the last. This places us at least a couple hundred years before quantum computers can do what a Ryzen processor can do today. And in a couple hundred years Ryzens will be orders of magnitude more capable than they are today.

Classical computers gain about 5 bits in factoring ability per year. Every clean logical qubit added to a quantum computer gains it 1/2 of a bit. That means for quantum computers to keep up with classical computers for factoring, they would need to gain 10 clean logical qubits per year. According to the 50 year history of quantum computers, that has never happened nor ever come close. If anything we seem to be on track for 1 clean logical qubit every 3 years which means quantum computers will progress 30 times slower than classical computers when it comes to factoring large numbers. If this is true, quantum computers will literally never be a threat to GNFS or RSA.


The greatest common divisor (GCD) can be computed reasonably quickly (seconds to minutes) on a candidate number to see if it is of a special form where SNFS can be run faster than GNFS [47] [48]. The only question is how common these numbers are, and whether looking for SNFS-able numbers to bypass GNFS would be viable. One workaround is to have nodes simply reject any special forms, which can be checked fairly easily - and known even faster after the number is factored (since the candidate 'greatest common divisor' can then simply be checked against the factors).

Node collusion

Anyone can be a node. The people who want to know if a transaction is confirmed would either rely on their own node, or a trusted node to see if their transaction was confirmed. Eventually over time the nodes should reach consensus on the ledger, as everything can be verified and nodes that condone fraudulent transactions would be eliminated from the network of nodes.

Types of invalid transactions

Sending cips that don't exist

Cips are all created in a cipbase transaction. So when adding a new transaction to the ledger, a node must validate that a cipbase transaction exists that gave a cip to the address now trying to send it. Maintaining a UTXO set may make this process quicker. It will take a long time to check that a huge transaction sending, say, 1000 cips all checks out, and this is why nodes can rightfully limit the transaction size they are willing to save.

Duplicate transaction

Duplicate transactions are allowed as long as they contain a different (usually higher) bitlevel proof. If they have the same bitlevel proof, then the latter one is discarded by the node and should not be built on by other transactions. This is also dis-incentivized because the losing cipbase reward would be lost forever, along with the work done.

Double spend

The same cipbase addresses as another transaction are sent to different addresses, attempting to duplicate the original coins. New transactions vote for which they think happened first; the one with the most votes wins, and the other loses and is deleted along with all the transactions that connected to (voted for) the bad transaction. So miners are incentivized to vote correctly on which is valid and which is not.

Double mine

The same transaction block was mined twice with two different proofs. This would never be done intentionally, but we still face the possibility that it happens. New transactions vote for which one happened first by bonding to them. Once there is consensus, the loser and everyone that bonded to it are deleted. This incentivizes new transactions to vote correctly about which is valid.

Non-standard bitlevel proof

The difficulty aka 'bitlevel proof' is standardized to certain 'activation levels'. These are 5 digit levels which is around every 17-18 bits.

If the miner proposes an NTF that is not at a multiple-of-5 digit level, it should be discarded, or the reward rounded down to the nearest accepted bitlevel and thus standard cip type (aka cip397).
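The rounding rule can be sketched like this (illustrative; assumes activation levels at 55, 60, 65, ... digits as in the time standard above):

```python
def round_to_activation_level(digitlength, base=55, step=5):
    """Round an NTF digitlength down to the nearest accepted 5-digit
    activation level for reward purposes."""
    if digitlength < base:
        raise ValueError("below the minimum accepted digitlength")
    return base + (digitlength - base) // step * step
```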

Invalid signatures or proof of semiprime

Basic stuff here, make sure the signatures match the addresses, make sure the proof of semiprime is valid, etc.

Connected to an invalid transaction

There is a reason why each transaction can only be connected to two others and not more: 2 gives good functionality without overcomplicating how many transactions must be checked. Each node will want to check validity several layers down from the transaction, to make sure the transactions it is confirming are valid. If you go down 1 level you verify the two transactions it is confirming are valid; two levels down you verify 4 transactions; three levels verifies 8 transactions, and so on, so each node needs to verify 2^n transactions for every new transaction, where n is the number of levels you go down. Nodes might default to n=7, which is 128 transactions. Lighter nodes can set this lower, but each node should be transparent about its setting so people connecting to it, or merchants using its services, know how sure they are. If any transaction is invalid, all transactions connected to it are also rendered invalid, removed from the node's ledger, and no longer exist.
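The recursive check can be sketched like this (the `get_peers` and `is_valid` callbacks are hypothetical stand-ins for a node's ledger lookups):

```python
def verify_down(tx, get_peers, is_valid, levels=7):
    """Check the 2 bonded peers of a transaction, their peers, and so
    on `levels` deep: 2^n transactions at the deepest level."""
    if levels == 0:
        return True
    return all(
        is_valid(peer) and verify_down(peer, get_peers, is_valid, levels - 1)
        for peer in get_peers(tx)
    )
```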

Double spend


See attacks and address generation

Actionlattice is specified and designed to be fail-safe and secure by default in every way possible.

  1. Transaction malleability
    1. Transaction malleability in bitcoin was because the signature did not cover the entire transaction [49]. It was just a hash which was not secure as transaction data could be changed by a miner.
  2. Automated encrypted private key backups, organized in folders by date
    1. Everytime you generate a new address a new backup is made
    2. Everytime you exit the app or shut down the computer a new backup is made.
  3. Address reuse
    1. We strictly enforce no address reuse.


Basically everything written is upsides so upsides don't need a section.

  1. Fungibility: The main downside I can see so far is that the value of certain bitlength cips will decrease over time (Edit: we actually do timestamp now, called hashsnaps), perhaps at the rate of Moore's law, halving in value every 3 years; basically the value will depreciate the way a PC's value does. I guess this kind of makes sense, because a CPU cycle today is as good as half a CPU cycle 3 years from now. So perhaps this is the natural order of things. If we timestamped the transactions and increased the digit-length requirement of the NTF by 5 every 2.6 years, then 1 cip = 1 cip forever, but I tend not to want a rigid difficulty curve, which would eventually break. I would rather sacrifice value proposition for flexibility, so that the network can act as a sensor of Koomey's law rather than a subject of it. Also I would rather not rely on arbitrary timestamps.
    • Another way to improve this is to have different genesi lattices, Alpha for low bitlength proof, Beta for medium bitlength proof, Gamma for high bitlength proof. so a gamma cip would always hold the same value, as would a beta or alpha cip. The downside to this solution is there isn't infinitely variable bitlength that can be rewarded so we really wouldn't know how high a proof level could be without trying it. So in this example alpha might start at 80 digit proof, then next year might be 82 digit proof, next year 84 digit proof etc. So one alpha cip will always equal 1 alpha cip. Problem is we don't know how fast we can raise that bitlevel proof requirement (which is set by each individual node) since there is no incentive for miners to go above and beyond.
  2. Consensus: See also consensus. It is a rolling consensus instead of a block based consensus. What this means is the nodes can watch a transaction gain more confirmations in real time (while watching for double spends) to decide when a transaction is complete, instead of waiting for arbitrary block-times to confirm it (like a blockchain).
  3. Size: Having no transaction size limit is a downside because different nodes will have different requirements, but GRIN also uses a similar method for node settings which is more democratic and less centrally planned.


Inspiration for this idea came mostly from my work on CollectBit and Digital collectible network. I hit a wall not knowing what to do next. I started shifting into ASIC resistance by hashing the entire blockchain like in blockvault (which is still a good concept for saving useful data), and later making my own algorithm, basically forking Yescrypt and/or RandomX and making the memory requirement scale with Moore's law to make ASICs prohibitively expensive to keep redesigning. I was also interested in proportional reward from staticcoins like Ergon. But always in the back of my mind I knew GNFS was the holy grail for ASIC resistance, though it can't really work in a 1D blockchain.

Then learning about Kaspa and their blockDAG got me thinking. I noticed their system was basically a 2D blockchain and can reward people for coming up with solutions at the same time as others. I thought the logical conclusion would be for every transaction to be its own block, but I was unsure how that could work scalably and why it would be important to do. After a day or so of thinking I figured I could make transactions like water molecules that bond to other transactions, and these bonds confirm previous transactions. Only then did I realize that would make it a 3D blockchain. This 3D blockchain design, actionlattice, also provides a proportional reward for miners, which is awesome. It's funny and fitting that all the limitations a 1D blockchain has (51% attack, unfair reward, mining pools, ASICs, etc.) can be overcome by moving into 3D. We do live in a 3D world after all.

Doing further research on the Mersenne forum, I also ran into the Fact0rn post about using factoring in a 1D blockchain. I explained to them in their discord and here [50] why it wouldn't work as designed, and offered my help and potential solutions. That said, their requirement of "strong semiprimes" was actually an improvement for a 1D blockchain over my "semistrong semiprimes", and realizing why their solution was better led to my advice on how to make it better still (in the link above).

I chose semistrong semiprimes because in my collectbit database people can submit individual proofs to mine, and I wanted every GNFS sieve, after ECM, to yield a solution - therefore proving GNFS was done but not requiring extra effort after that.
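The strong/semistrong distinction above can be sketched in code. This is a hedged illustration: "strong" (base-2 brilliant) means both prime factors have the same bitlength, as described later in this page; the exact looser bound used here for "semistrong" is my assumption for illustration, not a definition from the text.

```python
# Illustrative sketch: classify a semiprime n = p*q by factor bitlengths.
# "Strong" (base-2 brilliant): p and q have equal bitlength, so each is
# roughly half the bitlength of n. The "semistrong" threshold of a
# couple of bits is an assumption for illustration only.

def classify_semiprime(p: int, q: int) -> str:
    bp, bq = p.bit_length(), q.bit_length()
    if bp == bq:
        return "strong"          # base-2 brilliant
    if abs(bp - bq) <= 2:        # illustrative looser bound
        return "semistrong"
    return "weak"

# 101 and 113 are both 7-bit primes, so 101*113 is a strong semiprime
print(classify_semiprime(101, 113))  # strong
```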

I still think a 3D blockchain is the best current implementation for GNFS, solely because multiple people can submit proofs at the same time and even slightly back in time (allowing mining offline). The 3D method will also inherently scale to far greater transactions per second (TPS) than a 1D or 2D design can achieve.

Prior art


Anders 2014 proposal for integer factorization proof of work [51]

Anders' post "Block timestamps in bitcoin can be subject to attack" [52] [53]: bitcoin can enter a time warp, creating the next dark age where large chunks of history are forgotten. PoH may be a potential solution. Actionlattice doesn't need to rely on timestamps to decide the "earliest transaction" because every unit can only be sent once, so any version of that transaction is valid.

Anders also came up with unique coin IDs, similar to the actionlattice idea, to prevent double spending [54]


10-21-2017 Spork idea [55], the seed idea that grew over the years into the digital collectible network, collectbit, and finally actionlattice.

2017 proposal [56] ~NatureHacker

Original mersennes proposal [57]

Original proposal to Monero [58]

Toppling the blockchain [59]

Bitcoin vs DCC [60]

Digital collectible network#mine



Fact0rn came up with the requirement of base-2 brilliant numbers, which increases the randomness of winning a challenge and democratizes a 1D (traditional) blockchain slightly. However, this only allows up to a maximum of a couple dozen miners/mining pools while staying fair. Right now they adjust difficulty via the bitlength of the number. What they need to do to create fairness is let the difficulty be set by Moore's law (gaining 5 decimal digits, or 17-18 bits, every 2-3 years), and then fine-tune difficulty by requiring a certain number of "leading 9's" [61] on one or both factors, keeping the blocktime at 20 minutes as the number of miners rises or falls.
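The two-level difficulty scheme suggested above can be sketched as follows. This is a hedged sketch under stated assumptions: the base digit count and the exact 5-digits-per-2.5-years rate are illustrative parameters, not values fixed by any protocol.

```python
# Sketch of a two-level difficulty scheme: coarse difficulty grows
# with Moore's law, fine difficulty counts "leading 9's" on a factor.
# base_digits and the growth rate are illustrative assumptions.

def target_digits(years_since_launch: float, base_digits: int = 120) -> int:
    # Coarse: gain ~5 decimal digits (~17-18 bits) every ~2.5 years.
    return base_digits + int(5 * years_since_launch / 2.5)

def leading_nines(factor: int) -> int:
    # Fine: count leading 9's in the decimal expansion of a factor;
    # requiring more leading 9's raises difficulty without changing size.
    s = str(factor)
    return len(s) - len(s.lstrip("9"))

print(target_digits(5.0))    # 130
print(leading_nines(99871))  # 2
```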

Fact0rn integer factorization proof of work [62] [63] [64]; initial commit to merge bitcoin code [65]

Fact0rn whitepaper [66]: it takes them about 200 candidate numbers to find a valid semiprime (one with a factor of exactly half the bitlength), about 1 in 20 of reasonably strong semiprimes. Combined with my finding that 1 in 300 numbers is a semiprime, this means roughly 1 in 6,000 numbers is a strong (base-2 brilliant) semiprime.
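The semiprime-density figure can be checked empirically. A hedged caveat: the 1-in-300 and 1-in-6,000 figures above apply at cryptographic sizes; a small range like the one in this sketch is much denser in semiprimes, so the count here only illustrates the counting method, not the quoted ratio.

```python
# Count semiprimes (exactly two prime factors, with multiplicity)
# in a small range by trial division. Small ranges are far denser
# in semiprimes than the cryptographic sizes discussed in the text.

def factor_count(n: int) -> int:
    # Number of prime factors with multiplicity (big Omega).
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    return count + (1 if n > 1 else 0)

lo, hi = 10_000, 20_000
semis = [n for n in range(lo, hi) if factor_count(n) == 2]
print(f"{len(semis)} semiprimes in [{lo}, {hi}), about 1 in {(hi - lo) // len(semis)}")
```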


See nano comparisons

Good rundown of nano [67]

Nano shares almost nothing with actionlattice besides the "lattice" name; nano calls theirs a blocklattice. However, understanding how it works and the design decisions made can help in understanding the benefits of actionlattice.


Youtube masters in cryptology [68]

Discrete logs are hard, just like integer factorization [69]

Matrix-vector multiplication is the slowest step and is used extensively in GNFS (NFS) [70]; it is also the basis of optical proof of work, which aims to optimize capex/opex [71]
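To make the bottleneck concrete: the linear-algebra step of GNFS boils down to repeated sparse matrix-vector products over GF(2). A minimal sketch (toy dimensions, illustrative storage format):

```python
# Sparse matrix-vector product over GF(2), the core operation of the
# GNFS linear-algebra step. Each row is stored as the list of column
# indices that hold a 1; the vector is a list of 0/1 values.

def matvec_gf2(rows, v):
    # XOR the selected vector entries: sum mod 2 per row.
    return [sum(v[j] for j in row) % 2 for row in rows]

rows = [[0, 2], [1], [0, 1, 2]]  # 3x3 sparse 0/1 matrix
v = [1, 1, 0]
print(matvec_gf2(rows, v))  # [1, 1, 0]
```

Because each row touches only a few entries, the work is dominated by memory access rather than arithmetic, which is why this step is slow on general hardware.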

How sharing blocks/data happens [72] libtorrent idea [73]

Blockchain use cases, perhaps we can overlap or improve [74]

Bitcoin network, seems to be somewhat restful [75]

One-way accumulators potentially usable in timestamping [76]. Pretty much, if we could have an RSA algorithm that accepted 3 large primes instead of 2, then one of the primes could be publicly known: the timestamp. Actually this is no different than a merkle tree proof, and we can't be sure they didn't reuse the other 2 primes.

If possible, prove that every address is randomly generated so you can never send to the same address twice. Merkle tree proofs should work [77] [78]; homomorphic encryption and opentimestamps could be possible solutions [79]
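A minimal Merkle inclusion proof, as mentioned above, can be built with nothing but hashlib. This is a hedged sketch of the general technique: the function names and the address-set framing are illustrative, not from any actionlattice implementation.

```python
# Minimal Merkle inclusion-proof sketch, illustrating how a wallet
# might prove an address belongs to a committed set of one-time
# addresses without revealing the whole set. Illustrative only.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate last node on odd-width levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof) -> bool:
    node = h(leaf)
    for sibling, leaf_is_left in proof:
        node = h(node + sibling) if leaf_is_left else h(sibling + node)
    return node == root

addrs = [b"addr0", b"addr1", b"addr2", b"addr3"]
root = merkle_root(addrs)
print(verify(root, b"addr2", merkle_proof(addrs, 2)))  # True
```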

Understanding encryption and signatures [80]

Bitcoin op_return is 80 bytes but nodes can set their own limits [81]

Anders Rule 30 as a hash function [82]

Anders tail emission [83]

Anders bitcoin price will stabilize over time due to arbitrage [84]

Strong semiprimes called base-2 brilliant numbers [85]

Bitlattice tries to go 5D [86] why??

Matrix-vector multiplication on a GPU [87]; oPoW also uses this.

ECM depth on 150 digit number [88]

Resuming factoring [89]

Simple explanation of GNFS [90] [91]

What length numbers do people ECM the most [92]

Lattice sieving on a GPU [93]

Probably should be coded as a RESTful [94] API in Python, Java, or Node [95], probably with Qt [96] (choose XML or JSON [97]). Go and especially Rust are good too [98]
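A RESTful JSON node interface can be prototyped with the Python standard library alone. This is a hedged sketch: the `/tx` endpoint and its response shape are illustrative assumptions, not a defined actionlattice API.

```python
# Minimal RESTful JSON endpoint sketch using only the stdlib.
# The /tx endpoint and {"pending": []} payload are illustrative.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class NodeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/tx":
            body = json.dumps({"pending": []}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # silence per-request logging for this sketch

# To run: HTTPServer(("127.0.0.1", 8080), NodeHandler).serve_forever()
```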

Primality tests [99] [100] [101]
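On primality tests: a standard deterministic Miller-Rabin variant covers anything that fits in 64 bits, since testing the first 12 prime bases (2 through 37) is known to be correct for all n below about 3.3e24.

```python
# Deterministic Miller-Rabin for 64-bit integers. The base set
# 2..37 is known sufficient for all n below ~3.3e24.

BASES = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    for p in BASES:
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in BASES:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is composite
    return True

print(is_prime(2**61 - 1))  # True (a Mersenne prime)
print(is_prime(2**61 + 1))  # False (divisible by 3)
```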

Name ideas

Surizon - suri

Crumbs crums cookies cukies