📜 Rollups on Solana and EIP-4844
Looking into the inner workings and implementations of the EIP and blob transactions
When a global virtual machine runs the same computation on thousands of machines, it is hard to be as fast as a single server computing that data without any verification from outside inspectors. This has been one of Ethereum's biggest drawbacks: its low TPS, which has given rise to an entire ecosystem working on letting Ethereum process more transactions. Blockchains cap the amount of data and computation that can fit into a single block to prevent denial-of-service attacks by malicious actors trying to halt the network. Because Ethereum has limited blockspace and many parties want their transactions included in the latest block, the fees paid to validators/miners for inclusion keep getting bid higher.

Ethereum is a single global fee market where all parties compete for their transactions to be included in the block, so activity on any one program affects the gas paid by all users. This is unlike Solana, which has parallel local fee markets where every program hotspot is contained, letting users making unrelated transactions carry on.
This is achieved by Solana's state management system, where every transaction has to specify beforehand all the programs and accounts it is going to touch. Ethereum's global fee market has led to fee spikes into the thousands of dollars whenever there was a big hyped event like an NFT mint or a token airdrop, rendering the chain unusable for an average person who just wanted to transfer some tokens and could not pay a hundred dollars for the transfer. Sharding will solve this only to a small extent, since even after sharding Ethereum won't have isolated states and separate fee markets for different programs.
Rollups are meant to solve this problem.
What are Rollups?
Rollups are off-chain scaling solutions: they execute and verify transactions off the mainnet, then post the data and a proof that maintains the computational integrity of those transactions. To make this efficient, rollups batch many transactions and submit a single proof that validates them on the mainnet. There are two types of rollups ->
Optimistic rollups: they optimistically assume that every transaction is correct, but there is a time window in which anyone can object to the integrity of a transaction by producing a fraud proof. A fraud proof usually consists of the transaction itself plus some Merkle tree data demonstrating its invalidity.
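The Merkle data in a fraud proof boils down to an inclusion check. Here is a minimal sketch (my own toy construction, not any specific rollup's proof format) of verifying that a disputed transaction really sits under a root posted on-chain:

```python
import hashlib

# Toy sketch: given a leaf, its sibling hashes, and the on-chain root,
# anyone can recheck that the disputed transaction is in the batch.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Build a binary Merkle tree and return its root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_inclusion(leaf, index, siblings, root):
    """Walk from the leaf to the root using the sibling hashes."""
    node = h(leaf)
    for sib in siblings:
        node = h(sib + node) if index % 2 else h(node + sib)
        index //= 2
    return node == root

txs = [b"tx0", b"tx1", b"tx2", b"tx3"]
root = merkle_root(txs)
# Proof for tx2 (index 2): its sibling hash, then the hash of the left pair.
proof = [h(b"tx3"), h(h(b"tx0") + h(b"tx1"))]
print(verify_inclusion(b"tx2", 2, proof, root))  # True
```

A challenger replays the disputed transaction and uses exactly this kind of path to pin it (and the resulting state) to the committed root.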
It is a pretty good system for the most part, but because the challenge window is usually around ~1 week, transactions face long wait times before finality. Optimistic rollups also get a lot of things right, check out this post.

Zero-knowledge rollups: ZK rollups post a cryptographic proof on the mainnet attesting to the execution of the transactions. The type of ZK rollup depends on the kind of zero-knowledge proof it produces; I won't go into much detail as there is an excellent blog post on the differences between ZK proofs ->
SNARKs (Succinct Non-Interactive Argument of Knowledge) -> non-interactive means that once the proof is generated the parties don't need to interact; its validity can be checked by anyone by running a verification algorithm.
STARKs (Scalable Transparent Argument of Knowledge) -> STARK proofs are bigger than SNARK proofs but cheaper to produce, with prover time of roughly \(prover\_time = O(N \cdot \text{poly-log}(N))\), while verification time grows only poly-logarithmically in \(N\). So as the size of the batch goes up, the verification cost barely goes up, allowing a more scalable verification system.
The number of transactions rollups can process is limited by their data availability layer and consensus layer because, at the end of the day, they have to post the data and proofs on the mainnet. Decoupling the two layers is what Celestia is working on: sovereign rollups that are not attached to any mainnet and are free to choose and move between different data availability layers, whereas L2 rollups inherit the security of Ethereum. To fix the issue of DA (data availability) on Ethereum ahead of full sharding, the proposal EIP-4844 was introduced.
What are EIP-4844 and Blob Transactions?
EIP-4844 is proto-danksharding: it implements the scaffolding needed for danksharding to happen on Ethereum, and its biggest feature is a new transaction type called the blob-carrying transaction. These transactions carry large data blobs, on the order of >100 KB each. Because blobs have a large data limit, rollups can post bigger batches of transactions on the mainnet. The blob data is not accessible to EVM execution, so validators do not have to process it themselves, and it only needs to be retained for a short period.
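To get a feel for the numbers, a quick back-of-the-envelope sketch. The blob size comes from the EIP-4844 spec (4096 field elements of 32 bytes each); the per-transaction figure is purely illustrative:

```python
# Blob size per the EIP-4844 spec: 4096 field elements of 32 bytes each.
FIELD_ELEMENTS_PER_BLOB = 4096
BYTES_PER_FIELD_ELEMENT = 32

blob_bytes = FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT
print(blob_bytes)        # 131072 bytes, i.e. 128 KiB per blob

# If a rollup compresses a transfer down to ~12 bytes (an illustrative
# assumption, not a spec number), one blob carries roughly:
print(blob_bytes // 12)  # 10922 transactions
```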
Why Rollups on Solana?
Solana was one of the pioneers of parallel execution in blockchains, which unlocked a whole set of new features that make it a super-performant blockchain. Solana's Sealevel runtime schedules non-conflicting transactions across threads and isolates the states of different programs, so normal users don't face fee spikes caused by unrelated hotspots.
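A minimal sketch of the scheduling idea (my own toy model, not Sealevel's actual implementation): because transactions declare their account sets up front, a scheduler can tell before execution which ones conflict.

```python
# Toy model: each transaction is (read_set, write_set) of account names.
# Non-conflicting transactions can run in parallel threads.

def conflicts(tx_a, tx_b):
    """Two transactions conflict if either writes an account the other touches."""
    a_reads, a_writes = tx_a
    b_reads, b_writes = tx_b
    return bool(a_writes & (b_reads | b_writes)) or \
           bool(b_writes & (a_reads | a_writes))

# A hyped NFT mint hammering its own state...
mint_tx = ({"mint_program"}, {"mint_state"})
# ...doesn't touch the accounts of an ordinary token transfer:
transfer_tx = ({"token_program"}, {"alice", "bob"})

print(conflicts(mint_tx, transfer_tx))  # False -> run in parallel, fees stay local
print(conflicts(mint_tx, mint_tx))      # True  -> must be serialized
```

This declared-accounts model is exactly why a mint-day hotspot stays contained instead of repricing the whole chain.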
If transactions on Solana are already very cheap, then why rollups? The Solana Virtual Machine (SVM) is a highly performant virtual machine; implemented on other chains, it could have a great impact. Eclipse takes this further by building an SVM rollup that could work as a settlement layer for other L3s, a bit like the fractal scaling of Starknet but with the performance boost of Solana. Eclipse hopes to build an SVM settlement layer whose consensus can live on any chain chosen by the developers of the L3. As of now, Eclipse rollups are optimistic rollups, meaning they use fraud proofs, but they are also building ZK rollups with RISC Zero. One of the biggest cases for rollups is that they allow a degree of composability and developer freedom that a monolithic chain does not. But there is a hurdle: the SVM has a transaction data limit of ~1.3 KB, and data has to be written into an account slowly, which would be an obstacle for posting large batches of transactions for rollups.
That’s where SIMD-0019 comes in!
Since rollup data does not need to be written into the state itself, only verified for availability, blob transactions can also be applied to Solana. These data blobs carry the data plus KZG commitments as metadata: the data is posted in a sidecar, only the KZG commitments are written on-chain, and the blob itself needs to be retained only for a short time for data availability. Later, the data can be verified against the KZG commitments, which takes far less time.
What is KZG? KZG is a polynomial commitment scheme: whenever we make a commitment c to a polynomial p, the commitment is binding, meaning for a given value of c there is only one p. The cool thing about commitment schemes is that, given the commitment, we can verify that an evaluation belongs to the polynomial without ever revealing the polynomial itself: we can prove that p(m) = n for some point m while keeping the polynomial private.
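A sketch of the algebra behind an opening, leaving out the elliptic-curve pairing layer that real KZG uses to actually hide the polynomial: proving p(a) = v reduces to showing that (X − a) divides p(X) − v with zero remainder, which holds exactly when v really equals p(a).

```python
# Toy sketch of a KZG-style opening over a small prime field (illustrative
# modulus; real KZG wraps this identity in elliptic-curve pairings).

P = 2**31 - 1  # small prime, chosen for illustration only

def poly_eval(coeffs, x):
    """Evaluate a polynomial (coeffs[i] is the X^i term) via Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def divide_by_linear(coeffs, a):
    """Synthetic division by (X - a); returns (quotient coeffs, remainder)."""
    q, rem = [], 0
    for c in reversed(coeffs):   # highest-degree coefficient first
        q.append(rem)
        rem = (rem * a + c) % P
    return q[1:][::-1], rem      # quotient back in low-to-high order

p = [5, 3, 0, 2]                 # p(X) = 5 + 3X + 2X^3
a = 7
v = poly_eval(p, a)              # honest evaluation claim
q, rem = divide_by_linear([(p[0] - v) % P] + p[1:], a)
print(rem == 0)                  # True: a valid opening "proof" q exists

_, bad_rem = divide_by_linear([(p[0] - (v + 1)) % P] + p[1:], a)
print(bad_rem == 0)              # False: a wrong claim cannot be opened
```

In real KZG the verifier never sees p or q directly, only curve points committing to them, and checks this same divisibility relation with a pairing.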
Another application is Proto-Danksharding: data blobs are represented as polynomials, and their commitments are computed via KZG. The mathematical properties of KZG enable data availability sampling, which is critical to the scaling of Ethereum’s data layer.
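The erasure-coding idea underneath data availability sampling can be sketched in a few lines (illustrative field and sizes, and without the KZG layer on top): encode n chunks as evaluations of a degree-(n−1) polynomial, publish 2n evaluations, and any n of them recover everything, so light nodes only need to sample a few random chunks to be confident the data is out there.

```python
# Toy erasure-coding sketch: n data chunks -> 2n polynomial evaluations,
# any n of which reconstruct the original data.

P = 65537  # small prime field, illustrative

def lagrange_eval(points, x):
    """Evaluate the unique polynomial through `points` at x (Lagrange form)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # den^-1 mod P
    return total

data = [42, 7, 99, 13]                   # 4 original chunks
pts = list(enumerate(data))              # chunk i = p(i)
extended = [(x, lagrange_eval(pts, x)) for x in range(8)]  # 8 shares

# Even if half the shares are withheld, any 4 survivors reconstruct all data:
survivors = [extended[1], extended[3], extended[6], extended[7]]
recovered = [lagrange_eval(survivors, i) for i in range(4)]
print(recovered)  # [42, 7, 99, 13]
```

Production danksharding designs use Reed-Solomon extension like this, with KZG commitments binding every extended chunk to the blob.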
PS and resources used→ Thanks for reading the post dear reader! means a lot to me and if you want to support me feel free to airdrop anything at madhavg.eth or madhavg.sol! Also would love feedback!
Big shoutout to Neel Salami, Xoheb, and Harsh for their feedback! (thanks lads!)
https://github.com/Eclipse-Laboratories-Inc/solana-improvement-documents/blob/sidecar-data-availability/proposals/0019-data-availability.md
https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding
https://notes.ethereum.org/@vbuterin/proto_danksharding_faq#
https://pseudotheos.mirror.xyz/