Layer N Explained: Seeking Blockchain Performance for Traders
Layer N is an Ethereum L2 network that uses ZK proofs and a modular structure for building financial applications with built-in liquidity and application-to-application connectivity.
Disclaimer: The content presented in this article, along with others, is based on opinions developed by the analysts at Dewhales and does not constitute sponsored content. At Dewhales, we firmly adhere to a transparency-first philosophy, making our wallets openly available to the public through our website or DeBank, and our articles serve as vehicles for self-expression, education, and contribution to the ecosystem.
Dewhales Capital does not provide investment advisory services to the public. Any information should not be taken as investment, accounting, tax or legal advice or as a recommendation to purchase, sell or hold or to pursue any investment style or strategy. The accuracy and appropriateness of the information is not guaranteed by Dewhales Capital.
1. Introduction
2. Perpetual Models and Classifications
2.1 AMM (Automated Market Maker) Model
2.2 Oracle Model
2.3 Order Book Model
3. Focusing on Order Books and Blockchain Limitations
4. An Introduction to Layer-N
4.1 Performance
4.2 Liquidity
5. Layer N Architecture
6. Layer N Value Propositions
6.1 Nord
6.2 Nucleus
7. Team
8. Backers
9. Closing Thoughts
1. Introduction
The on-chain derivatives market in DeFi has been steadily growing, ranging from RWA synthetics and future yield spread trading to the ever-growing market of crypto perpetuals and options. Naturally, futures trading and options take the lead as they resemble what many access in traditional markets today. Moreover, these two markets are highly composable for strategy building and portfolio hedging, methods classically used by institutions and portfolio managers alike.
However, the perpetual market has taken off more rapidly than the options market in terms of user adoption and liquidity allocation. Yet, that will not be the focus of this article. Instead, we will delve into the various models that perpetual protocols have adopted due to the inherent challenges of conducting high-frequency, high-leverage trading on-chain. We will also examine how these models affect the actors within the perpetual trading ecosystem and discuss potential improvements from both an experience and cost perspective.
It's also important for you, the reader, to note that this is an area with various schools of thought and conflicting experimental directions. Layer N, despite its seemingly sparse documentation, packs in a large number of technologies: ZKFP, XVM for running anything from the EVM to the SVM and MoveVM to custom app-specific VMs, and Inter-VM Communication. Let's untangle these technologies one by one.
2. Perpetual Models and Classifications
2.1 AMM (Automated Market Maker) Model
Similar to the well-known constant-product model x*y = k used for price discovery, perpetual protocols modify this by employing a virtual AMM, based on the assumption that a nominal amount is traded without the actual underlying being exchanged. However, since AMM exchanges are not typically the market leaders in price formation, they often struggle to set a virtual k (which represents market depth) that tracks the market price. This discrepancy has led to costs that outweigh the benefits of capital efficiency; these costs arise from the incentives required to keep the mark price aligned with the index price. Consequently, protocols have introduced liquidity provisioning (not purely based on a virtual AMM, but using real assets) to help incentivize the convergence of the mark and index prices.
Protocols employing this model include: Perpetual Protocol, Drift Protocol, and Rage Trade.
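To make the vAMM mechanics concrete, here is a minimal Rust sketch of a constant-product virtual AMM. It is our own illustration, not any protocol's actual implementation, and the reserve numbers are made up; it shows how a trade against purely virtual reserves moves the mark price away from the index, which is exactly the gap the alignment incentives then have to close.

```rust
/// Minimal constant-product virtual AMM: no real assets are swapped,
/// only the virtual reserves move, and mark price = quote / base.
struct VirtualAmm {
    base_reserve: f64,  // virtual base asset (e.g. ETH)
    quote_reserve: f64, // virtual quote asset (e.g. USDC)
}

impl VirtualAmm {
    fn k(&self) -> f64 {
        self.base_reserve * self.quote_reserve
    }

    fn mark_price(&self) -> f64 {
        self.quote_reserve / self.base_reserve
    }

    /// Open a long by "buying" base with `quote_in` of virtual quote.
    /// Returns the virtual base position received.
    fn open_long(&mut self, quote_in: f64) -> f64 {
        let k = self.k();
        self.quote_reserve += quote_in;
        let new_base = k / self.quote_reserve;
        let base_out = self.base_reserve - new_base;
        self.base_reserve = new_base;
        base_out
    }
}

fn main() {
    // Illustrative numbers only: 1,000 virtual ETH vs 2,000,000 virtual USDC.
    let mut vamm = VirtualAmm { base_reserve: 1_000.0, quote_reserve: 2_000_000.0 };
    println!("mark before: {:.2}", vamm.mark_price()); // 2000.00
    let position = vamm.open_long(100_000.0);
    println!("long size: {:.4} ETH", position);
    // The mark price has drifted above the index; the incentives (e.g. funding)
    // that pull it back are the cost referred to in the paragraph above.
    println!("mark after:  {:.2}", vamm.mark_price());
}
```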
2.2 Oracle Model
This model leverages the price data from market leaders who determine the market price based on high trading volume. As a result, there's no cost associated with narrowing the gap between prices on DEXs and the prevailing market price. Oracles are also used to settle positions at the time of trade execution, which is advantageous for market takers because it involves no slippage. However, this means that the maker bears the risk of this zero slippage. It's worth noting that protocols following this model need to provide ample rewards to incentivize liquidity providers (who act as makers) to compensate for this risk. Furthermore, these protocols are vulnerable to exploits stemming from oracle manipulation on external exchanges. This makes it particularly challenging to incorporate long-tail assets, which remains one of the significant advantages that centralized exchanges (CEXs) hold over on-chain perpetual protocols.
Protocols employing this model include GMX and Gains Trade.
Note: GMX uses its own off-chain oracle. The construction of its infrastructure is not transparent, which raises concerns about decentralization. Additionally, these setups are more aligned with margin trading than with perpetual trading.
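As a rough illustration of how oracle-priced settlement shifts risk onto the pooled makers, here is a hedged, generic sketch (not GMX's or Gains' actual logic; all names and numbers are ours). The taker fills at the oracle price with zero slippage, and the shared pool takes the other side, so any post-fill price move, including one caused by oracle manipulation, lands on the pool.

```rust
/// Generic sketch of an oracle-priced perp fill: the taker always fills at
/// the oracle index price (zero slippage); the shared liquidity pool takes
/// the other side and therefore carries the inventory / manipulation risk.
#[derive(Debug)]
struct Pool {
    collateral: f64,   // pooled maker capital (e.g. USDC)
    net_exposure: f64, // signed base exposure the pool has absorbed
}

#[derive(Debug)]
struct Position {
    size: f64,        // signed: + long, - short
    entry_price: f64, // the oracle price at execution
}

fn open_at_oracle(pool: &mut Pool, size: f64, oracle_price: f64) -> Position {
    // No order book walk, no slippage: execution price == oracle price.
    pool.net_exposure -= size; // the pool is the counterparty
    Position { size, entry_price: oracle_price }
}

fn close_at_oracle(pool: &mut Pool, pos: &Position, oracle_price: f64) -> f64 {
    let pnl = pos.size * (oracle_price - pos.entry_price);
    pool.net_exposure += pos.size;
    pool.collateral -= pnl; // the trader's gain is paid out of pooled capital
    pnl
}

fn main() {
    let mut pool = Pool { collateral: 1_000_000.0, net_exposure: 0.0 };
    let long = open_at_oracle(&mut pool, 10.0, 2_000.0);
    // If the oracle can be nudged on a thin external market, the pool pays:
    let pnl = close_at_oracle(&mut pool, &long, 2_050.0);
    println!("trader pnl: {pnl}, pool: {pool:?}");
}
```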
2.3 Order Book Model
This model is by far the most recognizable to existing market participants, and as a result, it has been able to retain a significant market share in the perpetual DEX market. It's also beneficial for makers, as it enables actual market making at specified desired prices. However, the inherent nature of blockchain imposes limitations on the efficacy of such a model since it's challenging to capture all maker orders accurately. This constraint has compelled protocols to maintain an off-chain order book, thereby offering existing order book technology capabilities to perpetual DEX traders. In this setup, only the execution and settlement of trades are facilitated on-chain.
Protocols employing this model include dYdX, Injective, and Clober.
Note: The implementation of off-chain order books has faced criticism. Detractors argue that it undermines the very essence of a decentralized exchange. With this approach, it's challenging to fully eliminate intermediary risk and information asymmetry. This setup becomes barely distinguishable from using a CEX, with the only exception being the absence of KYC (Know Your Customer) procedures.
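For reference, the core mechanic this model relies on is a price-time-priority limit order book. The deliberately simplified, in-memory sketch below (nothing like a production matching engine) shows why keeping every maker order on-chain is costly: each insert, fill, and level removal is a state write that gas would have to pay for.

```rust
use std::collections::BTreeMap;

/// Price in integer ticks -> FIFO queue of resting ask sizes.
/// Price priority comes from the BTreeMap ordering, time priority from the Vec.
#[derive(Default)]
struct Book {
    asks: BTreeMap<u64, Vec<u64>>, // best ask = lowest key
}

impl Book {
    fn place_ask(&mut self, price: u64, size: u64) {
        self.asks.entry(price).or_default().push(size);
    }

    /// Market buy: walk the asks from the best price upward until filled.
    fn market_buy(&mut self, mut qty: u64) -> Vec<(u64, u64)> {
        let mut fills = Vec::new();
        while qty > 0 {
            // Best ask = lowest remaining price level.
            let Some(&price) = self.asks.keys().next() else { break };
            let queue = self.asks.get_mut(&price).expect("level exists");
            while qty > 0 && !queue.is_empty() {
                let traded = qty.min(queue[0]);
                fills.push((price, traded));
                qty -= traded;
                queue[0] -= traded;
                if queue[0] == 0 {
                    queue.remove(0); // maker order fully filled
                }
            }
            if queue.is_empty() {
                self.asks.remove(&price); // price level exhausted
            }
        }
        fills
    }
}

fn main() {
    let mut book = Book::default();
    book.place_ask(2_000, 5);
    book.place_ask(2_001, 10);
    // Buys 5 @ 2000, then 3 @ 2001; on-chain, each step is a state write.
    println!("{:?}", book.market_buy(8));
}
```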
3. Focusing on Order Books and Blockchain Limitations
Despite criticism regarding the implementation of centralized order books, interactions with these protocols remain non-custodial and permissionless. This is generally seen as a more significant benefit, and the familiarity of order books positions this model as the ultimate approach for on-chain perpetual protocols (i.e., a decentralized or on-chain order book). This perspective enables the community that aligns with this school of thought to concentrate on addressing the limitations inherent to the blockchain itself.
There are multiple pieces of work being done in this space:
Clober: An on-chain order matching system using LOBSTER (Limit Order Book with Segment Tree for Efficient Order Matching)
ZKEX: A multi-chain decentralized order matching system encompassing zkLink, Starkware and zkSync
Layer N’s NordVM: A DEX-specific rollup scaling layer for Ethereum, built not on the EVM but on its own Rust-based virtual machine designed specifically for order books and order matching
In keeping with the theme of exploring the limitations of base-layer blockchains such as Ethereum, the remainder of this article will focus on Layer N and its quest to build a scalable blockchain layer suited to order book systems and many other use cases.
4. An Introduction to Layer N
Layer N is building a horizontally scalable blockchain layer suited to highly performant applications that require monolithic-like composability.
Each Layer N rollup leverages a “one-shot” ZK-fault-proof system coupled with optimistic state settlement to Ethereum, achieving low latencies. Data availability is posted to dedicated high-bandwidth decentralized networks, thus bypassing the bandwidth restrictions that alternative rollups face.
Layer N uses its own type of proof: zero-knowledge fraud proofs (ZKFP). Layer N originally published the idea of ZKFP in May 2023. It allows the network to provide a proof of validity only when fraud is alleged, rather than for every single transaction, which means applications can execute their logic without incurring unnecessary verification costs. This solution differs from both the Optimistic and ZK approaches but takes the best of each. The drawback of optimistic rollups is that dispute resolution takes a long time: the interactive challenge game between prover and verifier is extremely time-consuming and compute-costly on Ethereum. The drawback of ZK rollups is that they impose heavy hardware requirements and expensive proof-generation overheads, which become prohibitive at exchange-level throughput. ZKFPs are a hybrid that leverages the best of both worlds: cheap and fast optimistic transaction execution combined with the succinctness and security of zero-knowledge proofs for resolving fraud disputes.
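To make the ZKFP flow more tangible, here is a heavily simplified, hypothetical sketch of the two paths: state roots are accepted optimistically by default, and a single succinct validity proof is produced and checked only if someone disputes a posted root. All types and names are ours, and the proof is a placeholder standing in for a real zkVM receipt.

```rust
/// Hypothetical types illustrating the ZK-fraud-proof (ZKFP) idea:
/// optimistic execution by default, one succinct proof only on dispute.

#[derive(Clone, Copy, PartialEq, Debug)]
struct StateRoot([u8; 32]);

struct Batch {
    transactions: Vec<Vec<u8>>, // raw txs as posted to the DA layer
    claimed_root: StateRoot,    // the root the sequencer optimistically posts
}

/// Stand-in for a zkVM proof that "applying `transactions` to `prev` yields this root".
struct ZkProof {
    proven_root: StateRoot,
}

fn execute(prev: StateRoot, _txs: &[Vec<u8>]) -> StateRoot {
    // Placeholder for deterministically re-executing the batch.
    prev
}

/// Happy path: everyone accepts the claimed root, no proof is ever generated.
fn settle_optimistically(batch: &Batch) -> StateRoot {
    batch.claimed_root
}

/// Dispute path: a verifier re-executes the batch inside a zkVM and produces
/// one succinct proof; the L1 contract only has to check that proof.
fn resolve_dispute(prev: StateRoot, batch: &Batch) -> Result<StateRoot, &'static str> {
    let correct_root = execute(prev, &batch.transactions);
    let proof = ZkProof { proven_root: correct_root }; // produced off-chain
    if proof.proven_root == batch.claimed_root {
        Ok(batch.claimed_root) // challenge fails, claimed root stands
    } else {
        Err("fraud proven: claimed root rejected, roll back to proven state")
    }
}

fn main() {
    let prev = StateRoot([0u8; 32]);
    let batch = Batch { transactions: vec![], claimed_root: StateRoot([0u8; 32]) };
    // No challenge: zero proving cost.
    let _root = settle_optimistically(&batch);
    // Challenge: exactly one proof is generated and checked.
    println!("{:?}", resolve_dispute(prev, &batch));
}
```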
Another important component of Layer N is N-EVM, a publicly available and permissionless general-purpose virtual machine. N-EVM provides a development environment that developers are already familiar with and love: the EVM. It is the primary publicly available Layer N rollup instance on which any developer can deploy arbitrary smart contracts, and it is fully composable with other Layer N virtual machines and XVMs such as NordVM (more on this in the "Layer N Architecture" and "Layer N Value Propositions" sections).
XVMs on Layer N use WASM as the base ISA, which allows for extensibility across programming languages and tooling and opens up a wider range of possibilities for the shape and compatibility of future Layer N virtual machines (see the Backers section for more on the RISC Zero integration).
To solve the bandwidth problem, Layer N uses EigenDA, a new solution that provides megabytes of block space per second. What differentiates EigenDA from other off-chain DA solutions is that the data remains protected by Ethereum validators through restaking, meaning that the usual "off-chain DA risk" is mitigated. Unlike existing blockchains, which have a fixed capacity regardless of the number of validators, EigenDA expands its capacity along with the number of validators. This effectively overcomes the traditional capacity limitations inherent in blockchains, allowing Nord to scale horizontally and support significantly lower transaction fees.
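The scaling shape described here can be illustrated with a toy calculation. Every number below is an assumption of ours, not an EigenDA parameter; the point is only that aggregate capacity grows with the operator set, because erasure coding lets each operator hold a fraction of the data rather than all of it.

```rust
/// Illustrative only: the constants are assumptions, not EigenDA's actual
/// parameters. Unlike a chain where every node stores every byte, an
/// erasure-coded DA layer lets each operator hold only a chunk, so the
/// aggregate throughput grows as more operators join.
fn da_throughput_mb_s(operators: u64, per_operator_mb_s: f64, coding_overhead: f64) -> f64 {
    (operators as f64) * per_operator_mb_s / coding_overhead
}

fn main() {
    // Assumed: each operator dedicates 0.3 MB/s, with ~8x erasure-coding overhead.
    for operators in [100u64, 500, 1_000] {
        println!(
            "{operators:>5} operators -> ~{:.0} MB/s of data availability",
            da_throughput_mb_s(operators, 0.3, 8.0)
        );
    }
}
```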
Thus, Layer N has targeted 3 core problems to solve, namely performance, liquidity, and connectivity.
4.1 Performance
Problem
Rollup performance is still approximately 1,000 times worse than that of centralized solutions. All optimistic rollups that post data availability to Ethereum are constrained by Ethereum's 0.8 mb/s bandwidth limit for calldata. Even factoring in Ethereum's proto-danksharding upgrade (EIP-4844), Ethereum's gas limit will only allow for roughly 760 TPS across all rollups that post data availability to Ethereum (assuming the state of compression remains consistent). Zero-knowledge rollups face long and extremely expensive proving times, preventing genuinely cheap, high-throughput scalability for now. In comparison, NASDAQ and VISA process approximately 20,000 TPS and 24,000 TPS, respectively.
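As a back-of-the-envelope check on a figure in that ballpark, the sketch below derives roughly 760 TPS from assumptions of our own: a post-EIP-4844 target of 3 blobs of 128 KiB per 12-second block and about 43 bytes per compressed rollup transaction. The article's exact inputs may differ.

```rust
/// Rough, assumption-laden estimate of aggregate rollup TPS when all rollups
/// post their data to Ethereum blobs. Every constant here is our assumption.
fn main() {
    let blobs_per_block: f64 = 3.0;             // assumed EIP-4844 target blob count
    let blob_size_bytes: f64 = 128.0 * 1024.0;  // 128 KiB per blob
    let block_time_s: f64 = 12.0;               // Ethereum slot time
    let bytes_per_tx: f64 = 43.0;               // assumed size of a compressed rollup tx

    let bandwidth = blobs_per_block * blob_size_bytes / block_time_s; // bytes per second
    let tps = bandwidth / bytes_per_tx;

    println!("DA bandwidth: ~{:.0} bytes/s", bandwidth); // ~32,768 bytes/s
    println!("aggregate rollup TPS: ~{:.0}", tps);       // ~762 TPS, i.e. roughly 760
}
```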
Layer N Solution
Layer N features a modular rollup design. It leverages optimistic confirmations to achieve sub-100ms latencies and employs high-bandwidth data availability networks, such as EigenDA and Celestia, to support over 100,000 TPS.
4.2 Liquidity
Problem
Siloed applications on various chains fragment liquidity and reduce capital efficiency. Deploying horizontal rollups for specialized applications proves infeasible because it results in even more liquidity fragmentation and UX isolation. As a consequence, developers find themselves limited to building within monolithic development environments that aren't tailored to their specific use cases.
Layer N Solution
Layer N will offer a native solution for inter-rollup communication, allowing applications to share liquidity directly and move assets between each other instantly without the need for third-party bridges. Developers can then select their preferred rollup VM for deployment (or even create a specialized VM) without concerns about fragmenting liquidity or compromising user experience.
5. Layer N Architecture
Before moving on to an overview of the Layer N architecture, it is worth paying some attention to the basic approaches to building blockchain applications:
The monolithic approach of building at the L1 or L2 layer has the advantage of synchronously linking applications that share a common state. The main disadvantage is performance degradation due to over-subscription of underlying blockchain resources by a potentially unlimited number of applications.
In the case of app-specific rollups, the advantage is a purpose-built and dedicated computing environment that facilitates the creation of scalable applications. The main disadvantage is the loss of synchronous composability and the resulting fragmentation of liquidity.
Layer N's StateNet is a solution that provides the performance of modular, app-specific rollups while retaining the benefits of a synchronous, monolithic stack. As mentioned above, the base layer uses Ethereum for security and EigenDA as its data availability layer.
XVMs and how they connect
Each StateNet node runs a separate virtual machine (VM). Some VMs are generalised virtual machines, such as the EVM, while others are application-specific virtual machines called XVMs. Virtual machines in StateNet send their input data to a public data availability layer, such as EigenDA, and post state-update blocks to Ethereum to finalise the network's state.
Virtual machines are divided into three types:
System Virtual Machines (SysVMs) are the underlying virtual machines that take on functional roles at the system level. Their role is to support the network in providing system-level functions and capabilities. Examples include a Router, which integrates messaging and management functions for a group of logically connected virtual machines, and a Gate Virtual Machine (or simply Gate), which unifies liquidity management across the network.
Generalised Virtual Machines (GVMs) are virtual machines that provide a generalised smart contract execution environment. GVMs allow developers to deploy smart contracts in their favourite language that can be composed with other virtual machines, whether generalised or not. An example of a GVM is N-EVM, an EVM implementation from Null Studios that provides public and permissionless deployment of smart contracts. Other potential GVMs include SolanaVM, MoveVM, or any other generalised runtime environment.
Application-specific virtual machines (XVMs) are used for specific applications and run a single program. Unlike GVMs, XVMs run fully customisable, pre-deployed application logic, with each XVM running a single application. The application logic does not have to conform to the constraints of the EVM or any other generalised virtual machine, allowing for specialised implementations that are not constrained by other programs and environments. An example of an XVM is NordVM.
XVMs consist of five modules (a hypothetical interface sketch follows this list):
The input module deterministically schedules input messages for the virtual machine's finite state machines. Input messages are stored in a queue, and network acknowledgments are used to reply to the message sender to confirm proper execution and receipt of messages.
The execution engine module defines the execution logic. This is the part of the virtual machine that takes input data, executes program logic on the input data, and outputs the results to the output module for messages to be sent to another virtual machine. For a GVM, the mechanism is a generalised finite state machine such as EVM, SolanaVM, etc.
The output module contains the data that needs to be forwarded to other virtual machines.
The rollman module is responsible for publishing network-level and transaction-level data to the data availability layer, as well as ensuring that state blocks are correctly committed to Ethereum.
With the cron module, developers can schedule the execution of events and callbacks without relying on third-party offchain services. Tasks scheduled in the cron module are initially redirected to the input module and passed to the appropriate virtual machines.
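The interface-only sketch below is our own illustration of how these five responsibilities could be separated in code; none of the names or signatures come from Layer N's SDK, and it is meant to compile as a library module rather than run as a program.

```rust
/// Hypothetical interfaces for the five XVM modules described above.
/// None of these names come from Layer N's codebase; they only illustrate
/// how the responsibilities separate.

/// A message exchanged between VMs over StateNet.
pub struct Message {
    pub from_vm: u32,
    pub to_vm: u32,
    pub payload: Vec<u8>,
}

/// Input module: deterministically orders inbound messages and acknowledges them.
pub trait InputModule {
    fn enqueue(&mut self, msg: Message);
    fn next(&mut self) -> Option<Message>;
    fn acknowledge(&mut self, msg_id: u64);
}

/// Execution engine: applies application (or generalised VM) logic to inputs.
pub trait ExecutionEngine {
    fn execute(&mut self, input: &Message) -> Vec<Message>; // outputs for other VMs
}

/// Output module: buffers results destined for other virtual machines.
pub trait OutputModule {
    fn push(&mut self, msg: Message);
    fn drain(&mut self) -> Vec<Message>;
}

/// Rollman: posts data to the DA layer and commits state roots to Ethereum.
pub trait Rollman {
    fn post_to_da(&mut self, batch: &[Message]);
    fn commit_state_root(&mut self, root: [u8; 32]);
}

/// Cron: schedules callbacks that later re-enter through the input module.
pub trait Cron {
    fn schedule(&mut self, run_at_unix_s: u64, callback: Message);
}
```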
Further, as we have already learnt, connectivity between all of its parts and products lies at the heart of Layer N. This is handled by Inter-VM Communication, which lays out the channels and mechanisms that enable message passing across the entire StateNet network. The following key components can be identified in this technology (a minimal sketch of the message-queue mechanism follows the list):
Message Queues - Virtual machines communicate by maintaining point-to-point message queues. The queues guarantee exactly-once delivery semantics for messages between rollups. Each virtual machine manages separate request and response queues. The state and operation of the queues are validated and authenticated using the virtual machine's own state-validation scheme, allowing validators to prove state inconsistencies without having to run the entire network.
Routers are intermediate system virtual machines that route transactions between logical clusters of virtual machines, which are called failure domains. Routers provide load balancing and atomicity by acting as a sequencer for a cluster of virtual machines.
Gate - A Gate VM is a specialised system virtual machine that handles all inputs and outputs of the Layer N StateNet. The Gate functions as the entry and exit point for Layer N. Once liquidity enters Layer N through the Gate, it can move freely between all other virtual machines. Asset transfers between virtual machines do not need to subsequently pass through the Gate, as general message passing (GMP) functionality is provided by Layer N's message-queue infrastructure. This design avoids incurring duplicate costs when moving assets between virtual machines.
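Below is a minimal, hypothetical sketch of the point-to-point queue idea: sequence numbers make delivery idempotent (an exactly-once effect at the receiver) and acknowledgements let the sender prune what the peer has processed. It is our illustration, not Layer N's IVC implementation.

```rust
use std::collections::VecDeque;

/// One direction of a point-to-point inter-VM channel. Sequence numbers make
/// delivery idempotent (exactly-once from the receiver's point of view) and
/// acks let the sender drop messages the peer has durably processed.
/// Entirely illustrative; not Layer N's actual IVC code.
pub struct Channel {
    next_seq: u64,                    // next sequence number to assign
    outbox: VecDeque<(u64, Vec<u8>)>, // unacknowledged (seq, payload) pairs
    delivered_up_to: u64,             // receiver side: highest seq applied
}

impl Channel {
    pub fn new() -> Self {
        Channel { next_seq: 0, outbox: VecDeque::new(), delivered_up_to: 0 }
    }

    /// Sender: enqueue a message; it stays buffered until acknowledged.
    pub fn send(&mut self, payload: Vec<u8>) -> u64 {
        self.next_seq += 1;
        self.outbox.push_back((self.next_seq, payload));
        self.next_seq
    }

    /// Receiver: apply a message only if it is the next expected sequence
    /// number, so redelivered duplicates are ignored (exactly-once effect).
    pub fn deliver(&mut self, seq: u64, _payload: &[u8]) -> bool {
        if seq == self.delivered_up_to + 1 {
            self.delivered_up_to = seq;
            true // the state machine applies the message here
        } else {
            false // duplicate or out-of-order: skip
        }
    }

    /// Sender: prune everything the receiver has acknowledged.
    pub fn acknowledge(&mut self, up_to_seq: u64) {
        while matches!(self.outbox.front(), Some((seq, _)) if *seq <= up_to_seq) {
            self.outbox.pop_front();
        }
    }
}

fn main() {
    let mut req = Channel::new();
    let seq = req.send(b"transfer 10 USDC from Nord to N-EVM".to_vec());
    assert!(req.deliver(seq, b"..."));  // first delivery is applied
    assert!(!req.deliver(seq, b"...")); // a redelivered duplicate is not
    req.acknowledge(seq);
}
```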
6. Layer N Value Propositions
6.1 Nord VM
Nord VM is an optimized DEX rollup that boasts a custom, highly efficient Rust execution environment capable of processing over 100K requests per second with sub-10ms latency. Layer N achieves this with its highly specialized VMs and modular rollup architecture. Furthermore, Rust is not a proprietary programming language and is well recognized for enabling high-performance codebases. Given these capabilities, Nord can be seen as a competitor to existing central limit order book (CLOB) DEXs, yet it is fully on-chain, drawing it nearer to the speed and matching capabilities of centralized solutions.
The idea for Nord came from the Layer N team's study of the existing landscape of decentralised exchanges, which can be categorised into three main models: AMMs, staked pools and CLOBs. Each has its own features and challenges that compromise either performance and usability or decentralisation and composability:
Automated market maker (AMM) protocols such as Uniswap use smart contracts to enable decentralised trading between users. The way they work means that the lack of active market making results in losses for liquidity providers, slippage costs for traders, and a much more expensive trading experience compared to centralised exchanges.
Staked-pool protocols, such as GMX and Gains Network, are pooled-liquidity designs that combine liquidity provision with trading incentives. They suffer from high latency and high operating costs.
Finally, central limit order book (CLOB) protocols such as dYdX handle order matching entirely off-chain, meaning that they lack on-chain smart contract composability and inherit a high degree of centralisation risk.
Another feature is that Nord uses Layer N's Inter-Rollup Communication (IRC) protocol. IRC allows Nord to seamlessly relay messages to other Layer N XVMs and instantly send liquidity back and forth.
Thus, the closest alternative to Nord is L2 app-chains such as StarkEx. On comparison, it becomes apparent that StarkEx is a completely isolated rollup infrastructure: applications built on its technology communicate with other chains the way separate blockchains do, whereas building on Nord gives access to a whole set of ecosystem applications.
8. Backers
It should be noted that Layer N has only a small number of integrations and partnerships, but those it has are of a purely practical nature. We mentioned some of these projects above; now let's take a closer look at what these integrations are:
RISC Zero. Unlike existing optimistic rollups, Layer N does not rely on re-executing transactions on-chain to prove fraud. Instead, Layer N uses a novel approach built on zero-knowledge proofs and the RISC Zero zkVM. With RISC Zero, any verifier can create a succinct proof that it took the correct DA transactions corresponding to a particular block and applied them to the initial state. RISC Zero does this by porting the Layer N execution environment to its zkVM and trustlessly producing a receipt of correct execution. In case of a dispute, the verifier sends this proof to the Layer N smart contract on Ethereum, which then checks whether the proof is valid. If the proof is valid and the state claimed by the proof does not match the state published on L1, then the published state is fraudulent and the rollup must be rolled back.
Also, with an eye on the future, Layer N is looking to utilise the latest technologies for its rollup ecosystem. For example, with Bonsai, RISC Zero's general-purpose zero-knowledge proving network, Layer N will be able to fully transition to a ZK rollup, which means cryptographic security guarantees and instant withdrawals while maintaining high performance. Because Bonsai allows any chain, protocol or application to connect to its proving network, it can act as a secure off-chain execution and computation layer for a wide range of use cases.
Modulus Labs. Working with Modulus Labs, which has been at the forefront of creating ZKML methods for validating AI computation on-chain, Layer N will provide the ability for rollups in its network to access AI outputs on demand. This means developers can deploy their own AI models on Layer N, customise inputs and outputs, and easily integrate AI into their network applications.
To make this work, the high-level architecture is as follows: any rollup in the Layer N ecosystem can deploy an artificial intelligence model that allows it to process inputs and produce outputs. Rollups can theoretically reuse the same model as needed. The deployment and the AI functional module are treated as separate modular parts; this separation allows for individual verification without having to verify the whole system together.
9. Closing Thoughts
While many researchers and innovators in the community challenge the viability of order books for blockchain protocols, believing they will never match the efficiency of centralized order books due to their distributed nature, off-chain order books are also not the universally accepted solution for decentralized trading protocols. They lack properties like censorship resistance and are not easily composable. This stalemate has been a significant hurdle for DeFi derivatives adoption, mainly due to capital inefficiencies and suboptimal pricing.
Nevertheless, this hasn't deterred protocols like Layer N from introducing innovative ways to scale blockchain through modularized architectures. They aim to create blockchains that can directly compete with traditional systems, which have underpinned our financial infrastructure for decades. The blockchain technology stack continues to evolve at a swift pace, with the industry well-aware of the scaling and service benchmarks expected by users.
The future is promising. Innovators are consistently pushing the boundaries of blockchain technology, aiming for decentralized solutions that meet clear, collective benchmarks required for their widespread acceptance as the new gold standard.
Layer N Links
Website | Twitter | Discord | Telegram | Documentation