Unveiling Polkadot's JAM from a technical perspective

In this article, Parity core developer Kian Paimani interprets the latest JAM protocol from a technical perspective, to help readers better understand how JAM brings new scalability to the Polkadot ecosystem.
Author: Kian Paimani, Parity Core Developer
Compiled by Polkadot Labs
"Polkadot Knowledge Graph" is our entry-level article on Polkadot from scratch. We try to start with the most basic parts of Polkadot, providing a comprehensive understanding of Polkadot for everyone. Of course, this is a huge project full of challenges. However, we hope that through such efforts, everyone can have a correct understanding of Polkadot, and those who do not know Polkadot can quickly and conveniently grasp the relevant knowledge of Polkadot. *Today is the 148th issue of this column, and this article is written from the technical perspective of Kian Paimani, a core developer of Parity, to explain the latest JAM protocol proposed by Polkadot, in order to help people better understand how JAM brings new scalability to the Polkadot ecosystem. This article is written in the first person by the author."
The following is a detailed explanation of Polkadot1 and Polkadot2, and how they evolve into JAM. This article is aimed at technical readers, especially those who are not very familiar with Polkadot but have some understanding of blockchain systems, and perhaps familiarity with the technologies of other ecosystems.
I think this article serves as a good prelude to reading the JAM white paper.
Background Knowledge
This article assumes that readers are familiar with the following concepts:
Describing a blockchain as a state transition function (a minimal sketch follows after this list).
Understanding what "state" is. (For details, please refer to: _sdk_docs/reference_docs/blockchain_state_machines/index.html)
Economic security and Proof of Stake.
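As a warm-up for the first item, here is a minimal sketch of "a blockchain is a state transition function": given the current state and a block, produce the next state. The types below (a toy balance map and transfer-only blocks) are purely illustrative, not Polkadot SDK types.

```rust
use std::collections::HashMap;

// Illustrative only: a toy chain whose state is a map of balances and whose
// blocks contain transfers. Real Polkadot state and blocks are far richer.
type AccountId = u64;
type State = HashMap<AccountId, u128>;

struct Transfer { from: AccountId, to: AccountId, amount: u128 }
struct Block { transfers: Vec<Transfer> }

/// The state transition function: next_state = stf(state, block).
fn state_transition(mut state: State, block: &Block) -> State {
    for t in &block.transfers {
        let from_balance = *state.get(&t.from).unwrap_or(&0);
        if from_balance >= t.amount {
            state.insert(t.from, from_balance - t.amount);
            *state.entry(t.to).or_insert(0) += t.amount;
        }
    }
    state
}
```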
Preface: Polkadot1
First, let's review what I believe to be the most innovative features of Polkadot1.
Social Level:
Polkadot is a large decentralized autonomous organization (DAO). The network's governance is fully on-chain and self-executing, including runtime upgrades that happen without forks.
The United States Securities and Exchange Commission (SEC) treats DOT as software rather than a security.
Most of the network's development work is done through the Polkadot Fellowship rather than by a funded company such as Parity.
Technical Level:
Polkadot has achieved shared security and sharded execution.
The blockchain's code is stored as bytecode in its own state, using the WASM-based meta-protocol (see: _sdk_docs/reference_docs/wasm_meta_protocol/index.html). This allows most upgrades to happen without forks, and also enables heterogeneous sharding.
For more information about 'Heterogeneous Sharding', please refer to the relevant sections.
Sharding Execution: Key Points
Here we are discussing Layer1 networks that host other Layer2 "blockchain" networks, such as Polkadot and Ethereum. In this context, the terms Layer2 and parachain can be used interchangeably.
The core problem of blockchain scalability can be stated as follows: there is a set of validators whose proof-of-stake crypto-economics makes the execution of some code trustworthy. By default, these validators must re-execute one another's work. So as long as we force all validators to always re-execute everything, the system as a whole does not scale.
Please note that simply increasing the number of validators in this model will not actually increase the system's throughput as long as the above absolute re-execution principle remains unchanged.
The above describes a monolithic blockchain (as opposed to a sharded one): all of the network's validators process the inputs (i.e. blocks) one by one.
In such a system, if the Layer1 wants to host more Layer2s, then all validators must re-execute all of the Layer2s' work as well. Obviously, this does not scale. Optimistic rollups are one way around this problem: re-execution (fraud proofs) happens only when someone claims fraud. SNARK-based rollups sidestep the problem by exploiting the fact that verifying a SNARK proof is far cheaper than generating one, so all validators can afford to verify SNARK proofs. For more on this, see "Appendix: Scalability Space Map".
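To make the re-execution point concrete, here is a back-of-the-envelope sketch. The numbers and function names are invented; the only point is that when everyone repeats the same work, throughput does not grow with the validator count, whereas dividing the work across cores does scale.

```rust
// Back-of-the-envelope illustration (all numbers invented): if every validator
// re-executes everything, adding validators does not add throughput; splitting
// the work across cores does.
fn monolithic_throughput(per_validator_capacity: u64, _num_validators: u64) -> u64 {
    // Everyone repeats the same work, so throughput is capped by one machine.
    per_validator_capacity
}

fn sharded_throughput(per_core_capacity: u64, num_cores: u64) -> u64 {
    // Each core works on different Layer2 blocks, so throughput scales with cores.
    per_core_capacity * num_cores
}

fn main() {
    // 1_000 validators, each able to process 100 units per slot:
    assert_eq!(monolithic_throughput(100, 1_000), 100); // still 100
    assert_eq!(sharded_throughput(100, 50), 5_000);     // 50 cores => 5_000
}
```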
A naive approach to sharding is to split the validator set into smaller subsets and have each smaller subset re-execute Layer2 blocks. The problem with this approach? We would be sharding both the execution and the economic security of the network. Such a Layer2 is less secure than the Layer1, and its security keeps dropping as we split the validator set into more shards.
Unlike optimistic rollups, which cannot afford to always re-execute, Polkadot was designed with sharding in mind. It can therefore have only a subset of validators re-execute a Layer2 block, while giving all network participants sufficient crypto-economic evidence that the Layer2 block is as secure as if the entire validator set had re-executed it. This is achieved through a novel (and recently formally published) mechanism called ELVES.
In short, ELVES can be described as a "cynical rollup" mechanism: over several rounds, validators actively ask other validators whether a given Layer2 block is valid, which confirms its validity with extremely high probability. In the event of any dispute, the entire validator set is quickly called in. Rob Habermeier, co-founder of Polkadot, explains this in detail in an article.
ELVES gives Polkadot two properties previously thought to be mutually exclusive: sharded execution and shared security. This is Polkadot's main technological achievement in scalability.
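The following is only a caricature of the "ask more validators, escalate on dispute" idea described above. The real ELVES protocol (assignment of auditors, tranches, time-outs, slashing, interaction with finality) is far more involved; every name, threshold, and round structure below is invented for illustration.

```rust
// Caricature of escalating audits: sample some auditors; if any of them says
// the Layer2 block is invalid, escalate to the whole validator set.
#[derive(Clone, Copy, PartialEq)]
enum Verdict { Valid, Invalid }

fn audit_layer2_block(
    check: impl Fn(usize) -> Verdict, // validator `i` re-executes the block and votes
    total_validators: usize,
    mut auditors: usize,
    rounds: usize,
) -> Verdict {
    for _ in 0..rounds {
        let sampled = (0..auditors.min(total_validators)).map(&check);
        if sampled.into_iter().any(|v| v == Verdict::Invalid) {
            // Any dispute escalates to the whole validator set, whose vote decides
            // the outcome (and, in the real protocol, who gets slashed).
            let invalid = (0..total_validators)
                .map(&check)
                .filter(|v| *v == Verdict::Invalid)
                .count();
            return if invalid * 3 > total_validators * 2 { Verdict::Invalid } else { Verdict::Valid };
        }
        auditors *= 2; // no dispute yet: widen the audit in the next round
    }
    Verdict::Valid
}
```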
Now, let's continue the discussion of the "CORE" analogy.
A blockchain with sharded execution is very much like a CPU: just as a CPU can have multiple cores executing instructions in parallel, Polkadot can process Layer2 blocks in parallel. This is why a Layer2 on Polkadot is called a parachain, and the environment in which a smaller subset of validators re-executes a single Layer2 block is called a "core". Each core can be abstracted as "a group of cooperating validators".
You can picture a monolithic blockchain as ingesting a single block at any given time, whereas Polkadot, in each time slot, ingests one relay chain block plus one parachain block per core.
Heterogeneity
So far we have only discussed the scalability and sharded execution that Polkadot provides. It is worth noting that each shard of Polkadot is, in fact, a completely different application. This is achieved through the meta-protocol of code stored as bytecode: a protocol in which the blockchain's own definition is stored as bytecode in the blockchain's state. Polkadot 1.0 uses WASM as the bytecode of choice, while JAM uses PVM/RISC-V.
In short, this is why Polkadot is called a heterogeneous sharded blockchain: each Layer2 is a completely different application.
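The meta-protocol idea can be sketched as follows: the chain's state transition logic lives as bytecode under a well-known key in its own state, so an upgrade is simply a write to that key. In the Polkadot SDK the runtime really is stored under the `:code` storage key; everything else below (types, the placeholder executor) is illustrative.

```rust
use std::collections::HashMap;

// Simplified view of the meta-protocol: the chain's own state transition logic
// is stored as bytecode inside its state, under a well-known key. Upgrading the
// chain is then just writing new bytecode to that key, with no fork required.
// `execute_bytecode` stands in for a real WASM/PVM executor.
const CODE_KEY: &[u8] = b":code";

type State = HashMap<Vec<u8>, Vec<u8>>;

fn execute_bytecode(_code: &[u8], state: State, _block: &[u8]) -> State {
    // Placeholder: a real node would instantiate a WASM/PVM instance here and
    // let it apply the block to the state.
    state
}

fn apply_block(state: State, block: &[u8]) -> State {
    // 1. Fetch the chain's current code from its own state.
    let code = state.get(CODE_KEY).cloned().unwrap_or_default();
    // 2. Run the block through that code. If the block performs a runtime
    //    upgrade, the returned state holds new bytecode under CODE_KEY, and the
    //    next block is already executed by the new logic.
    execute_bytecode(&code, state, block)
}
```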
Polkadot2
An important part of Polkadot2 is making core usage more flexible. In the original Polkadot model, a core was leased for a period of 6 months to 2 years, which suits resource-rich enterprises but not small teams. The feature that allows Polkadot cores to be used more flexibly is called "agile coretime". Under this model, coretime can be acquired for as little as a single block or as long as a month, with a price-cap guarantee for those who wish to rent long term.
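Expressed as data, the contrast between the two models might look like the following. The enum, its variants, and the durations are invented for the sketch; they are not actual protocol definitions.

```rust
// Invented types, just to contrast the old and new models of acquiring a core.
enum CoreAssignment {
    /// Polkadot 1: a parachain slot leased for a long, fixed period (roughly 6-24 months).
    LegacyLease { months: u32 },
    /// Agile coretime: bulk coretime bought month by month, with a price-cap
    /// guarantee for those who keep renewing.
    BulkCoretime { months: u32 },
    /// Agile coretime: on-demand coretime, down to a single block.
    OnDemand { blocks: u32 },
}
```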
Other features of Polkadot2 will emerge gradually over the course of this discussion, so there is no need to elaborate on them here.
Core Internal and on-chain Operations
To understand JAM, you first need to understand what happens when a Layer2 block enters a Polkadot core.
The following content has been greatly simplified.
Recall that a core is, in essence, a group of validators. So when we say "data is sent to the core", we really mean that the data is handed to that group of validators.
1. A Layer2 block, together with a portion of its state, is sent to the core. This data contains everything needed to execute the Layer2 block. A subset of the core's validators re-execute the Layer2 block and carry out the related consensus work.
2. The core's validators make the data needed for re-execution available to validators outside the core. Under the ELVES rules, those validators may decide to re-execute the Layer2 block themselves, and they need this data to do so.

Note that, so far, everything has happened outside Polkadot's main blocks and state transition function. It all takes place within the core and the data availability layer.

3. Finally, a small portion of the Layer2's latest state becomes visible on the Polkadot relay chain. Unlike all of the previous operations, this one is far cheaper than actually re-executing the Layer2 block; it affects Polkadot's main state, is visible in Polkadot blocks, and is executed by all Polkadot validators. (A rough code sketch of this flow follows below.)
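The same flow, compressed into pseudo-Rust. Every type and function name here is invented to mirror the narrative above; it is not node code.

```rust
// The life of a Layer2 block through a Polkadot core, following the numbered
// steps above.
struct Layer2Block;        // the block produced by a parachain / Layer2
struct WitnessData;        // the portion of Layer2 state needed to execute it
struct AvailabilityPiece;  // erasure-coded pieces held by validators
struct StateCommitment;    // a small commitment to the Layer2's latest state

// Step 1: in-core. A subset of validators (the core) re-executes the block.
fn execute_in_core(_block: &Layer2Block, _witness: &WitnessData) -> StateCommitment {
    StateCommitment
}

// Step 2: data availability. The core makes the re-execution data available to
// validators outside the core, so that ELVES audits remain possible later.
fn distribute_for_availability(_block: &Layer2Block, _witness: &WitnessData) -> Vec<AvailabilityPiece> {
    Vec::new()
}

// Step 3: on-chain. Only the small commitment touches the relay chain state,
// where it is processed, cheaply, by all Polkadot validators.
fn note_on_relay_chain(_commitment: StateCommitment) {}
```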
Based on the above, we can identify several kinds of operations that Polkadot performs:
First, from step 1 we can conclude that Polkadot has a type of execution different from the traditional blockchain state transition function. Normally, when all validators of a network perform some work, the main blockchain state is updated; we call that an on-chain operation, and it is what happens in step 3. What happens inside the core (step 1) is different: we call this new type of blockchain computation in-core execution.
Next, from step 2 we can see that Polkadot already provides a native data availability (DA) layer, which Layer2s automatically use to keep their execution evidence available for a period of time. However, the blobs that can be posted to this DA layer are fixed: they are always the evidence needed to re-execute a Layer2 block. Moreover, parachain code never reads data from the DA layer.
Understanding the above is the basis of understanding JAM. Summarized as follows:
In-core execution: operations that take place inside a core. Abundant and scalable, and, thanks to crypto-economics and ELVES, secured to the same degree as on-chain execution.
On-chain execution: operations performed by all validators. Economically secure by default, but more expensive and more constrained, since everyone executes everything.
Data availability (DA): the ability of Polkadot validators to commit to keeping certain data available for a period of time and to serve it to other validators.
JAM
With the understanding of the previous section, we can smoothly transition to the introduction of JAM.
JAM is a new protocol, inspired by and fully compatible with Polkadot, that aims to replace the Polkadot relay chain and make the use of cores fully decentralized and unrestricted.
JAM builds on Polkadot2: it aims to make Polkadot's cores more accessible, in a way that is even more flexible and less restricted than agile coretime.
Polkadot2 makes cores more flexible for deploying Layer2s.
JAM aims to let any application be deployed on Polkadot's cores, even applications that look nothing like a blockchain or a Layer2.
This is achieved mainly by exposing the three primitives discussed earlier directly to developers: in-core execution, on-chain execution, and the DA layer.
In other words, in JAM, developers can get access to:
Fully programmable in-core and on-chain work.
The ability to read and write arbitrary data to Polkadot's DA layer.
This is a basic description of what JAM aims for. Needless to say, much has been simplified here, and the protocol is likely to keep evolving.
With this basic understanding, we can now delve further into some details of JAM in the following chapters.
1 Service and Work Items
In the context of JAM, what used to be called a Layer2/parachain is now called a "Service", and what used to be called a block/transaction is now called a "Work Item" or "Work Package". Specifically, a work item belongs to a service, and a work package is a collection of work items. These terms are deliberately generic, so that they can cover use cases far beyond blockchains/Layer2s.
A service is specified by three entry points, two of which are fn refine() and fn accumulate(). The former describes what the service executes in-core, the latter what it executes on-chain.
Finally, these two entry points are also what gives the protocol its name: JAM, the Join-Accumulate Machine. Join corresponds to fn refine(): all Polkadot cores refine large amounts of work for different services in parallel, and once the data has been distilled, it moves on to the next stage. Accumulate refers to accumulating all of those results into the main JAM state, which is the on-chain execution part.
Work items specify precisely what code they execute in-core and on-chain, and how/whether/where they read and write content in the distributed data lake.
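A hedged sketch of what a service might look like from a developer's viewpoint, inferred only from the prose above rather than from the JAM white paper: refine runs in-core, accumulate runs on-chain, and work items/packages are the inputs. All signatures, types, and names below are invented for illustration.

```rust
// Sketch only: invented signatures illustrating the refine/accumulate split.
struct WorkItem { service_id: u32, payload: Vec<u8> }  // belongs to one service
struct WorkPackage { items: Vec<WorkItem> }            // a bundle of work items
struct RefineOutput(Vec<u8>);                          // distilled in-core result

trait Service {
    /// In-core ("Join"): executed in parallel on some core, turning a work item
    /// into a small, distilled output.
    fn refine(&self, item: &WorkItem) -> RefineOutput;

    /// On-chain ("Accumulate"): executed by all validators, folding refined
    /// outputs from the cores into this service's portion of the JAM state.
    fn accumulate(&mut self, outputs: Vec<RefineOutput>);

    // A third entry point also exists, but is not discussed in this article.
}
```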
2 Semi-consistency
Anyone who has looked into XCM (the language Polkadot chose for parachain communication) will know that all of this communication is asynchronous: once a message is sent, the sender cannot wait for its reply.
Asynchrony is the hallmark of an inconsistent system, and it is the main drawback of permanently sharded systems (such as Polkadot1 and Polkadot2, as well as Ethereum's existing Layer2 ecosystem).
However, as Section 2.4 of the white paper describes, a fully consistent system that stays synchronous for all of its tenants can only scale up to a point without sacrificing generality, accessibility, or resilience.

Synchronous ≈ Consistency || Asynchronous ≈ Inconsistency
This is another area where JAM stands out: through a number of features, JAM achieves a novel middle ground, a semi-consistent system in which subsystems that communicate frequently can create a consistent environment among themselves, without forcing the whole system to stay consistent. This is best described in an interview with Dr. Gavin Wood, the author of the white paper.
Another way to understand Polkadot/JAM is to see it as a sharded system in which the boundaries between the shards are fluid and determined dynamically.
Polkadot has always been sharded and fully heterogeneous.
Now it will be sharded, heterogeneous, and with shard boundaries that can be chosen flexibly: what Gavin Wood has called a "semi-consistent" system on Twitter.
The features that make all this possible include:
Access to stateless, parallel in-core execution, where different services can interact synchronously only with other services on the same core in a given block, and on-chain execution, where a service can access the results of all services on all cores.
JAM does not enforce any particular scheduling of services. Services that communicate frequently can economically incentivize their schedulers to create work packages containing both of them, so that they end up on the same core, where their communication behaves as if it were synchronous (see the sketch after this list).
In addition, JAM services can use the DA layer as a temporary but extremely cheap data layer. Data placed in the DA layer eventually propagates to all cores, but it is immediately visible within the same core. JAM services can therefore obtain a higher degree of data access by scheduling themselves onto the same core in consecutive blocks.
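The following illustrates the incentivised co-scheduling mentioned in the second point above. The types repeat the earlier sketch and the scheduler is entirely hypothetical; the real economics and scheduler interface are left open by the protocol.

```rust
// Invented illustration of co-scheduling: services that interact frequently bid
// to be placed in the same work package, and therefore land on the same core in
// the same block, where their interaction can be synchronous.
struct WorkItem { service_id: u32, payload: Vec<u8> }
struct WorkPackage { items: Vec<WorkItem> }

/// A hypothetical scheduler: bundle the highest-bidding work items together.
fn build_work_package(mut bids: Vec<(WorkItem, u64)>, max_items: usize) -> WorkPackage {
    bids.sort_by_key(|(_, bid)| std::cmp::Reverse(*bid));
    WorkPackage {
        items: bids.into_iter().take(max_items).map(|(item, _)| item).collect(),
    }
}
```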
Note that while all of the above is possible in JAM, none of it is enforced at the protocol level. Some interfaces are therefore asynchronous in theory, but can behave synchronously in practice through clever abstractions and incentives. CorePlay, discussed next, is an example of this.
3 CorePlay
This section introduces CorePlay, an experimental idea in the JAM environment that can be described as a new Smart Contract programming model. As of writing this article, CorePlay has not been fully described and is still a concept.
To understand CorePlay, we first need to introduce the Virtual Machine chosen by JAM: PVM.
4 PVM
PVM is an important detail of both JAM and CorePlay. Its low-level details are beyond the scope of this article and are best left to the domain experts' description in the white paper. For our purposes, only a few properties of PVM matter:
Efficient metering (gas measurement)
The ability to pause and resume execution
The latter is particularly important for CorePlay.
CorePlay is an example of using JAM's flexible primitives to build a synchronous and scalable smart contract environment with a highly flexible programming interface. CorePlay proposes deploying actor-based smart contracts directly on JAM cores so that they enjoy a synchronous programming interface: they can be written like an ordinary fn main() and communicate via let result = other_coreplay_actor(data).await?. If other_coreplay_actor is on the same core in that JAM block, the call is synchronous; if it lives on another core, the actor is paused and resumed in a later JAM block. This is made possible by JAM services and their flexible scheduling, together with the properties of PVM.
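Expanding the snippet above into a slightly fuller sketch. CorePlay is still only a concept, so none of these types or functions exist today; the only thing taken from the article is the control flow: same core and block, and the `.await` resolves synchronously; different core, and the actor is suspended via PVM and resumed in a later JAM block.

```rust
// Imagined CorePlay-style actors; all names and types are invented.
struct Data(u64);
struct Reply(u64);
struct ActorError;

async fn other_coreplay_actor(data: Data) -> Result<Reply, ActorError> {
    // Same core, same block: this runs to completion without yielding.
    // Different core: the caller is suspended here and resumed later.
    Ok(Reply(data.0 * 2))
}

async fn my_actor() -> Result<Reply, ActorError> {
    let data = Data(21);
    let result = other_coreplay_actor(data).await?;
    Ok(result)
}
```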
5 CoreChains Services
Finally, let's summarize the main reason why JAM is fully compatible with Polkadot: Polkadot's main product is parachains operating on agile coretime, and this product continues in JAM.
One of the earliest services deployed on JAM will likely be called CoreChains or Parachains. This service will allow existing, Polkadot2-style parachains to run on JAM.
Further services can then be deployed on JAM, and the existing CoreChains service can communicate with them. Polkadot's existing product therefore remains intact; JAM only opens new doors for existing parachain teams.
Appendix: Data Sharding
Most of this article discusses scalability from the perspective of execution sharding. We can also look at the same problem from the perspective of data. Interestingly, we find a situation similar to the semi-consistency discussed earlier: in principle, a fully consistent system is nicer but does not scale, and a fully inconsistent system scales but is not ideal, while JAM, with its semi-consistent model, opens up a new possibility.
Fully consistent system: this is what we see in fully synchronous smart contract platforms such as Solana, or in applications bravely deployed solely on Ethereum's Layer1. All application data lives on-chain and is easily accessible to all other applications. An ideal property for programmability, but not scalable.
Inconsistent system: application data is stored off the Layer1, in different, isolated shards. Highly scalable, but poor for composability. Polkadot's and Ethereum's rollup models fall into this category.
JAM provides both of the above, and in addition lets developers post arbitrary data to the JAM DA layer, which acts as a middle ground between on-chain and off-chain data. New applications can be built that keep most of their data in the DA layer, persisting only the absolutely critical data in the JAM state.
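A sketch of that design space, with invented names: an application keeps only a small commitment in JAM state and pushes bulk data to the DA layer, which is cheap but only keeps data available temporarily.

```rust
// Invented illustration of the middle ground described above.
struct Hash([u8; 32]);

enum DataHome {
    JamState, // persistent and visible to everything, but expensive
    DaLayer,  // cheap, temporarily available, eventually reaches all cores
}

struct StoredBlob {
    home: DataHome,
    commitment: Hash, // the JAM state only ever needs to hold this
}

fn place_data(bytes: &[u8], critical: bool) -> StoredBlob {
    // Placeholder commitment; a real implementation would hash `bytes`.
    let _ = bytes;
    StoredBlob {
        home: if critical { DataHome::JamState } else { DataHome::DaLayer },
        commitment: Hash([0u8; 32]),
    }
}
```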
Appendix: Scalability Space Map
This section restates our view of the blockchain scalability landscape. The same is explained in the white paper; this is a more concise version.
The scalability of blockchain largely follows the methods used in traditional distributed systems: scaling up (vertically) and scaling out (horizontally).
Scaling up is what platforms like Solana do: squeeze out maximum throughput by optimizing code and hardware to the limit.
Scaling out is the strategy adopted by Ethereum and Polkadot: reduce the amount of work each participant has to do. In traditional distributed systems this is done by adding more machines. In a blockchain, the "machine" is the entire network's validator set. By dividing work among them (as ELVES does), or by optimistically reducing their duties (as optimistic rollups do), we reduce the load on the validator set as a whole, and the system scales out.
In a blockchain, scaling out amounts to "reducing the number of machines that need to perform every operation".
Summary as follows:
Scale up: high-performance hardware + a maximally optimized monolithic blockchain.
Scale out: sharded execution + reducing the number of machines that must perform every operation.

Appendix: Same Hardware, Kernel Update
This section builds on the analogy Rob Habermeier presented at Sub0 2023 ("Polkadot: Kernel/Userland", Sub0 2023, YouTube), presenting JAM as an upgrade to Polkadot: a kernel update on the same hardware.
In a typical computer, we can divide the entire stack into three parts:
Hardware
Kernel
User Space
In Polkadot, the hardware, that which provides the raw computation and data availability, has always been the cores, as discussed earlier.
In Polkadot, the kernel has so far consisted of two parts:
The parachains (Parachains) protocol: a fixed, opinionated way of using the cores.
A set of low-level functions, such as the DOT token and its transferability, staking, governance, and so on.
Both of these live in Polkadot's relay chain (Relay Chain).
User-space applications are the parachains themselves, their native tokens, and everything built on top of them.
We can visualize the process as follows:
Polkadot has always envisioned moving more of its core functionality to its primary users, namely the parachains. This is exactly what the Minimal Relay RFC aims to achieve.
This means the Polkadot relay chain would only be responsible for providing the parachains protocol, somewhat shrinking the kernel space.
Once this architecture is in place, it becomes easier to picture the migration to JAM: JAM significantly shrinks Polkadot's kernel space and makes it more general-purpose. The parachains protocol then moves into user space, as it becomes one of several ways to write applications on top of the same cores (hardware) and kernel (JAM).
This again illustrates why JAM is a replacement only for Polkadot's relay chain, not for the parachains.
In other words, we can think of JAM migration as a kernel upgrade. The underlying hardware remains unchanged, and most of the content of the old kernel is moved to user space to simplify the system.
To participate in the discussion of this article, please feel free to express your opinions in the forum:
Please refer to our Polkadot Forum user guide for how to participate in the forum discussion:
"How to Participate in Polkadot Discussions: Polkadot Official Forum Usage Guide"
The content is for reference only and is not a solicitation or offer. No investment, tax, or legal advice is provided. See the Disclaimer for further risk disclosure.