Delivering Fair Gas Fees Through Resource Usage Metering

Sui's parallel processing requires a different approach to gas fee calculation than traditional blockchains, which process only a single transaction at a time.


Sui's massively parallel processing requires new thinking about how to apply gas fees, the cost of processing transactions on the network. In our work, we examine computation costs and instruction processing to engineer an optimal gas fee mechanism. Accurately assessed gas fees not only spread network costs fairly and support healthy operational business models, they also encourage developers to follow best practices and use resources appropriately.

Sui's gas model leverages a Validator survey to set the gas price and ensure that the cost of on-chain transactions is disciplined by the network's operational costs. However, defining the specific cost of a Sui transaction is complicated by the fact that each transaction includes multiple instructions, and the network can process multiple transactions simultaneously. In other words, how many gas units should each transaction cost?
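To make the split concrete, the fee a user ultimately pays can be thought of as the gas units consumed (set by the resource-based gas schedule described below) multiplied by the gas price (set by the validator survey). The sketch below is illustrative only; the function and parameter names are ours, not Sui's actual API.

```rust
// Illustrative sketch only: identifiers are hypothetical, not Sui's API.
// The validator survey fixes the gas price; the resource-based gas schedule
// described in the rest of this post fixes the gas units a transaction uses.
fn transaction_fee(gas_units_used: u64, reference_gas_price: u64) -> u64 {
    gas_units_used * reference_gas_price
}
```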

Applying per-transaction gas fees in the same manner as other blockchains, which can only process transactions serially, would likely overcharge Sui users. Our model involves measuring instruction count, stack depth, and memory usage to more accurately represent network resource usage. Overall, our ongoing work involves simulated and real-world testing to combine a market-based gas price with a resource-based gas schedule, delivering an efficient gas fee mechanism that serves Sui's users and network health.

In sum, we've developed Sui's gas fee mechanism through the following broad steps:

  • Determine the best way to count resource usage
  • Design a method to calculate computation costs
  • Analyze the gas fee model based on various load assumptions

Resource usage on a parallel processing network 

As blockchains are a shared resource, gas fees account for the usage of that resource, compensating validators for the operational costs of maintaining the network. In addition, gas fees affect user behavior, encouraging judicious use of the network and, hopefully, discouraging misuse.

Many blockchain environments, such as Solidity on the Ethereum Virtual Machine and certain dialects of Move, use bytecode-based metering approaches, associating a specific cost with each individual bytecode instruction. In these settings, how accurately each bytecode cost reflects the underlying computational cost (i.e., CPU cycles) matters less than the relative magnitudes of the instruction costs. For example, if an Add instruction has a runtime of 1 millisecond and Div has a runtime of 2 milliseconds, what matters is that gas_cost(Div)/gas_cost(Add) ~= 2, while the actual numbers assigned to gas_cost(Add) and gas_cost(Div) matter less.
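As a minimal illustration of this serial, instruction-level style of metering (with a toy instruction set and made-up costs, not any real VM's gas table):

```rust
// Toy example of bytecode-based metering: each instruction kind carries a
// fixed cost, and only the ratios between the costs really matter.
enum Bytecode {
    Add,
    Div,
}

fn gas_cost(op: &Bytecode) -> u64 {
    match op {
        Bytecode::Add => 1, // baseline unit
        Bytecode::Div => 2, // roughly twice the runtime of Add, so twice the cost
    }
}
```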

In more traditional blockchains, where a total order on execution is part of the protocol, it makes sense to think of a single transaction as an exclusive use of the network. In particular, if one user runs some computation, another user cannot run theirs until the prior computation completes. In such a setting, gas costs need to closely reflect the relative runtime of a transaction because transaction processing is a zero-sum game: if I get time, everyone else does not.

"Pricing a transaction as if it was running by itself may not be accurate, and in fact would most likely significantly overcharge the user."

On Sui, however, the protocol includes massively parallel execution, making resource use non-exclusive. Pricing a transaction as if it was running by itself may not be accurate, and in fact would most likely significantly overcharge the user. Transaction processing is no longer necessarily a zero-sum game: I can have time, and thousands or even hundreds of thousands of other users can share that same time to process their transactions.

Sui's parallelism caused us to take a step back and examine the traditional instruction-based approaches to gas metering during execution, and whether they were the best fit for this new protocol. In particular, parallelism adds fluctuation to the cost of resource usage, which calls for a model that groups similar operations together instead of applying instruction-specific pricing. For example, charging one specific cost for adding two integers and a different specific cost for dividing them matters less, because the actual runtime costs are harder to predict due to parallel execution artifacts.

With this in mind, we set out to design a gas model that captures the new parallel aspects of execution, keeps the resulting model as easy to understand and as reasonable as possible, and, most importantly, gives those wins back to Sui's users. Developing this model led us to reject the approach of adapting an inherently serial, historical bytecode-based gas mechanism that attempts to measure each computational cycle, and instead to design a curve- or tier-based gas model for execution.

Designing Sui's gas model

Coming up with an accurate model for computation costs, leading to fair gas fees, required discarding relatively easy paths, such as measuring transaction time, and investigating other computational factors. Along with arriving at a reasonable model, we also needed to consider Sui's storage fee, which allows for long-term on-chain storage.

Computational costs

With parallelism and efficient computation native to Sui, the comparative overhead of executing a single instruction becomes less important to the overall execution time or duration of the transaction. Instead, the sheer number of instructions executed matters the most in determining accurate overhead. Only after executing large numbers of bytecode instructions does the actual runtime cost difference for each instruction become significant to the overall runtime of the transaction.

In addition, creating a cost model that accurately reflects the real-world costs of execution, even for a serial execution system, requires overcoming a number of challenges:

  • Overly complex gas charging mechanisms are difficult to understand, program against, and implement accurately.
  • The gas calculation for an instruction can be nearly as costly as the execution of the actual instruction. 
  • The execution times for the same instruction may vary significantly depending on other contextual information not taken into account in the model (which is essentially impossible to model reasonably), regardless of the complexity of the gas model. The paper, Broken Metre: Attacking Resource Metering in EVM, by researchers Daniel Perez and Benjamin Livshits, explores these challenges in depth.

With these issues in mind, we set out to explore whether there was a better cost model for gas computation in Sui. We examined ways of coarsening the cost model through different dimensions of computation, rather than relying on precise and discrete counting of instructions and memory. Additionally, we aimed to keep the resulting model as simple and easy to understand as possible, while also ensuring that it is fair and accurate for users.

From this exploration, we arrived at a model where we track multiple dimensions of computational cost simultaneously during execution, based on three factors:

  • Instruction count
  • Stack depth 
  • Memory usage 

Each of these costs contributes to transaction resource usage in a different way, with a different weight, and the cost of using one unit of each (e.g., executing a single instruction, allocating a single byte) increases as more and more of that resource is used over the course of the transaction's execution. How much each cost increases, and how much of the resource needs to be used before it starts increasing, are defined independently of each other using different cost curves. The per-dimension costs accrued during execution are then combined using step functions.
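A minimal sketch of this idea appears below. The tier boundaries and multipliers are invented for illustration and are not Sui's production gas schedule; the point is that each dimension accumulates independently, and its marginal price steps up as usage crosses successive thresholds.

```rust
// Illustrative only: tier boundaries and multipliers are hypothetical, not
// Sui's production gas schedule. Each dimension (instructions, stack depth,
// memory) gets its own step function; the marginal price of one more unit
// rises once cumulative usage crosses a tier boundary.
struct Tier {
    lower_bound: u64, // usage at which this tier begins
    unit_cost: u64,   // gas charged per unit while in this tier
}

fn charge(usage: u64, tiers: &[Tier]) -> u64 {
    let mut total = 0;
    for (i, tier) in tiers.iter().enumerate() {
        let upper = tiers.get(i + 1).map(|t| t.lower_bound).unwrap_or(u64::MAX);
        if usage > tier.lower_bound {
            // Charge only the portion of usage that falls inside this tier.
            total += (usage.min(upper) - tier.lower_bound) * tier.unit_cost;
        }
    }
    total
}

fn main() {
    // Hypothetical schedule: 1 gas per instruction up to 100,000 instructions,
    // with the marginal cost stepping up sharply after that.
    let instruction_tiers = [
        Tier { lower_bound: 0, unit_cost: 1 },
        Tier { lower_bound: 100_000, unit_cost: 4 },
        Tier { lower_bound: 1_000_000, unit_cost: 16 },
    ];
    // Memory and stack depth would have their own, independently chosen tiers.
    let cost = charge(250_000, &instruction_tiers);
    println!("computation units charged: {cost}");
}
```

Because each dimension has its own tiers, an instruction-heavy but memory-light transaction is priced differently from one that allocates aggressively, while both stay cheap below the first tier boundary.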

The core idea behind this new model is that we don't aim for precision in an exact, cycle-accurate manner, but rather define a range within which transaction costs are similar. Additionally, the individual cost of a single instruction matters less than the overall cost of executing all of the instructions in the transaction. Therefore, this new design takes a more dynamic view of resource costing: the more you use a resource in a transaction, the more that resource costs. We drew some inspiration from the paper, An Adaptive Gas Cost Mechanism for Ethereum to Defend Against Under-Priced DoS Attacks, by the researchers Ting Chen, Xiaoqi Li, Ying Wang, Jiachi Chen, Zihao Li, Xiapu Luo, Man Ho Au, and Xiaosong Zhang.

Dynamically pricing instructions and resource usage in this way lets us underprice resource usage within certain bounds, since we know that if those bounds are exceeded (e.g., in an attack scenario), the dynamic pricing of these resources during execution will eventually cause the computation to become too expensive to exploit reasonably. This ability to underprice resource usage at the lower end of computational usage (e.g., fewer than 100,000 instructions, less than 1 megabyte of total allocations) allows us to incentivize and encourage the idea of "conscious computations" while minimizing complexity in implementation, explanation, and reasoning.

"This design makes the need to optimize code for gas cost less of an issue."

Due to the simplicity of the model at the lower levels of computational complexity (where every instruction costs one gas unit and every allocation costs one gas unit), the model is easy to understand and inexpensive to apply. This design makes optimizing code for gas cost less of an issue; because every instruction costs the same, there is little such optimization to do.

In this way we use our gas model to encourage proper program design and remove the focus on minor details, single bytecode instructions, and obscure optimizations that can often be harmful to the program. At the same time, we want developers to be aware of their computations and the resources they use, while still accounting fairly for that resource usage.

Non-computational costs

In addition to the computational costs described above, other costs contribute to the overall cost of executing a transaction on Sui. The two primary sources of additional cost today are long-term storage and gas cost rounding.

Storage costs are handled in a unique way via Sui's Storage Fund. Once we finish executing a transaction, we compute the data to be allocated to long-term storage, and this additional cost is added to the overall cost of executing the transaction. Moreover, if someone deletes storage, a storage rebate is given that can reduce the total cost of the transaction or even result in a net refund. However, these storage rebates are only assessed after the transaction executes and therefore cannot contribute to the overall amount of gas that can be used during execution.

Gas cost rounding occurs due to the step function in gas metering, where we put computational units in buckets ranging from 1,000 to 5,000,000. Any transaction using less than 1,000 computational units would be rounded up to 1,000, resulting in a negligible additional cost.
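A hedged sketch of this post-execution accounting follows. Only the 1,000-unit floor and 5,000,000-unit ceiling come from the description above; the intermediate bucket boundaries, the function and parameter names, and the exact refund arithmetic are illustrative assumptions rather than Sui's actual implementation.

```rust
// Illustrative sketch of post-execution gas accounting. Computation units are
// rounded up into buckets, storage costs are added after execution, and
// deleted storage produces a rebate that can reduce the total or even turn it
// into a net refund. The intermediate bucket values are hypothetical.
fn bucket_computation_units(raw_units: u64) -> u64 {
    // Only the 1,000 floor and 5,000,000 ceiling come from the text above.
    const BUCKETS: [u64; 5] = [1_000, 10_000, 100_000, 1_000_000, 5_000_000];
    for b in BUCKETS {
        if raw_units <= b {
            return b;
        }
    }
    5_000_000
}

fn total_charge(raw_units: u64, gas_price: u64, storage_cost: u64, storage_rebate: u64) -> i128 {
    let computation = bucket_computation_units(raw_units) * gas_price;
    // A large rebate can make the result negative, i.e. a net refund.
    computation as i128 + storage_cost as i128 - storage_rebate as i128
}
```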

Detailed Sui gas cost analysis

The gas cost model we designed needs to work in practice as well as in theory. Testing it involved looking at transaction costs for different synthetic workloads. In particular, given N instructions, what would the costs look like for different amounts of allocation, or different numbers of stack pushes and pops, for each instruction in the cost model?

Synthetic case 1: Instructions with no allocation

We start by looking at the most synthetic workload of all, instructions with no allocations that also do not affect the stack. Each instruction is essentially a NoP instruction. Below we see the cost for execution on the y-axis as more and more instructions are executed along the x-axis.

[Graph: execution cost (y-axis) versus number of instructions executed (x-axis)]
The cost of instructions with no allocations begins to rise at certain thresholds, as opposed to increasing linearly. Our model makes transactions with excessive instructions costly, which should limit resource-hungry apps.

As we can see from this test, and as discussed previously, the cost per instruction is not linear. After a certain threshold, charges are multiplied by a constant, making the final cost more expensive. The objective is to keep costs contained for as much usage and for as long as possible, while at the same time avoiding denial-of-service attacks or usages of the network that have a similar effect.
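As a purely illustrative worked example (with hypothetical numbers, not Sui's actual tiers): if the first tier charged 1 gas unit per instruction up to 100,000 instructions and the next tier doubled the per-instruction charge, a transaction executing 150,000 instructions would be charged 100,000 × 1 + 50,000 × 2 = 200,000 units, rather than 150,000 under a flat schedule.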

This non-linear increase in costs lets us keep costs contained for most common usages of the network, while at the same time allowing heavy transactions to be executed for higher fees if and when users are willing to pay. Finally, there has to be a point where resource usage is no longer justified and may penalize the system and other users. So, for certain levels of resource consumption, we want to make the cost prohibitively expensive (while not placing a strict upper bound).

Synthetic case 2: Pure allocation

In the second synthetic test case, we looked at an idealized setting where we somehow allocate bytes of memory during execution without executing any instructions. The graph below shows the cost along the y-axis for allocating up to 43 megabytes of memory.

[Graph: cost (y-axis) versus memory allocated (x-axis), up to 43 megabytes]
The cost of pure memory allocation rises at certain thresholds, similar to our previous test case using only instructions. However, here the costs increase more quickly as memory usage grows.

Again, as in the previous case, the cost of allocating memory scales non-linearly as more and more memory is allocated during execution. However, compared to instruction execution, the non-linear scale-up for memory allocation is much steeper as the upper end of the cost range is reached.

Having looked at the different primary dimensions separately, we can now take a look at some more realistic (yet still synthetic) workloads.

Synthetic case 3: Instructions with different allocation amounts 

Having looked at instruction and memory costs in isolation, and at how costs change as more instructions are executed or more memory is allocated, we can now analyze the cost landscape when we put them together. The graph below shows costs on the y-axis as the number of instructions increases on the x-axis, for workloads with different allocation amounts.

[Graph: cost (y-axis) versus number of instructions (x-axis) for five workloads with different allocation amounts]
In these synthetic loads, the two most likely to mimic real-world use are the memory-plus-instructions workloads (orange and bright blue). Gas costs increase substantially after particular thresholds.

One thing to note is how the cost stays very contained over a range of instruction counts and then grows steeply from there. This gets back to what we mentioned earlier: we can afford to underprice costs at the lower end of the computational usage scale because, as computational usage within a transaction increases, the cost of that computation increases dramatically along with it, counteracting any underpricing or computational artifacts not captured by the cost model.

Real world testing and tuning

The tuning and evolution of the Sui gas model, both for computational and non-computational aspects of the system, is an ongoing process in which we need to constantly evaluate how the current model behaves in the real world and determine where the pain points are for developers and users. We plan to continue developing a fair and comprehensible model that can be easily programmed against, that avoids discouraging good programming habits, and that incentivizes good behavior.

Looking to the future, the concept of opportunity cost, and the way gas fees and the gas model relate to the parallel nature of execution in Sui, mean that we will need to think about ways of charging additional costs for interacting with or using shared objects, which create serialized sections of computation.

Additionally, we will want to add charges to cover non-computational costs of the transaction, such as network bandwidth. We will look at ways that the transaction size, the count and size of input objects, and the count and size of input arguments can be factored into the overall cost of executing the transaction.

Sui's gas story continues to evolve, and we value community feedback both on where the gas model could be improved and on where tooling and greater visibility into how transactions use gas would be useful.