

Flux Aggregator Contracts: The Technical Explanation

A deep-dive into Flux Aggregators. - 2020-11-10

RunLog Jobs

Since Chainlink's inception, node operators have been running a classic job type called the RunLog job. RunLog jobs require oracles to listen for specific on-chain events in order to start a new round of aggregation; these requests go out on a set time interval or when the off-chain price of the asset deviates from the latest on-chain price. Each node operator is given a job specification, like the one shown below, which determines when and where they fetch the answer and submit it to the contract. Job specs also contain details such as how to parse the JSON returned by the specified API source and what multiplier to apply to the value.

[Image: RunLog job specification]

Pay close attention to the “initiators” property; it specifies the when for a given job. In the above example the type of initiator is set to RunLog — we can interpret this simply as “run the job when a specific log is observed”. Here, a log is an event emitted on-chain by the oracle contract (in this job spec our node is told to listen specifically for such events emitted by the contract “0x51DE85B0cD5B3684….”). When the oracle contract ‘logs’ an event on-chain, the job runs.
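To make the pattern concrete, here is a minimal Go sketch of "run the job when a specific log is observed", using go-ethereum's log-subscription API. This is illustrative only, not Chainlink's implementation: the endpoint URL, the zeroed placeholder address, and the runJob function are stand-ins for what the job spec actually configures.

package main

import (
	"context"
	"log"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient"
)

// runJob stands in for the node's task pipeline: fetch the API answer,
// parse the JSON, apply the multiplier, and send the result back on-chain.
func runJob(l types.Log) {
	log.Printf("matching log observed in tx %s, starting job run", l.TxHash.Hex())
}

func main() {
	// Hypothetical websocket endpoint; a RunLog initiator needs a live log subscription.
	client, err := ethclient.Dial("wss://example-eth-node")
	if err != nil {
		log.Fatal(err)
	}

	// Placeholder for the oracle contract address named in the job spec.
	oracleAddr := common.HexToAddress("0x0000000000000000000000000000000000000000")

	logs := make(chan types.Log)
	query := ethereum.FilterQuery{Addresses: []common.Address{oracleAddr}}
	sub, err := client.SubscribeFilterLogs(context.Background(), query, logs)
	if err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case err := <-sub.Err():
			log.Fatal(err)
		case l := <-logs:
			runJob(l) // the observed log is the initiator: the job runs now
		}
	}
}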

Let’s look at this event from the aggregator contract side:

[Image: the OracleRequest event in the aggregator contract source code]

We see here that the aggregator contract emits an OracleRequest event when there is a rate update request, which is exactly what the nodes listen for. This is the event that triggers a job. Look at the same event again on Etherscan:

[Image: the same OracleRequest event shown on Etherscan]

Now check one of the many oracle responses to this particular event:

[Image: an oracle response transaction on Etherscan]

In event log 258, notice the AnswerUpdated event emitted with the oracle's updated answer.

The RunLog model works well: the aggregator contract sends requests to the oracle contracts, each oracle contract emits a log, and the nodes listen and respond, keeping the feed constantly updated. But it could be better. For some contracts it is hard to determine how frequent the update interval should be, and when the market is not moving much, sending out a full request/response round is expensive. There is little reason to send every oracle contract the same request transaction when each transaction costs so much gas. This is the problem flux aggregator contracts solve.

Flux Monitors

The flux aggregator is a new type of aggregator contract that Chainlink has been rolling out since July 2020. Its primary goal is to reduce feed-update costs while delivering greater decentralization. Here's how flux aggregators work. Node operators are again provided with the contract address and the API sources they'll be using. The difference: instead of responding to a round initiated by a single request, they poll the API sources for the latest answer at a short interval and respond to each other's new round requests. Each oracle polls to determine whether its answer deviates from the latest on-chain answer by more than a set threshold. Only when the threshold is exceeded does the oracle send a transaction on-chain, marking the start of a new round of aggregation. This means request rounds can be triggered by individual nodes, each monitoring their own set of APIs per feed.
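A rough sketch of that polling loop might look like the following. This is a conceptual illustration only, not the node's actual code: fetchFeeds, latestOnChainAnswer and startNewRound are hypothetical stand-ins, and the real flux monitor adds the eligibility checks covered later in this post.

package main

import (
	"fmt"
	"math"
	"sort"
	"time"
)

// Stand-ins for the node's real machinery.
func fetchFeeds() []float64        { return []float64{1032.4, 1031.9, 1033.1} } // bridge responses
func latestOnChainAnswer() float64 { return 1025.0 }                            // read from the aggregator
func startNewRound(answer float64) { fmt.Println("submitting", answer) }        // on-chain submission

// median aggregates the off-chain feed responses locally.
func median(xs []float64) float64 {
	sort.Float64s(xs)
	n := len(xs)
	if n%2 == 1 {
		return xs[n/2]
	}
	return (xs[n/2-1] + xs[n/2]) / 2
}

func pollLoop(pollPeriod time.Duration, thresholdPct float64) {
	ticker := time.NewTicker(pollPeriod)
	defer ticker.Stop()

	for range ticker.C {
		polled := median(fetchFeeds())  // aggregate locally, no gas spent
		latest := latestOnChainAnswer() // read-only call to the aggregator

		// Relative deviation (%) between the polled and on-chain answers.
		deviation := math.Abs(polled-latest) / math.Abs(latest) * 100

		if deviation >= thresholdPct {
			// Only now does the node spend gas: submit and start a new round.
			startNewRound(polled)
		}
	}
}

func main() {
	pollLoop(30*time.Second, 0.5) // e.g. poll every 30 seconds with a 0.5% threshold
}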

The Code

This is one of the job specifications currently run by Secure Data Links:

[Image: flux monitor job specification run by Secure Data Links]

There are some fairly big changes from the RunLog job spec we looked at earlier. First, the initiator type changes from ‘RunLog’ to ‘fluxmonitor’, and it comes with a much richer set of parameters. Let's break down some of the key fields in the ‘params’ object:

“address” — the flux aggregator address.

“requestData” — price reference data we are collecting (e.g. COMP/USD). This is the payload that will be sent to the bridges in the feeds array next.

“feeds” — an array of API feeds used for the job, e.g.

[{“bridge”: “bridge-coinapi”}, {“bridge”: “bridge-cryptocompare”}, {“bridge”: “bridge-coinpaprika”}]

where each ‘bridge’ is an external adapter that connects to an API provider.

So far the data is not too different from the RunLog job specification. The remaining fields, however, are specific to flux job specs and are extremely important to understanding flux aggregators in general:

“pollTimer” — this field is fairly intuitive; poll the API sources on the given time interval.

“threshold” — the deviation threshold which when exceeded triggers a new round. Have a look at exactly how the threshold number is being used:

[Image: how the threshold is applied in the flux monitor source code]

Github

Within the flux monitor source code we see that the function pollIfEligible is called on each poll timer tick. On each idle timer tick, pollIfEligible is called again, but this time with the parameters 0,0, which forces a poll whenever the idle timer fires.

[Image: the poll timer and idle timer ticks in the flux monitor source code]
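As a hedged illustration of the two timers (the wiring below is not the node's actual code, and the real pollIfEligible has a richer signature), the point of the 0,0 call is that zeroed thresholds are always "exceeded", so an idle-timer tick forces an update even when the price has not moved:

package main

import (
	"fmt"
	"time"
)

// pollIfEligible stands in for the node's eligibility and deviation check; the
// two parameters are assumed here to be deviation thresholds, matching the
// 0,0 call described above.
func pollIfEligible(thresholdPct, absoluteThreshold float64) {
	fmt.Printf("poll with threshold=%v%%, absoluteThreshold=%v\n", thresholdPct, absoluteThreshold)
}

// run wires up the two timers (illustrative only).
func run(pollPeriod, idlePeriod time.Duration, thresholdPct, absoluteThreshold float64) {
	pollTicker := time.NewTicker(pollPeriod)
	idleTicker := time.NewTicker(idlePeriod)
	defer pollTicker.Stop()
	defer idleTicker.Stop()

	for {
		select {
		case <-pollTicker.C:
			// Normal path: submit only when the configured deviation is exceeded.
			pollIfEligible(thresholdPct, absoluteThreshold)
		case <-idleTicker.C:
			// Heartbeat path: zeroed thresholds force an update even on a flat price.
			pollIfEligible(0, 0)
		}
	}
}

func main() {
	// Hypothetical settings: poll every 30 seconds, force an update at least hourly.
	run(30*time.Second, time.Hour, 0.5, 0)
}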

A look inside the pollIfEligible function gives us a good idea of what the oracle checks each time before it submits to a new round:

[Image: the pollIfEligible function]

Github

The oracle checks, in order (sketched in code after the list):

  • Is an Ethereum node connected?
  • Am I (the oracle) eligible for this flux contract round (calls the roundState method on the contract)?
  • Have I already submitted to this same round?
  • Can the aggregator pay me (checks contract balance)?
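In Go-flavoured pseudocode, with hypothetical helper and field names (only the order of the checks is taken from the source above), the gatekeeping looks roughly like this:

package main

import "fmt"

// RoundState loosely mirrors the data returned by the aggregator's roundState
// method; the field names here are illustrative.
type RoundState struct {
	RoundID          uint32
	EligibleToSubmit bool
	AvailableFunds   uint64
	PaymentAmount    uint64
}

// Stubs standing in for the node's real checks.
func ethNodeConnected() bool             { return true }
func alreadySubmitted(round uint32) bool { return false }
func roundState() RoundState {
	return RoundState{RoundID: 42, EligibleToSubmit: true, AvailableFunds: 100, PaymentAmount: 10}
}

// shouldPoll applies the four checks in the order listed above.
func shouldPoll() bool {
	if !ethNodeConnected() {
		return false // no Ethereum connection, nothing to do
	}
	state := roundState()
	if !state.EligibleToSubmit {
		return false // this oracle is not allowed into the current round
	}
	if alreadySubmitted(state.RoundID) {
		return false // one submission per oracle per round
	}
	if state.AvailableFunds < state.PaymentAmount {
		return false // the aggregator cannot pay for this submission
	}
	return true
}

func main() {
	fmt.Println("eligible to poll:", shouldPoll())
}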

If all checks pass, the oracle will poll an answer from its specified API sources: this happens locally on the node (off-chain).

[Image: polling the API sources and checking the deviation in the flux monitor source code]

The node first pulls the latestAnswer from the contract and, after receiving the polledAnswer from its API sources, compares the two to see whether the difference stays within the specified threshold. If the threshold is exceeded, the node calls the createJobRun function with the polled answer, an incremented round ID, and the expected payment. It's worth stressing that the node performs this aggregation locally, taking the median of its feed sources, so no gas cost has been incurred yet. This same process might run hundreds of times without ever reaching the final step: submission.
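As a quick worked example with made-up numbers: with a 0.5% threshold and a latestAnswer of 1000.0, a polledAnswer of 1007.0 deviates by 0.7% and triggers a new round, while a polledAnswer of 1003.0 deviates by only 0.3% and does not.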

With the createJobRun function the node takes its polled answer and starts a new request round. It does this by calling the submit function of the flux aggregator contract:

[Image: the submit function of the flux aggregator contract]

Github
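To picture the node-side call, here is a minimal Go sketch. The Submitter interface and fakeAggregator are stand-ins for an abigen-style contract binding rather than Chainlink's actual code; the only detail carried over from the contract shown above is that submit receives a round ID and an answer.

package main

import (
	"fmt"
	"math/big"
)

// Submitter is a stand-in for a contract binding exposing the aggregator's submit function.
type Submitter interface {
	Submit(roundID, submission *big.Int) error
}

// fakeAggregator lets the sketch run without a real chain connection.
type fakeAggregator struct{}

func (fakeAggregator) Submit(roundID, submission *big.Int) error {
	fmt.Printf("submit(roundId=%s, submission=%s)\n", roundID, submission)
	return nil
}

// submitNewRound is the node-side step described above: take the polled
// answer, bump the round ID, and call submit on the aggregator contract.
func submitNewRound(agg Submitter, latestRound, polledAnswer *big.Int) error {
	nextRound := new(big.Int).Add(latestRound, big.NewInt(1)) // incremented round id
	return agg.Submit(nextRound, polledAnswer)                // the only gas-spending call
}

func main() {
	if err := submitNewRound(fakeAggregator{}, big.NewInt(41), big.NewInt(103150000000)); err != nil {
		fmt.Println("submission failed:", err)
	}
}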

FluxAggregator.sol, the source file of the flux aggregator contract our node called, initialises a new round on the contract side (on-chain), records the submitted answer to be aggregated, pays the node, and validates the answer. This marks the conclusion of a flux job run. Etherscan shows how the process is recorded on-chain:

[Image: a flux aggregator submission transaction on Etherscan]

Etherscan

Pay attention to the NewRound event: all the other oracles now see that a new round has started and submit their own answers.

Why flux?

Flux monitors are designed to save gas and maintain efficiency. We saw that flux aggregators follow a flexible round model, starting rounds only when needed, unlike RunLog's fixed update schedule. But that's not the only advantage the flux aggregator contract provides. It also keeps transaction costs low: the screenshot below shows that the price of the 'submission' (round-starting) transaction is $3. Also note the gas price of 47 Gwei.

[Image: a flux aggregator submission transaction on Etherscan, costing $3 at a gas price of 47 Gwei]

Here’s a request transaction for the RunLog job:

[Image: a RunLog request transaction on Etherscan, costing $26.74 at a gas price of 27.4 Gwei]

Despite a lower gas price (27.4 Gwei), the transaction costs $26.74, almost 10 times the flux request cost. Note the tokens transferred to each of the oracles that participated in the round; the on-chain work needed to record all of these transfers is what generates the higher fee. Flux aggregators, on the other hand, 'pay' oracles by simply updating an internally tracked withdrawable LINK balance, which greatly reduces transaction costs:

[Image: the withdrawable LINK accounting in the flux aggregator source code]

Github
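A small sketch of why this is cheaper, with hypothetical types and names: instead of transferring LINK to every oracle in every round, the aggregator just bumps a per-oracle withdrawable balance, and tokens only move when an operator later withdraws.

package main

import "fmt"

// fluxLedger models the internal balance tracking; it is an illustration,
// not the contract's actual storage layout.
type fluxLedger struct {
	withdrawable map[string]uint64 // oracle address -> accrued LINK
}

func (l *fluxLedger) recordPayment(oracle string, amount uint64) {
	// One storage update per round instead of a full token transfer per oracle.
	l.withdrawable[oracle] += amount
}

func (l *fluxLedger) withdraw(oracle string) uint64 {
	amount := l.withdrawable[oracle]
	l.withdrawable[oracle] = 0 // a single token transfer would happen here
	return amount
}

func main() {
	ledger := &fluxLedger{withdrawable: map[string]uint64{}}
	for round := 0; round < 3; round++ {
		ledger.recordPayment("0xOracleA", 10) // cheap bookkeeping each round
	}
	fmt.Println("withdrawn:", ledger.withdraw("0xOracleA")) // one transfer at the end
}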

An additional benefit of the flux monitor model is the natural safeguard it provides against outlying data. Each oracle node takes the median of the API responses it polls, and the flux aggregator contract then takes the median of the submissions again (once a minimum number of nodes have responded), which eliminates potential outliers caused by API or oracle node failures.

That's a wrap for this technical deep dive into Chainlink's flux aggregator contract. Flux aggregators not only reduce the total number of transactions needed for an aggregated answer update, they also bring down the cost of each request transaction by trimming the data recorded on-chain. As of October 14th, Chainlink has switched all active feeds over to flux aggregators. The legacy RunLog aggregators are not abandoned, however; they are kept updating once per day as a warm backup in case of a primary feed failure.

Though flux monitors ease some of these legacy pain points, the future of Chainlink reference feeds is off-chain reporting (OCR), in which nodes communicate off-chain before reporting a price on-chain. Since detecting OCR tests on mainnet we've been working on another deep dive. Stay tuned.
