Sophia: Statistical Consensus Mechanism
Sophia applies a set of rules that analyze and interpret node metrics to determine each node's functional capacity to participate in the network. Based on this analysis, three categories of validators are activated to process a broad spectrum of transactions submitted at varying fee rates. The mechanism improves block production by expanding the chain's capacity, which allows the standard transaction fee to be set remarkably low, derived from the average metrics described earlier. This standard fee applies to every transaction created and submitted by end users, whose processing speed is comparable to that of high-fee transactions on other blockchains.
Periodically, Sophia evaluates each validator's statistics and generates a list that categorizes validators into three groups. Deep-learning mechanisms oversee the PhronAi block sequence holistically, identifying anomalies and initiating a machine-learning auto-response that mitigates the risk posed by any potentially malicious party. Validators within these groups process transactions immediately, ordered by fee value, benefiting end users, node owners, and developers alike.
Validator participation in the network is continually assessed using various metrics, and the outputs of each mechanism cycle serve as inputs to the next. If a validator's participation drops, its metrics are recorded as low, and low metric inputs can shift a validator's category, for example from super node to fast node or average node. PhronAi thereby motivates validators to engage actively in the network by producing blocks that include the maximum number of transactions. The first step is collecting inputs from useful metrics that each node calculates independently. During the network's initialization phase, these metric input values are supplied by the genesis file and a self-enforcing smart contract. Once the statistical algorithm is fully operational, metric input values are also obtained directly from the event trigger functionality.
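As a rough sketch, the metric inputs gathered in this first step might be modeled as follows; the class, field names, and the collect_inputs helper are illustrative assumptions based on the metrics named in this section, not PhronAi's actual schema:

```python
from dataclasses import dataclass

# Illustrative only: field names are assumptions, not PhronAi's actual schema.
@dataclass
class ValidatorMetrics:
    validator_id: str
    latency_ms: float      # lower is better
    throughput_tps: float  # higher is better
    total_votes: int       # higher is better
    liveness: float        # fraction of recent rounds the node was live

# At initialization these records would come from the genesis file / smart
# contract; once the algorithm is running, the event trigger supplies them too.
def collect_inputs(records: list[dict]) -> list[ValidatorMetrics]:
    return [ValidatorMetrics(**r) for r in records]
```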
Low latency, high throughput, total number of votes, and maximum liveness are sorted as separate lists. For example, in the case of low latency, the individual latency metrics of all validators are used to produce the low-latency sorted list. The sorted list produced for each metric is then passed to the next step. The following details the criteria used to sort each metric list into the super, fast, and average categories.
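Building on the sketch above, the per-metric sorting could look like this; note the direction differs per metric, since low latency is best while the other three are best when high:

```python
def build_sorted_lists(metrics: list[ValidatorMetrics]) -> dict[str, list[str]]:
    """Produce one sorted list of validator IDs per metric."""
    return {
        # low latency: ascending, since lower is better
        "latency": [m.validator_id
                    for m in sorted(metrics, key=lambda m: m.latency_ms)],
        # remaining metrics: descending, since higher is better
        "throughput": [m.validator_id
                       for m in sorted(metrics, key=lambda m: m.throughput_tps,
                                       reverse=True)],
        "votes": [m.validator_id
                  for m in sorted(metrics, key=lambda m: m.total_votes,
                                  reverse=True)],
        "liveness": [m.validator_id
                     for m in sorted(metrics, key=lambda m: m.liveness,
                                     reverse=True)],
    }
```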
A variable x is defined by the equation:
x = Total number of Validators / Number of Groups
The number of validator groups is 3.
The top 'x' entries in each sorted list are considered super metrics. The next 'x' entries in each sorted list are considered fast metrics. All remaining entries are considered average metrics.
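A minimal sketch of this tiering, assuming integer division for x and plain string validator IDs:

```python
def partition(sorted_ids: list[str], num_groups: int = 3) -> dict[str, set[str]]:
    """Split one sorted metric list into super / fast / average tiers."""
    x = len(sorted_ids) // num_groups  # x = total validators / number of groups
    return {
        "super": set(sorted_ids[:x]),        # top x entries
        "fast": set(sorted_ids[x:2 * x]),    # next x entries
        "average": set(sorted_ids[2 * x:]),  # everything remaining
    }
```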
Validators are categorized into super, fast, and average nodes based on the sorted lists of all metrics. A validator is formally designated a super node if it appears in the super tier of every sorted list. Similarly, a validator is classified as a fast node if it appears in the fast tier of at least three sorted lists. If neither criterion is met, the validator is designated an average node. At this point, three comprehensive lists containing the IDs of all validators in the network are compiled, categorizing them as super, fast, and average nodes respectively.
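The membership rules can then be applied across the per-metric tiers; this sketch assumes the partition() output from above and the four metric lists named earlier:

```python
def classify(tiers_per_metric: dict[str, dict[str, set[str]]],
             all_ids: list[str]) -> dict[str, list[str]]:
    """Apply the membership rules across the per-metric tiers.

    tiers_per_metric maps each metric name to the output of partition(),
    i.e. {"super": ids, "fast": ids, "average": ids}.
    """
    groups = {"super": [], "fast": [], "average": []}
    for vid in all_ids:
        in_super = sum(vid in t["super"] for t in tiers_per_metric.values())
        in_fast = sum(vid in t["fast"] for t in tiers_per_metric.values())
        if in_super == len(tiers_per_metric):  # super in every sorted list
            groups["super"].append(vid)
        elif in_fast >= 3:                     # fast in at least three lists
            groups["fast"].append(vid)
        else:
            groups["average"].append(vid)
    return groups
```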
In this phase, the validator IDs in the super, fast, and average categories are used to retrieve their records from databases and caches, producing fully functional validator objects ready to participate in the block-production mechanism.
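A simplified illustration of this hydration step, with the cache and database stood in for by plain dictionaries rather than real storage backends:

```python
def hydrate(groups: dict[str, list[str]],
            cache: dict[str, dict],
            database: dict[str, dict]) -> dict[str, list[dict]]:
    """Resolve categorized IDs into full validator records, cache first."""
    return {
        category: [cache.get(vid) or database[vid] for vid in ids]
        for category, ids in groups.items()
    }
```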
The registry service temporarily stores the active validator groups from these three categories. These groups are also permitted to engage in the event emission and evaluation process for a specific epoch round.
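The registry's role might be sketched as follows; the class and method names are hypothetical:

```python
class Registry:
    """Temporary store for the active validator groups of one epoch round."""

    def __init__(self) -> None:
        self.epoch: int | None = None
        self.active_groups: dict[str, list[str]] = {}

    def store(self, epoch: int, groups: dict[str, list[str]]) -> None:
        self.epoch = epoch
        self.active_groups = groups  # replaced wholesale each epoch round

    def may_participate(self, validator_id: str) -> bool:
        # only validators registered for the current epoch may take part in
        # event emission and evaluation
        return any(validator_id in ids for ids in self.active_groups.values())
```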
In parallel, a self-enforcing event trigger service generates input signals for the initial phase of the statistical algorithm. The service also generates input signals in reaction to particular events, such as the addition or removal of a validator, so that the data-capturing fields of input objects can be initialized or aligned with the input metrics. This phase is technically regarded as the concluding step of the statistical algorithm.
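One way to picture such a service is a small publish/subscribe hub; the event names and handler interface below are illustrative assumptions, not the actual PhronAi interface:

```python
from collections import defaultdict
from typing import Callable

class EventTrigger:
    """Minimal publish/subscribe model of the event trigger service."""

    def __init__(self) -> None:
        self.handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event: str, handler: Callable[[dict], None]) -> None:
        self.handlers[event].append(handler)

    def emit(self, event: str, payload: dict) -> None:
        for handler in self.handlers[event]:
            handler(payload)

# e.g. feed validator-change events back into the next algorithm cycle:
# trigger.subscribe("validator_added", lambda p: pending_inputs.append(p))
```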
This final step also runs in parallel, sharing its output with the process that is already executing continuously in the previous step. Its primary purpose is to monitor the liveness of validators within each group. Whenever the liveness status changes, such as when new validators are added or existing ones are removed, a reporting event is generated and conveyed through the event trigger service.
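Assuming the EventTrigger sketch above, liveness monitoring reduces to a set difference between consecutive observations of the live validator set:

```python
def monitor_liveness(previous: set[str], current: set[str],
                     trigger: EventTrigger) -> None:
    """Diff consecutive observations of live validators and report changes."""
    for vid in current - previous:
        trigger.emit("validator_added", {"validator_id": vid})
    for vid in previous - current:
        trigger.emit("validator_removed", {"validator_id": vid})
```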