Once a registrant has submitted an **Application**, we will conduct an initial **Administrative Review** of every submission to ensure that it complies with our **RULES** and other requirements. Each application that passes this review is assigned to five members of our **Evaluation Panel**, who are responsible for assessing it using our **Trait Scoring Rubric**. Each assigned judge offers both a score and comments for each of four traits, resolving a score between 0 and 5 points per trait, in increments of 0.1 (for example: 0.4, 3.7, 5.0, *etc*.). The trait scores combine to produce a total score.
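The arithmetic above can be sketched in a few lines of code. This is an illustrative example only: the trait names and score values below are hypothetical, not part of the actual rubric.

```python
# Hypothetical sketch: one judge's assessment of one application.
# Trait names and values are illustrative, not the rubric's actual traits.
trait_scores = {"trait_1": 3.7, "trait_2": 0.4, "trait_3": 5.0, "trait_4": 2.5}

# Each trait score must fall between 0 and 5, in increments of 0.1.
for score in trait_scores.values():
    assert 0.0 <= score <= 5.0
    assert abs(score * 10 - round(score * 10)) < 1e-9  # multiple of 0.1

# The four trait scores combine to produce the total score.
total = sum(trait_scores.values())
print(round(total, 1))  # prints 11.6
```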

The most straightforward way to hold everyone to the same standard would be to have the same judges score every application; unfortunately, given the number of applications we expect to receive, that is not possible.

Since the same judges will not score every application, the question of *fairness* needs to be addressed carefully. One judge may be a hard grader, taking a more critical view and, for example, giving every assigned applicant scores only between 1.0 and 2.0; meanwhile, another judge may be more generous, scoring every assigned applicant between 4.0 and 5.0.

For illustrative purposes, let’s consider the scores of two hypothetical judges.

The first judge is far more generous than the second, who gives much lower scores. If your application were rated by the first judge, it would earn a much higher total score than if it were assigned to the second.

We have a way to address this issue: no matter which judges are assigned to you, each application will be treated fairly. To do this, we use a mathematical technique that relies on two measures of a distribution, the **mean** and the **standard deviation**.

The **mean** is the average score: take all the scores assigned by a judge, add them up, and divide the sum by the number of scores assigned.

Formally, we denote the **mean** like this:
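The formula itself does not appear here, so the following is a standard formulation in our own hypothetical notation, assuming judge $j$ assigns scores $x_{j,1}, \dots, x_{j,N_j}$ to the $N_j$ applications assigned to them:

$$\mu_j = \frac{1}{N_j} \sum_{i=1}^{N_j} x_{j,i}$$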

The **standard deviation** measures the “spread” of a judge’s scores. For example, imagine that two judges give the same mean (average) score, but one gives many zeros and fives while the other gives mostly ones and fours. It wouldn’t be fair if we didn’t account for this difference.

Formally, we denote the **standard deviation** like this:
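The formula itself does not appear here, so the following is a standard formulation in our own hypothetical notation, assuming judge $j$ assigns scores $x_{j,1}, \dots, x_{j,N_j}$ with mean $\mu_j$:

$$\sigma_j = \sqrt{\frac{1}{N_j} \sum_{i=1}^{N_j} \left(x_{j,i} - \mu_j\right)^2}$$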

To ensure that the judging process is fair, we rescale all the scores to match the judging population. To do this, we measure the mean and the standard deviation of all scores across all judges. Then, we adjust each judge’s mean and standard deviation to match those population values.

We rescale the standard deviation like this:
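The original formula is not shown here, but a standard spread-rescaling step (in our own hypothetical notation, where $x_{j,i}$ is a raw score from judge $j$, $\mu_j$ and $\sigma_j$ are that judge’s mean and standard deviation, and $\sigma$ is the standard deviation of all scores across all judges) would be:

$$\tilde{x}_{j,i} = \mu_j + \left(x_{j,i} - \mu_j\right)\frac{\sigma}{\sigma_j}$$

This stretches or compresses the judge’s scores around their own mean until their spread matches the population’s.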

Then, we rescale the mean like this:
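The original formula is not shown here, but a standard mean-rescaling step (in our own hypothetical notation, where $\tilde{x}_{j,i}$ is judge $j$’s score after the spread has been rescaled, $\mu_j$ is that judge’s mean, and $\mu$ is the mean of all scores across all judges) would be:

$$\hat{x}_{j,i} = \tilde{x}_{j,i} - \mu_j + \mu$$

Combining both steps gives the familiar z-score form $\hat{x}_{j,i} = \mu + \left(x_{j,i} - \mu_j\right)\sigma/\sigma_j$.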

Basically, we are finding the difference between a single judge’s distribution and the distribution of all judges combined, then adjusting each score so that no one is treated unfairly based on which judges they are assigned.

If we apply this rescaling process to the same two judges from the example above, we can see the outcome in the final normalized scores. They appear more similar, because they are now aligned with the distribution of scores across the total judging population.

We are pleased to answer any questions you have about the scoring process. You can ask them on the discussion forums once you register and begin developing your application.