Frequently Asked Questions (FAQ)

Last updated: September 10th, 2020

If you have questions about iPredikt, please read our FAQ below. If you still have questions about how iPredikt works, you can email us at [email protected].

1. What is iPredikt?

iPredikt is a crowd prediction site where we aim to bring users together on the same platform, help them improve their prediction skills, and produce aggregate predictions that are more accurate and consistent than any individual user. On iPredikt, you can hone your prediction skills, participate in various prediction tournaments, engage with other contestants and win exciting prizes. Unlike some prediction markets, on iPredikt you can share your reasoning with other contestants and challenge your assumptions.

iPredikt is inspired by the Good Judgment Project, a multi-year research project which showed that the wisdom of the crowd could be applied to prediction. iPredikt is designed for anyone and everyone to improve their prediction skills while enjoying the journey.

2. What is prediction?

Each question has multiple answer options. A prediction on our platform is an assignment of a probability (between 0 and 100) to each answer option of a question, such that the probabilities of all options together sum to 100.
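As a sketch, the validity rule above could be checked like this (the function name is our own, for illustration only):

```python
def is_valid_prediction(probs):
    """A prediction assigns a probability (0-100) to each answer option,
    and the probabilities must sum to exactly 100."""
    return all(0 <= p <= 100 for p in probs) and sum(probs) == 100

print(is_valid_prediction([70, 30]))      # True: a valid yes/no prediction
print(is_valid_prediction([60, 10, 30]))  # True: three options summing to 100
print(is_valid_prediction([80, 30]))      # False: sums to 110
```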

3. What are challenges?

Competitions on iPredikt are called Challenges. Challenges are collections of questions organized by a theme or topic. Each challenge has its own leaderboard, which ranks contestants by how much more accurate their predictions were than the crowd.

4. How is scoring done on iPredikt?

Our primary measure of accuracy is called the Accuracy Score, which compares your accuracy to the crowd’s. Remember that lower scores always indicate better accuracy, so negative Accuracy Scores are better than positive Accuracy Scores. On your profile page, next to each question you’ll see several columns. Here’s a more detailed explanation of each:

Brier Score: The Brier score is used to quantify the accuracy of weather predictions, but it can be used to describe the accuracy of any probabilistic prediction. The Brier score indicates how far away from the truth your prediction was.

The Brier score is the squared error of a probabilistic prediction. To calculate it, we divide your prediction by 100 so that your probabilities range between 0 (0%) and 1 (100%). Then, we code reality as either 0 (if the event did not happen) or 1 (if the event did happen). For each answer option, we take the difference between your prediction and the correct answer, square each difference, and add them all together. For a yes/no question where you predicted 70% and the event happened, your score would be (1 – 0.7)² + (0 – 0.3)² = 0.18. For a question with three possible outcomes (A, B, C) where you predicted A = 60%, B = 10%, C = 30% and A occurred, your score would be (1 – 0.6)² + (0 – 0.1)² + (0 – 0.3)² = 0.26. The best (lowest) possible Brier score is 0, and the worst (highest) possible Brier score is 2.
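The calculation above can be sketched in a few lines of Python; this reproduces the two worked examples from the text (the function name is ours, for illustration):

```python
def brier_score(prediction, outcome_index):
    """Squared-error (Brier) score for a multi-option prediction.

    prediction: probabilities in percent (summing to 100), one per option.
    outcome_index: index of the option that actually occurred.
    """
    probs = [p / 100 for p in prediction]  # rescale percentages to 0..1
    # Code reality as 1 for the option that happened, 0 for the others.
    truth = [1 if i == outcome_index else 0 for i in range(len(probs))]
    # Sum the squared differences across all answer options.
    return sum((t - p) ** 2 for t, p in zip(truth, probs))

# The two examples from the text:
print(round(brier_score([70, 30], 0), 2))      # 0.18
print(round(brier_score([60, 10, 30], 0), 2))  # 0.26
```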

You will not receive a Brier Score until you make your first prediction. We calculate a Brier score for every day on which you have an active prediction (over the life of the question). Then we take the average of those daily Brier scores and report it on your profile page. Once you make a prediction on a question, that prediction is carried forward each day until you update it by making a new prediction.
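The daily averaging with carry-forward might look like the following sketch (day labels and helper names are our own assumptions):

```python
def brier(probs_pct, outcome):
    """Brier score for one prediction (percent probabilities)."""
    probs = [p / 100 for p in probs_pct]
    return sum(((1 if i == outcome else 0) - p) ** 2
               for i, p in enumerate(probs))

def question_brier(days, predictions_by_day, outcome):
    """Average daily Brier score over the life of a question.

    days: ordered day labels the question was open.
    predictions_by_day: {day: prediction} for days on which the user
    predicted; each prediction carries forward until updated.
    Only days with an active prediction count toward the average.
    """
    active, daily = None, []
    for day in days:
        active = predictions_by_day.get(day, active)  # carry forward
        if active is not None:
            daily.append(brier(active, outcome))
    return sum(daily) / len(daily)

# Predict 70/30 on day 2, update to 90/10 on day 4; "yes" occurs.
score = question_brier([1, 2, 3, 4, 5], {2: [70, 30], 4: [90, 10]}, 0)
print(round(score, 3))  # 0.1 -- days 1 has no prediction and is skipped
```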

The Brier Score listed in large font near the top of your profile page is the average of all of your questions’ Brier scores.

Median Score: The Median score is simply the median of all Brier scores from all users with an active prediction on a question on a given day (i.e., predictions made on or before that day). We calculate a Median score for each day that a question is open, and the Median score reported on the profile page is the average Median score for those days when you had an active prediction. We also report the average across all questions on which you made predictions (in parentheses under your overall Brier score).
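A minimal sketch of that computation, assuming we already have each user’s daily Brier scores (the data shapes here are illustrative, not iPredikt’s actual API):

```python
from statistics import median

def median_score(daily_user_briers, my_active_days):
    """Average of the crowd's daily median Brier over my active days.

    daily_user_briers: {day: [Brier scores of all users active that day]}.
    my_active_days: days on which *I* had an active prediction.
    """
    daily_medians = {day: median(b) for day, b in daily_user_briers.items()}
    mine = [daily_medians[d] for d in my_active_days]
    return sum(mine) / len(mine)

crowd = {1: [0.1, 0.5, 0.3], 2: [0.2, 0.4, 0.6], 3: [0.1, 0.2, 0.3]}
# I was active on days 2 and 3; daily medians are 0.4 and 0.2.
print(round(median_score(crowd, [2, 3]), 2))  # 0.3
```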

Accuracy Score: The Accuracy Score is how we quantify how much more or less accurate you were than the crowd. It’s what we use to determine your position on leaderboards for Challenges and individual questions.

To calculate your Accuracy Score for a single question, we take your average daily Brier score and subtract the average Median daily Brier score of the crowd. Then, we multiply the difference by your Participation Rate, which is the percentage of possible days on which you had an active prediction. That means negative scores indicate you were more accurate than the crowd, and positive scores indicate you were less accurate than the crowd (on average).
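Putting the formula above into code (a sketch; the inputs are the averages described in the text):

```python
def accuracy_score(user_avg_brier, crowd_avg_median_brier,
                   active_days, possible_days):
    """Accuracy Score for one question.

    (User's average daily Brier minus the crowd's average daily Median
    Brier), multiplied by the Participation Rate. Negative values mean
    you were more accurate than the crowd.
    """
    participation_rate = active_days / possible_days
    return (user_avg_brier - crowd_avg_median_brier) * participation_rate

# Active on 40 of 50 possible days, beating the crowd's median:
print(round(accuracy_score(0.10, 0.18, 40, 50), 3))  # -0.064
```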

For Challenges, we calculate your Accuracy Score for each question and add them together to calculate your cumulative Accuracy Score. On questions where you don’t make a prediction, your Accuracy Score is 0, so you aren’t penalized for skipping a question.

You can watch a short video about prediction scoring here: http://goodjudgment.io/Training/KeepingScore/index.html

5. What do the graphs show?

The Consensus Trend graph reflects the consensus of the most recent 40% of predictions. This is done so that the consensus prediction is not overly influenced by outlier predictions but still reflects the most recent wisdom of the crowd. This prevents the trend from jumping around even when one or more users make predictions that differ sharply from it.
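A sketch of that windowing, with one caveat: the FAQ says the trend reflects the most recent 40% of predictions but does not name the aggregation function, so taking the median below is our assumption:

```python
from statistics import median

def consensus(predictions):
    """Consensus over the most recent 40% of predictions on a question.

    predictions: probabilities (percent) for one answer option, ordered
    oldest to newest. Restricting to the newest 40% damps the influence
    of stale or outlier predictions while tracking recent opinion.
    """
    k = max(1, round(len(predictions) * 0.4))  # newest 40%, at least one
    return median(predictions[-k:])            # aggregation: our assumption

# Ten predictions; only the newest four [58, 61, 59, 60] count:
print(consensus([20, 25, 30, 60, 62, 95, 58, 61, 59, 60]))  # 59.5
```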

6. What is the relation between the Consensus Trend graph and Accuracy Score?

Because the Consensus Trend graph reflects only the most recent 40% of predictions rather than the median of all predictions on each day, it might lag behind a little. The Accuracy Score, on the other hand, is based only on the median Brier score of all active predictions on each day. The purpose of the graph is to provide an informative estimate of the general consensus.

7. What is the "ordered categorical scoring rule?"

Some prediction questions require the assignment of probabilities across answer options that are arranged in a specific order. An example would be “Runs scored by a batsman in a tournament”, where the answer options are ranges of runs arranged in increasing order.

Our usual Brier scoring rule does not consider the order of the answer options and therefore gives no credit for “near-misses.” As a result, the usual rule treats a user whose prediction is “wrong” by a rounding error as no more accurate than a user whose prediction is off by an order of magnitude.

To address this issue, we have adopted a special “ordered categorical scoring rule” for questions with multiple answer options that are arranged in a special order.
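The FAQ does not give iPredikt’s exact formula, so treat the following as an illustrative assumption only. One common construction of an ordered categorical rule (used in Good Judgment Project-style scoring) splits the ordered options into every cumulative two-way partition, scores each split as a binary Brier question, and averages:

```python
def ordered_brier(prediction, outcome_index):
    """Illustrative ordered categorical rule (an assumption, not
    necessarily iPredikt's exact formula). Near-misses score better
    than far misses because they fall on the correct side of more
    cumulative splits.
    """
    probs = [p / 100 for p in prediction]
    n = len(probs)
    scores = []
    for cut in range(1, n):                      # split after option cut-1
        p_low = sum(probs[:cut])                 # predicted P(outcome below cut)
        t_low = 1 if outcome_index < cut else 0  # did outcome fall below cut?
        # Binary Brier score for this two-way split.
        scores.append((t_low - p_low) ** 2 + ((1 - t_low) - (1 - p_low)) ** 2)
    return sum(scores) / len(scores)

# A near-miss (mass on the adjacent option) beats a far miss:
near = ordered_brier([0, 80, 20, 0], 2)  # predicted option B, answer was C
far = ordered_brier([80, 0, 20, 0], 2)   # predicted option A, answer was C
print(near < far)  # True
```

Under the plain Brier rule, both of these predictions would receive the same score; the ordered rule rewards the one that was closer.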

8. What are "Conditional" questions?

Conditional questions ask you to make predictions on whether an event will occur IF the condition presented occurs. If the condition does not occur, the question will be voided and not scored.

9. Can I withdraw from a question?

We do not allow a user to delete a prediction or withdraw from a question; this prevents a user from withdrawing or deleting their prediction once it becomes clear that they will receive a bad score. If you unfollow a question on which you’ve made a prediction, it does not affect your score – it only affects your notifications and where you can find the question on the site.

10. How can I improve my predictions?

If you're interested in learning some of these strategies, we recommend reading “Superforecasting: The Art and Science of Prediction” by Philip E. Tetlock and Dan Gardner.

11. Can I suggest a question for the site?

To suggest a question, you can email us at [email protected].
