The GMC Jam Suggestions Topic

Alice

Darts addict
Forum Staff
Moderator
@Toque Actually, the GM48 mentioned by @Selek is a separate competition. I think it originated on the GameMaker subreddit, but I'm not sure.

At any rate, these competitions are independent.
You are free to participate in both. The GMC Jam timeframe has been helpfully shifted so that the two competitions alternate (a given GM48 run ends up roughly halfway between two GMC Jams, and vice versa).
At the same time, note that we don't host GM48; this forum section is dedicated to the GMC Jam only. I guess there's nothing wrong with making a Community Chat topic about the upcoming GM48, though.
 

Toque

Member
@Toque Actually, the GM48 mentioned by @Selek is a separate competition. I think it originated on the GameMaker subreddit, but I'm not sure.

At any rate, these competitions are independent.
You are free to participate in both. The GMC Jam timeframe has been helpfully shifted so that the two competitions alternate (a given GM48 run ends up roughly halfway between two GMC Jams, and vice versa).
At the same time, note that we don't host GM48; this forum section is dedicated to the GMC Jam only. I guess there's nothing wrong with making a Community Chat topic about the upcoming GM48, though.
Thanks for the clarification!!! Sorry for any confusion.
 

Mercerenies

Member
Do you mean jam 38? Or am I missing something?
Seems like we just finished GMC Jam 37 and everyone is taking a Jam break. Things heat up a couple weeks before the next jam. I think there are between 30 and 60 entries. "Is it worth it?" Yeah, it's tons of fun if you like doing jams.

There is one more this year. You can look at the last Jam posting and look at the games. Look at the posts. Get a feel for what it's about. You could always give it a shot. It's a challenge, but it's rewarding and fun.

Did you have specific questions that I haven't answered? I usually finish top three so you should take everything I say as fact........ Everybody fights to be on my team. Nintendo has been scouting me. I'm holding out for more money. You can name-drop "Toque" if you need anything around the jam people. Hahaha

But seriously ask lots of questions if you have any.
Toque.
I believe Selek is asking about another GM-related jam. I don't know anything about it really, but I have heard people talk about it at least.

EDIT: That'll teach me to refresh the page before responding. Now I feel dumb 😓
 

kris24

Member
Is there a schedule somewhere that says when the next GMC jam will be? Going by last year it looks like the end of November maybe, but I'm wondering if there's somewhere I can find out exactly when it will be so I can set a reminder for it.

Reading through the pages, I just want to say that GMC Jam 38 was my first one in this community (I've done GM48 a number of times now) and I really appreciated how it is structured - the time schedule was nice for me, personally. A little bit more time than GM48 to prepare and polish before uploading, but not so much time that I feel like I'm "missing out" because I have to work during potential jam time. I also really liked the community focus on feedback and comments; the encouragement to play, rate, and leave a note for every game is awesome. @Siolfor the Jackal taking the time to stream every game was especially amazing and a very useful resource for every participant. With more entries it would be impossible, or at least very difficult, to do all that, so while the number of entries seems to have come up as a concern, I honestly think 40-50 entries is a perfect amount for this kind of jam, and even if there were fewer sometimes I don't think it's something to worry about too much.

I think both the GMC Jam and GM48 are really beneficial for the GameMaker community, so if there's one suggestion I'd have, it's that both jams cooperate to at least mention the existence of the other if possible. Obviously, doing 8 jams a year is probably too much for most people, but it could be that sometimes someone's schedule lines up better with one or the other. GM48 has also had worries about participation in the past, I think (they had 60 or so in this last one, so maybe it's fine now), so it can't hurt to get the word out to all the different GM communities. I found out about this jam from the post that someone made on reddit, so maybe there's already an effort for that.

Looking forward to the next one!

EDIT: And of course right after posting this I find the dates for the next one - Nov 26-30. D'oh! Unfortunately I probably won't be able to join as I won't be at home, but oh well, there's always the next one!
 
I've been doing a monthly vgm remix contest recently, and I've been talking to the organisers a bit in regards to voting and ranking.
They actually do this neat thing where you get an extra point for voting. I was thinking something like that could maybe be applied here to encourage more voting from participants? I realise though that there is potential for it to be abused, but I thought maybe it was worth bringing up and discussing.
 

Evanski

Raccoon Lord
Forum Staff
Moderator
I've been doing a monthly vgm remix contest recently, and I've been talking to the organisers a bit in regards to voting and ranking.
They actually do this neat thing where you get an extra point for voting. I was thinking something like that could maybe be applied here to encourage more voting from participants? I realise though that there is potential for it to be abused, but I thought maybe it was worth bringing up and discussing.
honorary reward for best voter? though I think there's something like that already

@Siolfor the Jackal taking the time to stream every game was especially amazing and a very useful resource for every participant.
I would like to say @Siolfor the Jackal inspired me to stream the jam games this time around, so props to them for that
 

ghandpivot

Member
No what I mean is voting could contribute to your rank potentially.
I probably say this too often, but being punished for voting is madness. You should be compensated at the very least, or preferably rewarded, perhaps even heavily. A free first place for everyone who casts their votes would drive up the number of plays and increase interest and participation in the jams.
 

GameDevDan

Former Jam Host
Moderator
GMC Elder
I can think of the following reasons not to implement a "points awarded for voting" system:
  • The whole point of the contest is to see who made the better game. Bonus points for voting potentially pushes you above other competitors using a criterion entirely independent of how good your game actually is.
  • How do you deal fairly with team entries? Bonus point for all team members who review, a third of a point each, only the first member gets the bonus, they only get a bonus if ALL team members vote? Creates a bit of a headache.
  • I think if we examined all 38 jams so far the impact of developers not voting on the final placings is probably minimal.
  • It will possibly encourage bad voting behaviour. E.g. someone who doesn't usually vote because they can't be bothered may vote completely randomly just to get the bonus point.
  • What counts as a valid vote for the purposes of obtaining bonus points? If we ever get back to the stage where jams have ~80 entries are we saying people need to rank all 80 to get a bonus? Many people don't have that much spare time.
This is not me saying "no" - I am merely the host, a vessel into which the community pours some rules and jam comes out. Just thought I'd lay out some of the objections/obstacles.

honorary reward for best voter? though I think there's something like that already
GMC Jams 19 through 31 did have a "best reviewer" reward, yes. I stopped doing them when I returned to hosting the jam @ Jam 32, mostly because I didn't have the spare time to commit to judging the reviewers.
 

ghandpivot

Member
To address your points:

I think if we examined all 38 jams so far the impact of developers not voting on the final placings is probably minimal.
This argument kind of undermines the rest of them. If the bonus points don't really matter competition-wise, then giving them for fairness and to stimulate more people to vote wouldn't be a problem. There is something fundamentally wrong with being punished for going out of your way to help the community, and if fixing that doesn't really skew the stats then I'm even more for it?

How do you deal fairly with team entries?
A game gets +1 if its creator has cast their votes. This means that if a team member votes, the game gets its bonus point. If more vote, it doesn't change the rewards. If anything, the way it works now, if all 3 team members vote, their game is heavily disadvantaged.

What counts as a valid vote for the purposes of obtaining bonus points?
A vote for your top 10. This number is arbitrary but seems fair?

The whole point of the contest is to see who made the better game. Bonus points for voting potentially pushes you above other competitors using a criterion entirely independent of how good your game actually is.
Maybe, but that goes both ways. If the creator of the best game in the jam were to vote under the current system, it would be equal to receiving a -1 (or whatever the opposite of first place is), as they end up last in their own vote even though their game is the best. To me, limited by my basic mathematical education, the optimal way to make sure the best game wins seems to be to at least prevent the voters from being disadvantaged?

It will possibly encourage bad voting behaviour.
Yes. Probably not a lot, but a few. By not overcompensating the vote, this will not matter. By overcompensating, it can be an issue, but I doubt it'd be a big problem.
 

Alice

Darts addict
Forum Staff
Moderator
I think the question to ask here is:
a) whether someone wanted to vote/review entries, but didn't do it in order to avoid a disadvantage
b) whether someone who didn't really want to vote/review entries would do that in order to get an extra advantage

I expect people from group a) would give some moderate-to-high quality feedback - reviewing is something they wanted to do anyway. So if there are people like that, then an incentive could indeed result in some extra quality feedback.

When it comes to people from group b), I'd expect them to give low-to-high quality feedback - some might just want to grab the extra points, while others might decide that if they do it, it's worth doing right. So in that case the incentive could be hit-and-miss - we could end up with more low-effort votes, with some people not even playing through all the entries* or playing them very briefly**, and/or we could get a few more sets of high-quality reviews.

There's also a risk that some people - who can commit to making a Jam entry but otherwise don't have enough time to play and review other entries afterwards - might feel demotivated, because seemingly lower-quality entries would end up higher because of the voting bonus. Note: it depends on the strength of the bonus; e.g. a 1st-place worth of points could skyrocket some middle-grade participant by a few ranks.
It could be even worse if someone did try to play and review each entry but only managed to rank about half of them. Then they're either frustrated that the lack of time cost them the voting bonus, or they succumb to temptation and submit an incomplete-playthrough vote*.

Then there's the matter of intrinsic vs extrinsic motivation - right now people mostly vote because of their intrinsic motivation (they want to play through the games and provide feedback). Adding an incentive risks current voters not getting as much fun from the experience as before, because in their mind it would become a means to getting a higher score rather than a fun experience in and of itself (it's weird, but the human mind can work this way).

I'm not saying "Absolutely not", but I'm also not certain introducing the reward would result in higher quality votes.
One option is to give it a test run during one of the subsequent Jams (not the nearest, though, because there'll be plenty going on already) and see how it affects votes. It requires making some design decisions (like Dan's aforementioned issue with teams).
Before that, it's worth looking into other community-voted competitions like the GMC Jam, to see how they address the voting participant's disadvantage (if they address it at all).
It would also help if many other Jammers gave their input. E.g. asking in a poll:
- "Do you vote in GMC Jam?" with answers: "Yes if possible", "No, because it puts me at disadvantage", "No, because I have no time", "No, because I don't want to"
- "Would you vote in GMC Jam if voters games got ranking bonus?" with answers: "No, for the same reason", "No, because I don't like this system", "Yes, but not for reward", "Yes, because it gives me an advantage"

I guess one takeaway from all that is that incentives are tricky, and might or might not give the results we want. ^^'

*As I see it, the reason we don't have a voting system debate every single Jam anymore is that most voters (if not all) actually do play through each entry. Otherwise, we'd keep seeing anomalies where some consistently lowest-ranked entry ends up above some others, because it got ranked 10th by a person who played 10 games and 15th by a person who played 15 games, while other entries ended up being played more rarely. Then again, maybe with the convenience of the Jam player even low-effort votes would cover every game?

**I don't have anything in particular against people playing a game for only five minutes or so, especially since it's still quite a commitment when multiplied by the number of entries. However, as someone who tends to make longer and not-as-flashy entries, I'd rather avoid breaking the current balance between people who are thorough in their playthroughs - playing most games to conclusion if there's one - and those who get the gist of the game based on 5-10 minutes of gameplay and base their ranking on that.
 

The M

Member
If I'm not mistaken, adding a significant bonus to low-tier entries would be a huge lift compared to neighboring entries that don't get one, while a small bonus to a top-ranking entry would be completely insignificant. Perhaps adding a bonus of 1/X then, where X is the rank of your entry (basically saying that those who didn't vote would have voted the same as the current average)? With that said, I don't really care either way, and I feel like most people with high-scoring games already vote themselves, making it a non-issue, but I could be wrong.
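To put some numbers on that (my own back-of-envelope sketch, assuming the 1/(rank + 1) per-vote scoring discussed later in the thread, and 20 voters who all agree):
Code:
# Back-of-envelope: a flat +1 bonus dwarfs low-tier scores but barely
# moves top entries, while a 1/X bonus stays close to the weight of
# one extra "average" voter.
voters = 20
for rank in (1, 10, 40):
    score = voters / (rank + 1)       # entry placed at `rank` by all voters
    flat, scaled = 1.0, 1 / rank      # flat bonus vs proposed 1/X bonus
    print(f"rank {rank:2}: score {score:5.2f}, "
          f"flat bonus = {flat / score:.0%} of score, 1/X = {scaled / score:.0%}")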
 

GMWolf

aka fel666
I think a one-point difference is not significant if you consider that some entries are more likely to be played if they are made by more well-known GMC users.

I wonder what the stats are, do games usually get roughly the same number of votes each?
 

Alice

Darts addict
Forum Staff
Moderator
Perhaps adding a bonus of 1/X then, where X is the rank of your entry
That... actually does sound like a good estimation. It's pretty close to you ranking your own entry at Nth place, where N is the place everyone's raw votes gave (including your vote).

To consider some properties:
- if everyone got the voting bonus based on their raw rank, then the resulting ranking would stay the same
- if you vote, then entries you ranked above your raw rank will get further ahead of yours, while entries you ranked below your raw rank will fall further behind yours
- moreover, if you vote, your raw rank stays the same or becomes lower; you can't increase your own raw rank
- with voting potentially decreasing your raw rank, the system might still be biased towards "punishing" rather than "rewarding" voting (but not as much as it is now)

So yeah, with the design goal being "own voting adding neither advantage nor disadvantage" (rather than "rewarding the voting"), I think the proposed formula comes pretty close, regardless of whether the voter tends to rank higher or lower. It doesn't take much effort to apply raw ranks in the spreadsheet, either. Well thought out, @The M
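To illustrate the mechanics, here's a throwaway sketch (not the actual tally spreadsheet - the data shapes are made up):
Code:
# Throwaway sketch of the proposed bonus. Each vote lists entries from
# 1st place down; a rank-R placement contributes 1/(R + 1) points, and
# each voting author adds 1/(raw rank + 1) to their own entry.
from collections import defaultdict

def tally_with_bonus(votes, authors):
    # votes: {voter: [entry ranked 1st, entry ranked 2nd, ...]}
    # authors: {entry: set of author names}
    base = defaultdict(float)
    for ranking in votes.values():
        for rank, entry in enumerate(ranking, start=1):
            base[entry] += 1 / (rank + 1)
    order = sorted(base, key=base.get, reverse=True)  # raw ranking, best first
    raw_rank = {entry: place for place, entry in enumerate(order, start=1)}
    total = dict(base)
    for entry, team in authors.items():
        for member in team:
            if member in votes and entry in raw_rank:
                total[entry] += 1 / (raw_rank[entry] + 1)
    return total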

@GameDevDan What do you think?
With the voting being more neutral for the voter's overall score, I think it actually does a better job of having the votes reflect which game is better (as opposed to the current system significantly punishing voters, or a 1st-rank-worth bonus significantly rewarding them).
Also, non-rewarding voting mitigates the potential issue of people adding low-quality votes to rank higher, while non-punishing voting mitigates the potential issue of people refraining from voting so that they aren't ranked lower.
 

Evanski

Raccoon Lord
Forum Staff
Moderator
while im thinking about it again, could we get a patch for the jam player? theme has been missing from the menu for the past 3 jams lmao
 

GameDevDan

Former Jam Host
Moderator
GMC Elder
This argument kind of undermines the rest of them. If the bonus points don't really matter competition-wise, then giving them for fairness and to stimulate more people to vote wouldn't be a problem.
That's not quite what I meant - I meant the current situation (doing nothing) doesn't affect the results that badly. Whereas giving reviewers bonus points potentially would swing the results, and has the added downside of being done on purpose. (Whereas I honestly don't think people choose not to vote to get an advantage - they're just busy or can't be bothered).

You make some good points about how we would go about teams & the cut off point for ranking entries (top 10 seems fair).


@GameDevDan What do you think?
You absolutely make some good points and if anyone can think of a way to make it fair I'm sure you can. I'd still be opposed to any system like this for a couple of "principle" based reasons...

- The voting should be as simple and transparent as possible while still being fair. I think we're in danger of overcomplicating something that's supposed to be fun :D
- IMO the voting system should be pure. Games should get points for being good or bad, not because their creator has more free time to vote.

And I say this as someone who would, in the current jam, definitely benefit from the bonus points since I've posted my reviews already lol.

Again though, it's not up to me. If we wanted to do a "here are the cases FOR and AGAINST" topic with a poll attached I'd do whatever people wanted. Although, like you said, NOT for GMC Jam 40. Big sneaky plans for that one and they don't involve confusing bonus points :p


while im thinking about it again, could we get a patch for the jam player? theme has been missing from the menu for the past 3 jams lmao
This is probably my fault. I'm sure there's something I'm supposed to edit to get it to show up... Oopsie.
 

HayManMarc

Member
I agree with Dan. I used to join in on these voting discussions long ago and argue for and against things. Now, the simple system in place seems fair enough, with results that seem correct. Reviews are fun to get, but honestly, after about 3 or 4 reviews, you should have a good grasp on how your game was generally received. I don't see any benefit of coercing people to review. When folks review because they want to, out of the goodness of their heart and with a sense of community, a coercive technique used to make them review starts to feel subversive. I say keep the GMC Jam free and open and simple.
 

Alice

Darts addict
Forum Staff
Moderator
- IMO the voting system should be pure. Games should get points for being good or bad, not because their creator has more free time to vote.
The current voting system is impure, since a game loses points - in relation to other games - because its creator has more free time to vote.
The proposed system is intended to reduce that impurity, making the act of voting more or less neutral.
I don't see any benefit of coercing people to review.
Again - the proposed system is not meant to reward voting, but to not punish it. The 1/(raw rank+1) bonus points are meant to reduce the disadvantage one gets from voting, given that they can't rank their own entry.
If this does get introduced, that's also how it should be presented in the voting system description - as a means to balance out the voting penalty, rather than a reward.

Another matter altogether is that there are conflicting claims about whether the current system results in a tangible penalty for voting and whether the proposed system would push games created by reviewers ahead of those whose creators didn't review. Therefore, I suggest the following experiment:
- get the results for all posted votes (overall results)
- take each entry whose creator posted a vote
- for each such entry, calculate the results without its creator's vote (adjusted results)
- if the entry ranks lower in overall results than in adjusted results, it means voting penalised the voter's entry
- if the entry ranks higher in overall results than in adjusted results, it means voting rewarded the voter's entry
- with that, we can observe how frequently and how much entries are rewarded or penalised by a specific system

We can grab results from previous Jams to calculate these and see how the current system and the 1/(raw rank+1) bonus perform. Ideally, voting should neither reward nor penalise the voter's entry, or at least reward about as frequently as it penalises.
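For concreteness, here's a rough sketch of that comparison (made-up data shapes - the real tally lives in a spreadsheet):
Code:
# Rough sketch of the experiment. A positive effect means voting
# penalised the voter's own entry; a negative one means it rewarded it.
def rank_entries(votes):
    # votes: {voter: [entry ranked 1st, entry ranked 2nd, ...]}
    scores = {}
    for ranking in votes.values():
        for rank, entry in enumerate(ranking, start=1):
            scores[entry] = scores.get(entry, 0.0) + 1 / (rank + 1)
    order = sorted(scores, key=scores.get, reverse=True)
    return {entry: place for place, entry in enumerate(order, start=1)}

def voting_effect(votes, voter, entry):
    overall = rank_entries(votes)                          # overall results
    adjusted = rank_entries({v: r for v, r in votes.items() if v != voter})
    return overall[entry] - adjusted[entry]                # > 0: penalised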
 

GameDevDan

Former Jam Host
Moderator
GMC Elder
The current voting system is impure, since a game loses points - in relation to other games - because its creator has more free time to vote.
The proposed system is intended to reduce that impurity, making the act of voting more or less neutral.
This is a philosophical point of difference where there's no right answer but I disagree with this take on it.

The current voting system: reviewers rank the games in the order they liked/disliked them most and we count it. That's pure, we don't mess with peoples' votes before we reveal the result. Any negative consequence of a developer choosing to vote is unintentional.

The proposed one: we actively interfere in the results to give bonus points to people because they voted, regardless of how good/bad their game is. This is not pure because we are messing with the votes based on criteria other than the game's actual worthiness.

That's how they're different in my mind.

My overall take is: it's not the end of the world to me if we implement this, it just feels a bit wrong. I've participated in GM48 once, and you are forced to vote for X amount of games there or your entry doesn't count at all. So I know this sort of system does work (despite all the potential hang-ups); I'd just personally vote "no" if we did a poll on it.
 

Alice

Darts addict
Forum Staff
Moderator
The proposed one: we actively interfere in the results to give bonus points to people because they voted, regardless of how good/bad their game is.
Qualitatively speaking, yes - the fact of adding bonus points applies whenever someone votes, regardless of their entry quality.
Quantitatively speaking, no - the amount of bonus points depends on the game's performance in raw rankings, which is meant to approximate how good/bad the game is.

The current system - in terms of points assigned - is like the participating voter communicating:
These top entries have highest quality, entries further down the ranking have lower quality, my entry is the worst, I'm a pathetic creature.
Whether the negative consequence is intentional or not doesn't matter - the end result is that's the message conveyed through the vote.

The proposed system would instead have a participating voter (roughly) communicate:
These top entries have highest quality, entries further down the ranking have lower quality, I refrain from judging my entry and agree with everyone else's opinion on it.

The ideal "agree-with-everyone-else" system would involve tallying all other people's votes and ranking the voter's entry at the resulting rank (pushing other entries further down), but applying this system would be too complex, and 1/(raw rank+1) would likely yield extremely similar if not identical results.

But yeah - as I see it, the reason why voters can't rank their own entries is based around the idea of "you are not supposed to judge your own entry".
In this regard, the voting message of "I agree with everyone else" is closer to that intent than "my entry is the worst".
That's also why I feel that - despite the slightly larger mathematical complexity - the proposed system comes closer to the vote's intended message than the current one.

(whether the results actually vary significantly between systems or not is yet to be found out)
 

Toque

Member
Seems like lots of reviews now already?

Adds more work for the jam organizer? Seems like a lot of time for them already.

If Dan snaps, rage quits, and declares Toque the winner, causing the whole thing to burn to the ground, I could be persuaded.

Otherwise I'm good either way.
 

ghandpivot

Member
Philosophical differences are the best ones.
The current voting system isn't necessarily pure. We enforce a rule where people who vote put their own game in the last place of their votes. We have chosen this system, and all the negative consequences are thus either chosen as a necessary evil or unexpected (which they're not).
The proposed voting system is that instead of us choosing to put the reviewer's game in the bottom, we choose to put the reviewer's game in another place based on what the community thinks as a whole.

To sum it up in my view:
Old system --> We have chosen where a reviewer's game is in their votes.
New system --> The community chooses where a reviewer's game is in their votes.

No less pure, and we do not mess with people's votes more than before, as they never had a say in the matter to begin with.
 

HyprBlu

Member
Speaking as someone who didn't cast votes last Jam: it's not because I was lazy or just didn't feel like it. I still played every game and still had winners picked out in my head, but the reason I struggled, and am still feeling uncomfortable, is that I hate being put in the position where I have to tell someone "I like your game least, you were my last place vote".

And yeah, I could just vote for my top 3, but then all the other entries are missing out on votes, which is even worse.

But what's helping me want to vote this time is that because I didn't vote last time, I caused every single entry to miss out on points and feedback that they could use to help them become better game developers, so I felt even more terrible than I will by voting someone into last place.

Not really a solution or even a point to my post, just thought I'd share the mindset of someone who participated but didn't end up voting
 

dadio

Potato Overlord
GMC Elder
Just popping on to say MEGA is very slow to download from... (45 mins to 1 hour for me here!)
It's very off-putting for those of us with very limited free time...
If there is any other service/upload site that could be used that would be faster, I'd much appreciate it!
(Just finished downloading now... hopefully can manage to play through and place before deadline.)
 

GameDevDan

Former Jam Host
Moderator
GMC Elder
Just popping on to say MEGA is very slow to download from... (45 mins to 1 hour for me here!)
It's very off-putting for those of us with very limited free time...
If there is any other service/upload site that could be used that would be faster, I'd much appreciate it!
(Just finished downloading now... hopefully can manage to play through and place before deadline.)
There aren't many websites that will let you upload 500MB and not murder the download speed in some way. Better advice would be for people to reduce their game size in the first place through optimisation... and failing that, unfortunately, maybe advise people with bad connections to pick & choose which games they want to play from the topic (I already did that this time round, fearing 500MB was a bit much).
 

Alice

Darts addict
Forum Staff
Moderator
Just going back to the topic of non-punishing voting entrants, because I just came up with a method so dead simple I can't believe no one came up with it before (or at least I don't remember it).
Definitely not for this Jam, but maybe for the next one?

So we know that there can be something like N votes.
Out of these N votes, X can belong to authors of the specific entry.
Thus, only N-X votes are eligible to rank this specific entry and contribute to its score.
With that in mind, it makes perfect sense to average each entry's score over the number of its eligible votes, rather than the number of all votes.

For example, with 20 votes:
- the entry whose author didn't vote would have average score of <BASE SCORE> / 20
- the entry with 1 voting author would have average score of <BASE SCORE> / (20-1) = <BASE SCORE> / 19
- the entry with 3 voting authors would have average score of <BASE SCORE> / (20-3) = <BASE SCORE> / 17

If we multiplied the averages by the number of total votes, it would effectively be as if each voting entrant added a score equal to the eligible-votes average. This is very much in line with the earlier philosophy of "the author ranks their entry like all the non-author voters", except using a much simpler method. If anything, it's less like authors ranking their own entry and more like adjusting the score for differences in the number of eligible votes.

The general formula for each entry score would be:
<BASE SCORE> * <TOTAL VOTES> / <ELIGIBLE VOTES>
or more specifically:
<BASE SCORE> * <TOTAL VOTES> / (<TOTAL VOTES> - <VOTING AUTHORS>)
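As a quick illustration (a throwaway sketch, not the actual tally spreadsheet):
Code:
# Scale the base score by total votes over eligible votes.
def total_score(base_score, total_votes, voting_authors):
    eligible_votes = total_votes - voting_authors
    return base_score * total_votes / eligible_votes

# With 20 votes and a base score of 5.0:
print(total_score(5.0, 20, 0))  # 5.0   - no voting author
print(total_score(5.0, 20, 1))  # ~5.26 - 1 voting author
print(total_score(5.0, 20, 3))  # ~5.88 - 3 voting authors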

----------------------------------------

Advantages:

1. Prevents punishing entries for their authors voting.

2. Based on a common-sense principle (total score is relative to number of eligible votes that could potentially contribute to that score).

3. Simple to execute:
- only need to keep track of total votes and per-entry voting authors - it's just a single spreadsheet column, and pretty sparse at that
- the calculation is based on a basic division; no need to figure out where some entry would rank in unaffected ranking

4. Doesn't give disproportionate advantage to higher-ranked or lower-ranked entries (unlike e.g. a flat 1st place score boost).
If anything, the adjustment is as proportionate as it can possibly get.
 

Mightyjor

Member
Just going back to the topic of non-punishing voting entrants, because I just came up with a method so dead simple I can't believe no one came up with it before (or at least I don't remember it).
Definitely not for this Jam, but maybe for the next one?
[...]
I'm just going to assume the math all works here because it's late and I'm tired. That said, I love it!
 

Saurus

Member
I only yesterday properly looked at the new ranking algorithm and spotted what seems like a fairly large problem - if I've understood it correctly.
If I've misunderstood any part, please correct me so that everyone can be on the same page.
But this is how I see it, and I'd be interested to hear other people's thoughts:

Definitions
<BASE_SCORE> - the averaged score that the game received:
    sum of scores awarded to the game / number of votes the game received
<ELIGIBLE_VOTES> - the maximum number of votes a game could have received, which necessarily excludes the creator(s) of the game:
    number of participants who cast a vote for any game - number of team members (for the game in question) who cast a vote


In scenarios with 20 participants, entries with a Base Score of 5 would be calculated like so:

Team | Total # of Votes Cast | # of Votes Cast by the Team | Eligible Votes | Calculation | Observation
--- | --- | --- | --- | --- | ---
B | 19 | 0 (an entry whose author(s) didn't vote) | 19 - 0 = 19 | 5 / 19 = 0.263 | (Same score.)
A | 20 | 0 (an entry whose author(s) didn't vote) | 20 - 0 = 20 | 5 / 20 = 0.25 | Lowest score.
B | 20 | 1 (an entry with 1 voting author) | 20 - 1 = 19 | 5 / 19 = 0.263 | Final score boosted. (Same score.)
C | 20 | 2 (an entry with 2 voting authors) | 20 - 2 = 18 | 5 / 18 = 0.278 | Final score boosted further.
D | 20 | 3 (an entry with 3 voting authors) | 20 - 3 = 17 | 5 / 17 = 0.294 | Final score boosted even further.

All else being equal, there is no penalty for voting or not voting (see Team B). But teams with author(s) who cast a vote do gain an advantage over teams that cast fewer votes than them. Rather than increasing fairness, this seems to decrease it. If this scoring boost to larger teams is something we want to avoid, I don't see a way to fix the current algorithm. Dividing each game's <BASE_SCORE> by a constant (like the total number of games) seems like the only fair option, but dividing by a constant is no different from the raw average of sum of scores awarded to the game / number of votes the game received.

I haven't given much thought to solutions yet. 🤷‍♂️
 

Alice

Darts addict
Forum Staff
Moderator
First of all, about the problem being solved:
  • <BASE SCORE> is the sum of scores contributed by individual votes according to the 1/(rank + 1) formula; an entry gets 1/2 point for each 1st place, 1/3 point for each 2nd place, 1/4 point for each 3rd place, and so on
  • the authors can't vote for their own entries, so they can't increase their <BASE SCORE> in any way; at the same time, they will increase the base score of each entry they rank
  • additionally, some voters are unable to play all entries (e.g. because they don't have time); if they can't play an entry, they can't rank it and thus can't increase its <BASE SCORE>
  • overall, relying on <BASE SCORE> alone inherently hurts voters' own entries as well as entries some voters didn't manage to play; it's especially bad for voting entrants, since their entry is effectively punished for them taking part in voting and reviewing

The proposed solution is to adjust the <TOTAL SCORE> so it compensates entries which couldn't be ranked in some votes - originally counting as <VOTING AUTHORS> but then expanded to <UNJUDGED VOTES> to account for voters who were unable to play the entry but don't want to punish it.

The general gist of the compensation is the following:
  • if a voter was able to rank the entry, their vote counts towards <ELIGIBLE VOTES> for that entry, and also usually increases the entry's <BASE SCORE> (unless the voter deliberately refused to rank the entry)
  • if a voter couldn't rank the entry (because they made the entry or couldn't play it), the entry receives a <TOTAL SCORE> compensation equal to the average base score over its eligible votes (it's as if the voter agrees with the eligible votes in general regarding that entry)
  • thus, the <TOTAL SCORE> is a sum of <BASE SCORE> coming from eligible votes and <COMPENSATION SCORE> from unjudged votes

Note that whatever advantage the author gets from the compensation is diminished by their vote increasing competing entries' base scores (while the base score stays the same for the author's entry).

In the spoiler I wrote in more detail where <BASE SCORE> * <TOTAL VOTES> / (<TOTAL VOTES> - <VOTING AUTHORS>) formula came from (with voting authors now generalised to unjudged votes), but hopefully the points above should convey the core idea.
First, some basic definitions:
  • <BASE SCORE> - the sum of scores contributed by individual votes
  • <TOTAL VOTES> - the total number of valid votes in the Voting Topic
  • <UNJUDGED VOTES> - the number of votes that couldn't contribute to the given entry's base score
  • <ELIGIBLE VOTES> - the number of votes that, unlike unjudged votes, were able to contribute to the given entry's base score
  • <AVERAGE SCORE> - the average base score contributed by each eligible vote
  • <COMPENSATION SCORE> - the sum of compensations from the unjudged votes
  • <TOTAL SCORE> - the overall score used for ranking entries

With these definitions in place, we can identify the following relations:
Code:
<TOTAL VOTES> = <ELIGIBLE VOTES> + <UNJUDGED VOTES>
<AVERAGE SCORE> = <BASE SCORE> / <ELIGIBLE VOTES>
<COMPENSATION SCORE> = <UNJUDGED VOTES> * <AVERAGE SCORE>
<TOTAL SCORE> = <BASE SCORE> + <COMPENSATION SCORE>
Then we can calculate the general formula for <TOTAL SCORE>, given the <BASE SCORE>, <TOTAL VOTES> and <UNJUDGED VOTES>:
Code:
<TOTAL SCORE> =
    = <BASE SCORE> + <COMPENSATION SCORE>
    = <BASE SCORE> + <UNJUDGED VOTES> * <AVERAGE SCORE> =
    = <BASE SCORE> + <UNJUDGED VOTES> * <BASE SCORE> / <ELIGIBLE VOTES> =
    = <BASE SCORE> * (1 + <UNJUDGED VOTES> / <ELIGIBLE VOTES>) =
    = <BASE SCORE> * (<ELIGIBLE VOTES> + <UNJUDGED VOTES>) / <ELIGIBLE VOTES> =
    = <BASE SCORE> * <TOTAL VOTES> / <ELIGIBLE VOTES> =
    = <BASE SCORE> * <TOTAL VOTES> / (<TOTAL VOTES> - <UNJUDGED VOTES>)
Hopefully this clarifies where the formula came from.
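As a quick numeric sanity check (a throwaway script, not the Jam spreadsheet), the step-by-step relations and the closed form agree:
Code:
base_score, total_votes, unjudged = 5.0, 23, 3

eligible = total_votes - unjudged            # <ELIGIBLE VOTES> = 20
average = base_score / eligible              # <AVERAGE SCORE> = 0.25
compensation = unjudged * average            # <COMPENSATION SCORE> = 0.75
step_by_step = base_score + compensation     # <TOTAL SCORE> = 5.75

closed_form = base_score * total_votes / (total_votes - unjudged)
print(step_by_step, closed_form)             # 5.75 5.75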

@Saurus When it comes to your example, one thing that stands out to me is how you have a column for Total # of Participants which is never taken into account when tallying the vote.
Generally, suppose we have an entry with 3 authors which got a base score of 5.00 over 20 votes - none of which belonged to any of its authors.
Then the authors all cast their votes, increasing the Total # of Votes Cast to 23 (not 21 - it's three different votes!). The number of Eligible Votes stays the same (rather than decreasing to 18), because all the other people's votes didn't go anywhere.
According to my formula, the <TOTAL SCORE> would end up equal to 5.00 + 3 * 5.00 / 20 = 5.00 + 3 * 0.25 = 5.75. Which is indeed quite an increase, but it's balanced out by other entries getting their <BASE SCORE> increased from these votes. And if someone manages to get an average score of 0.25, it would mean they consistently ranked in the top 3, so I'd say a compensation this large is quite appropriate.

Finally, I ran a bit of a simulation on the GMC Jam 50 results, to get a better idea of how adding authors' votes affects their entries' ranking. In the spoiler is the table with the exact results.
First of all, some core concepts to understand:
  • New ranking - The current ranking system, based on compensated <TOTAL SCORE>
  • Old ranking - The previous ranking system, based on raw <BASE SCORE>
  • Eligible ranking - The ranking calculated if only the given entry's eligible votes counted; it can tell how a given entry would rank if the entry's score wasn't "distorted" by the authors' votes
  • Total ranking - The ranking calculated by taking all the votes into account

The above combine to form New Eligible ranking, New Total ranking (the overall Jam results), Old Eligible ranking and Old Total ranking.

Now, how to understand specific columns:
  • Entry - the name of the Jam entry
  • Unjudged - the number of unjudged votes for the given entry (usually same as the number of voting authors)
  • # New Eligible - the rank of the entry in the New Eligible ranking
  • # New Total - the rank of the entry in the New Total ranking
  • New Eligible vs Total - the change between the New Eligible and New Total rankings; it shows how the entry's authors' votes affected its rank (lower is better, i.e. the authors helped their entry)
  • # Old Eligible - the rank of the entry in the Old Eligible ranking
  • # Old Total - the rank of the entry in the Old Total ranking
  • Old Eligible vs Total - the change between the Old Eligible and Old Total rankings; it shows how the entry's authors' votes would affect its rank (lower is better, i.e. the authors helped their entry)
  • New vs Old Total - the change between New Total and Old Total rankings; it shows how switching back to the old system would affect overall rankings

Entry | Unjudged | # New Eligible | # New Total | New Eligible vs Total | # Old Eligible | # Old Total | Old Eligible vs Total | New vs Old Total
--- | --- | --- | --- | --- | --- | --- | --- | ---
Dunk Tank | 1 | 1 | 1 | =0 | 1 | 1 | =0 | =0
Tricky Shot | 1 | 2 | 2 | =0 | 2 | 2 | =0 | =0
Warlords of Mars | 1 | 3 | 3 | =0 | 3 | 3 | =0 | =0
Sniper Waifu | 0 | 4 | 4 | =0 | 4 | 4 | =0 | =0
Cloven Canyon | 1 | 5 | 5 | =0 | 5 | 5 | =0 | =0
Big Dawg Revengeance | 1 | 5 | 6 | +1 | 5 | 6 | +1 | =0
Bullseye Cola | 1 | 7 | 7 | =0 | 7 | 7 | =0 | =0
Comeback | 0 | 8 | 8 | =0 | 8 | 8 | =0 | =0
Tower Bouncer | 1 | 10 | 9 | -1 | 9 | 9 | =0 | =0
Siolfor's Super Jam Kart | 1 | 9 | 10 | +1 | 9 | 10 | +1 | =0
El Ojo Del Toro | 1 | 11 | 11 | =0 | 11 | 11 | =0 | =0
Return to Castle Meta | 1 | 13 | 12 | -1 | 11 | 12 | +1 | =0
Drunken Darts | 1 | 14 | 13 | -1 | 11 | 13 | +2 | =0
FROG FLINGA | 1 | 14 | 14 | =0 | 13 | 14 | +1 | =0
Party Pilot | 0 | 15 | 15 | =0 | 15 | 15 | =0 | =0
pidGÉON THE POOFESIONAL | 0 | 16 | 16 | =0 | 16 | 16 | =0 | =0
Sebastian the Bread | 1 | 17 | 17 | =0 | 17 | 17 | =0 | =0
Wisp Tag | 0 | 18 | 18 | =0 | 18 | 18 | =0 | =0
Raccoon Rampage | 3 | 19 | 19 | =0 | 19 | 19 | =0 | =0
Bullseye Bonanza | 2 | 20 | 20 | =0 | 19 | 22 | +3 | +2
Bowcabulary | 1 | 21 | 21 | =0 | 19 | 20 | +1 | -1
The Throw | 0 | 22 | 22 | =0 | 21 | 21 | =0 | -1
Bullseye Bash | 0 | 23 | 23 | =0 | 23 | 23 | =0 | =0
Defend the Bullseye | 2 | 24 | 24 | =0 | 24 | 25 | +1 | +1
Target Tommy | 0 | 25 | 25 | =0 | 24 | 24 | =0 | -1
U.F.O.O.D. | 1 | 25 | 26 | +1 | 25 | 26 | +1 | =0
Eirin's Hourai Elixir Hunt | 0 | 27 | 27 | =0 | 27 | 27 | =0 | =0
Jigazo SUPER | 1 | 27 | 28 | +1 | 27 | 28 | +1 | =0
BULLSEYE | 1 | 28 | 29 | +1 | 28 | 29 | +1 | =0
Rocket Brawl | 2 | 30 | 30 | =0 | 29 | 30 | +1 | =0
SpaceBIZ.startup | 3 | 31 | 31 | =0 | 29 | 31 | +2 | =0
Shootyshoots and Bangybangs | 0 | 32 | 32 | =0 | 32 | 32 | =0 | =0
Weak Point | 0 | 33 | 33 | =0 | 33 | 33 | =0 | =0
Shoot The Pigeon 5 | 1 | 34 | 34 | =0 | 34 | 34 | =0 | =0
TopQB | 0 | 35 | 35 | =0 | 35 | 35 | =0 | =0
Golfseye | 2 | 36 | 36 | =0 | 36 | 40 | +4 | +4
BULL'S EYE | 0 | 37 | 37 | =0 | 36 | 36 | =0 | -1
Mr. Penguin's Adventure | 0 | 38 | 38 | =0 | 37 | 37 | =0 | -1
Spinning Arrows | 0 | 39 | 39 | =0 | 38 | 38 | =0 | -1
Sugar Glider | 0 | 40 | 40 | =0 | 39 | 39 | =0 | -1
Color Roundup | 2 | 40 | 41 | +1 | 40 | 42 | +2 | +1
Pickle | 1 | 42 | 42 | =0 | 41 | 41 | =0 | -1
CrossBow | 1 | 44 | 43 | -1 | 41 | 44 | +3 | +1
Bull's I | 1 | 44 | 44 | =0 | 41 | 45 | +4 | +1
Free Sports Resort: Archery | 0 | 45 | 45 | =0 | 43 | 43 | =0 | -2
Essence of the King of Thorns | 3 | 48 | 46 | -2 | 47 | 49 | +2 | +3
Bullseye Skeet | 1 | 48 | 47 | -1 | 47 | 47 | =0 | =0
Hit the Bulls Eyes! | 0 | 48 | 48 | =0 | 46 | 46 | =0 | -2
Crick the Crown | 1 | 49 | 49 | =0 | 48 | 48 | =0 | -1
50 Darts | 0 | 50 | 50 | =0 | 50 | 50 | =0 | =0
The archery training | 0 | 51 | 51 | =0 | 51 | 51 | =0 | =0
Nano Medics Lite | 0 | 52 | 52 | =0 | 52 | 52 | =0 | =0
Building Bullseye | 0 | 53 | 53 | =0 | 53 | 53 | =0 | =0
Maseye | 0 | 54 | 54 | =0 | 54 | 54 | =0 | =0
Butt Eels | 1 | 55 | 55 | =0 | 55 | 55 | =0 | =0
The Phantom Gate | 5 | 56 | 56 | =0 | 55 | 57 | +2 | +1
Socky III | 2 | 57 | 57 | =0 | 56 | 56 | =0 | -1

Key takeaways:
  • under the current system based on <TOTAL SCORE>, most entries' rankings weren't affected by the additional authors' votes
    • for entries that were affected, it went both ways - sometimes the authors' votes improved the entry's standing, sometimes they made it worse (but never by more than 2 ranks)
  • under the old system based on <BASE SCORE>, lots of entries would still be unaffected by the extra authors' votes, but there are more changes
    • when an entry is affected by added authors' votes, it's always for the worse (which is to be expected, because the author doesn't increase their entry's base score while increasing it for competitors); in one case it was as bad as 4 ranks
  • overall, the old system compared to the new one elevates entries with fewer voting authors and punishes entries with more
    • promoting entries with fewer voting authors - together with semi-consistently worse ranking - means Jam entrants would be discouraged from voting and reviewing so as not to undermine their own entry, and that's terrible

Hopefully this shows that the current system doesn't really promote large teams voting to such a great degree (in fact, sometimes a voting author may shoot their entry in the foot) and that it works pretty well at not harming those teams, and thus not discouraging them from voting. ^^'
 

Saurus

Member
That's a really insightful walkthrough of all the moving parts in the voting system. Thank you sincerely, Alice, for going to such extraordinary lengths to explain how it all works. With all the information presented (and out of respect for the work you're putting into this) I'll have to postpone follow-up questions until I've been able to look at everything comprehensively. But I didn't want to wait that long before thanking you for your phenomenal efforts here.

And for anyone wondering why Alice referred to a Total # of Participants column in my example, which is no longer there, my post accidentally got submitted to the forum before I had made final revisions. In that draft I included the total number of participants, thinking it would clarify what was happening with Team B, but in the final draft I decided it was extraneous information and removed it.
 
I clicked the GMC Jam subforum. Scrolled down a little. Hey, hey, there's a new post in the suggestions thread I haven't read. Y'know, that makes me think, since the voting thing has finally become satisfactory for everyone after all these years, I could do well to poke a little fun and "suggest" changing the voting, ha ha ha ha ha. Anyway, let's see what's up.

the new ranking algorithm
Oh. Wow. LOL. XD
 

Alice

Darts addict
Forum Staff
Moderator
I've been wondering a bit about the Best Reviewer award. For reference, the current system works like so:
  • for each "Like" or "Laugh" reaction the reviewer's voting post receives, they get +1 score
  • for each "Love" reaction (smiling face with heart-eyes) the reviewer's voting post receives, they get +2 score
  • if the given reviewer makes multiple voting posts (e.g. because of hitting the max character count) and the same person reacts to several of this reviewer's posts, only the best-scoring reaction counts
  • the reaction scores are added up and the reviewer(s) with the highest reaction score wins

I was wondering about expanding it by allowing people to give Best Reviewer nominations, the same way they could give it in the past.
But while the earlier Best Reviewer award was based on the number of Best Reviewer nominations alone, the suggested system would instead count a nomination as a +5 score increase. More specifically:
  • each person can only nominate one Best Reviewer (a possible exception can be made for collective reviewing, e.g. by doing shared streams, even if the reviewers eventually make separate voting posts)
  • for each Best Reviewer nomination the reviewer gets +5 score
  • the Best Reviewer nomination overrides the reaction given to the reviewer

For example, if someone gives:
  • a "Like" reaction to Bob's post
  • a "Love" reaction to one of Charlie's posts, and a "Like" reaction to another of Charlie's posts
  • a "Love" reaction and Best Reviewer nomination to Dylan's post
Then:
  • Bob will get +1 score from that person
  • Charlie will get +2 score from that person ("Love" reaction overrides the "Like" reaction on another post)
  • Dylan will get +5 score from that person ("Best Reviewer" nomination overrides the "Love" reaction)

I feel this kind of Best Reviewer nomination system allows giving a significant boost to people who made reviews that are outstanding in one way or another, while still allowing the reaction score to help determine the winner in a more fine-grained way. What do you think?
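To make the override rules concrete, here's a throwaway sketch (the names and data shapes are made up):
Code:
# Per person, only their single strongest signal towards a reviewer
# counts (Like/Laugh = 1, Love = 2, nomination = 5); the best signals
# from all people are then summed.
SIGNAL_SCORE = {"like": 1, "laugh": 1, "love": 2, "nomination": 5}

def reviewer_score(signals):
    # signals: (person, signal) pairs across all the reviewer's voting posts
    best = {}
    for person, signal in signals:
        best[person] = max(best.get(person, 0), SIGNAL_SCORE[signal])
    return sum(best.values())

# Dylan's example above: a "Love" and a nomination from the same person,
# so only the nomination counts.
print(reviewer_score([("someone", "love"), ("someone", "nomination")]))  # 5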
 