We are currently having an issue with a few people entering chat to be nuisances, interrupting with nonsense, profanity, etc. Is there some way to report these users, or at least keep their names off the screen?
Oh, an eteRNAboss just appeared. Maybe that is the control. Thank you, boss – or maybe it’s another troll.
There is a mechanism to fix this – devs and certain long-time community members have the power to kick trolls off chat.
thanks, one of them took care of it shortly after
“devs and certain long-time community members have the power to kick trolls off chat.”
We could use one of these now. Lots of offensive language over the last 15 minutes, with several of them cluttering the chat window.
I was on at the same time as Hog. Is it possible to incorporate a “block” option on the chat if we are unable to turn the chat off? That way, people can continue to communicate and just turn that individual off.
I think we need more IRC options for players, similar to Foldit’s, where a player can choose to block another player’s chat from being displayed in their chat box.
This has been added to our issue tracker with ticket number #55.
You can check the issue at
You can check all player proposed issues at
Thanks for your idea!
The trolls are back and it is bad tonight.
I think you will need a better mechanism than just blocking a user name. The trolls come back many times under different user names; it is too easy for them to return to chat after they have been kicked out. Perhaps an email verification system, where a person cannot use chat until their email has been verified. I heard other suggestions to make new users complete at least one tutorial puzzle before allowing access to chat. Anyway, more needs to be done to stop this nuisance.
In my corner of the internet, we learned to ignore the trolls. If you start reacting to them, they know they’ve drawn blood. Don’t give them the satisfaction; they get bored, and eventually they leave. Alternatively, I don’t see why it wouldn’t be reasonable to make a reliable handful of non-staff members of EteRNA into moderators with the power to issue temporary or permanent account and IP bans to troublesome members.
Back in force tonight (Nov 8, 2012) – unfortunate.
Trolls are back. I like RedSpah’s idea: https://getsatisfaction.com/eternagam…
I have often wondered why EteRNA chat did not have user blocking and chat monitors to regulate the truly disruptive or offensive. Unfortunately there are always some who feel the need to disrupt, shock or offend and cannot manage themselves in a chat environment.
I think some tools to help the polite users are worth considering.
Just a thought, but I don’t think there are actual chat guidelines. Perhaps some should be drawn up?
HAVE A PLACE TO REPORT BAD BEHAVIOUR
Suggestion from Mat: What if the devs made a user account that players could send their issues to via the current message system, as a very quick starter, before building a dedicated report page/system?
I have compiled the top solutions suggested by players, some of which the devs are already working on.
To double check the boundaries of these ideas, earlier tonight I reached out to the community to ask for input, and I have compiled their responses accordingly. Due to the time-sensitive nature of this issue, and Jee’s generous efforts this weekend working on it, I have made an effort to outline these ideas right away for the devs’ benefit. However, further player input is still encouraged, since mine is but one perspective, even if I have sourced it from the team.
Everyone very much appreciates that the dev team has acted so quickly to escalate the priority of this concern, due to safety and security concerns in the community. Thank you!!!
Memorable quotes from the lively discussion illustrating the concern of the community:
“This place has become my home.”
“Player safety is a key aspect in making EteRNA more friendly.”
“We seriously need to get security in place, because we won’t be able to guarantee these kids to be safe if nothing is done.”
I’d like to emphasize that we have a very egalitarian community that has a very high tolerance and patience for all community members; therefore, the general desire is not to go penalizing or banning people willy-nilly. Especially for any feature involving a potential ban, the thinking is to have a very high threshold, to allow for normal and forgiving player interaction. The community feels strongly that everyone deserves a chance to learn and grow.
Players feel that the majority of behavior correction should be accomplished through being pointed to the Chat Guidelines, with penalty and banning being a last resort for clear violation of the guidelines, whatever those may be.
An excellent idea to extend tolerance was suggested: before implementing a permanent ban, we sandbox violations into a ‘penalty box’ timeout. In this system, the penalty begins very low and forgiving, and incrementally increases as the severity and/or frequency of offenses increases. For example, posting 10 lines in a row, or 10 lines within a few seconds, could result in a 5-minute ban: not enough to shut down a legitimate player in case of a misunderstanding, but enough to give a potential violator time to read the Chat Guidelines. Only then would we escalate from there.
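The escalation ladder described above could be sketched like this. The durations and function name are illustrative assumptions, not an actual EteRNA implementation:

```python
# Sketch of the escalating 'penalty box' idea: the first offense earns a
# short, forgiving timeout, and repeat offenses climb the ladder.
# Durations are illustrative assumptions.

ESCALATION = [5 * 60, 30 * 60, 24 * 3600]  # mute durations in seconds

def penalty_duration(prior_offenses: int) -> int:
    """Return the mute duration for a player's next offense.

    Beyond the last rung, a moderator would review for a permanent
    ban rather than auto-escalating forever.
    """
    index = min(prior_offenses, len(ESCALATION) - 1)
    return ESCALATION[index]
```

A first offense would cost 5 minutes, a second 30 minutes, and anything beyond that a day, keeping the threshold forgiving for misunderstandings while still deterring repeat violators.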
Memorable quotes from the lively discussion illustrating the tolerance of the community:
“There ain’t no devil, there’s just G@d when he’s drunk”
"I hope we don’t have a survivor style EteRNA going, I’d be the first voted off the island "
1) Chat Guidelines / Code of Conduct
A. Players would like the Guidelines posted more prominently, so that there is some authority and definition to easily point to when informing and correcting behavior.
B. It has been especially noted that Guidelines are both to protect and educate the many young users of our site, who may just be learning how to interact on the internet safely and appropriately. This is not a comprehensive list, just a few examples pulled from the community:
C. In addition to whatever protocol is deemed appropriate for addressing emergency concerns which may arise on site, if your team approves it, then it may help to point to third-party resources intended for the quick response to emergency which individual players and devs are not in a position to provide.
D. Having a clear disclaimer of the limitations of the site’s resources will help define boundaries, for example:
2) Mute Button
A. Individual player-level mute button.
B. List you can browse to unmute.
C. Players emphasized that this will empower individuals to not be harassed, while limiting the need to outright penalize or ban someone.
D. Above in this thread, mat747 suggests IRC controls like in FoldIt, for which a GitHub ticket has been made:
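A per-player mute list like the one proposed in A–C above might look like this on the client side. The class and method names are hypothetical, assuming a chat client that filters incoming messages before display:

```python
# Minimal sketch of an individual mute list: each player keeps their own
# set of muted names, and the client checks it before showing a message.
# All names here are illustrative assumptions.

class ChatFilter:
    def __init__(self):
        self.muted = set()

    def mute(self, player: str):
        self.muted.add(player)

    def unmute(self, player: str):
        # A browsable mute list (item B above) makes undoing this easy.
        self.muted.discard(player)

    def visible(self, sender: str) -> bool:
        """Should a message from this sender be displayed?"""
        return sender not in self.muted
```

Because the filter lives entirely on the viewer’s side, it silences harassment for that individual without penalizing or banning anyone globally, matching the intent of item C.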
3) Report Channel / Log
A. Clear method / central place for reporting bullies, spam, etc…
B. Repeat complaints from x # of multiple accounts sharing same IP result in IP-level penalty / ban.
- Leave tolerance for multiple accounts on same IP legitimately belonging to, say, hoglahoo & his bots,
or xyz player and their family members who use the same computer but with different accounts.
C. All chat logged for reference. ( I was asked if this is the case… I assume as much, but want to check. )
4) Chat Moderators
A. Potentially get some impartial outsiders. Unknown who, how, or from where, but I like this idea for a few reasons:
- We could crowdsource to trustworthy teachers/psychologists with experience and a wish to make a difference.
- Perhaps issue an appeal for teachers to volunteer along with the NOVA announcement?
- With incoming NOVA kids, players are concerned that the few experienced players will become quickly out-numbered.
- Players note that their strengths are in sharing and teaching RNA folding, not in disciplining behavior.
B. Dev-approved veterans with x # years.
C. Top players with x # points.
D. Adults with specific intent to protect children, not just ban people randomly.
E. Potentially the ability to implement penalty & / or ban, subject to dev approval, and revokable on abuse.
F. Once approved, if the devs are too busy to maintain, then potentially consider the ability for dev-approved moderators to approve x # of additional moderators & / or x # of secondary moderators ( 6) below ).
G. If multiple moderators online at same time, could implement consensus-based penalty / ban approval, to increase Tolerance level.
H. Could review chat log occasionally in addition to moderating live chat.
5) Chat Bot Moderator
A. To take some responsibility & labor off the shoulders of devs & players.
B. Boundaries sourced from Chat Guidelines.
C. Autoreply in response to blacklist keywords, e.g. similar to dictionarybot:
output: ‘Please see Chat Guidelines, x # offenses will result in x penalty / ban. Children are present & chat / IPs are logged…’
D. Silence bot / shorten message for offenses occurring between 1st & last warning, to prevent bot from being used as spam itself.
E. To be extra kind, potentially give final warning before last chance penalty escalation / ban.
F. Potential auto-penalty / ban in response to: ( choose highest possible threshold of tolerance )
- x # use of blacklist keywords ( daily & / or global tallies with fair threshold ? )
- x # spam sequential posts in chat ( e.g. 10+ threshold, give room for loquaciousness, helpful bots, & dev chat ? )
- x # user accounts from same IP ( high enough threshold to permit legit hogla bots / multiple family members, for example )
G. Potentially have some or all warning messages visible only to intended recipient, to reduce spamming channel with warnings. Potentially with global unmasked view accessible to moderators.
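Items C and D above could be sketched together: keyword-triggered warnings with a cap so the bot cannot itself be used to spam the channel. The blacklist contents, warning cap, and function name are all illustrative assumptions:

```python
# Sketch of the bot moderator: reply to blacklisted keywords, but go
# silent between the last warning and the penalty (item D) so the bot
# itself cannot be abused as a spam source.
from typing import Optional

BLACKLIST = {"badword1", "badword2"}  # boundaries sourced from Chat Guidelines
MAX_WARNINGS = 3                      # warnings allowed before going silent

def bot_reply(message: str, warnings_so_far: int) -> Optional[str]:
    """Return a warning string, or None if the bot should stay silent."""
    if not BLACKLIST & set(message.lower().split()):
        return None  # nothing objectionable found
    if warnings_so_far >= MAX_WARNINGS:
        return None  # final warning already given; escalate silently
    return ("Please see the Chat Guidelines; repeat offenses will result "
            "in a penalty or ban. Children are present and chat is logged.")
```

Per item G, the returned warning could be delivered only to the intended recipient rather than broadcast, with an unmasked view reserved for moderators.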
6) Crowd-Sourced Moderation
A. As noted in Tolerance, not intended to permit popularity contest among players, but rather to flag truly concerning behavior.
- Perhaps limited to highest violations in Guidelines
- Easily revokable, perhaps by approved moderators in 4) above.
B. Like craigslist spam vote button: consensus-based x # anonymous votes = penalty / ban & report log.
C. To ensure the reported player name is correct, instead of a button / link, perhaps it could work like right-clicking a name in chat to cast a vote?
D. To prevent abuse if a lot of spammers logged on, maybe unlock with x # of points, or make this category of moderators pooled from a secondary volunteer position approved by 4) above?
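The consensus vote in B, combined with the points unlock in D, might work like this. Both thresholds are illustrative assumptions:

```python
# Sketch of consensus-based crowd moderation: a report only takes effect
# once enough distinct, qualified voters agree, and voting itself is
# unlocked by points to deter throwaway spam accounts (item D).

VOTE_THRESHOLD = 5       # distinct anonymous votes needed (item B)
MIN_VOTER_POINTS = 2500  # points required before a player's vote counts

def flag_player(votes: dict) -> bool:
    """votes maps voter name -> that voter's point total.

    Returns True if enough qualified voters reported the player;
    the resulting penalty would still be logged and revokable by
    the approved moderators in 4) above.
    """
    qualified = [pts for pts in votes.values() if pts >= MIN_VOTER_POINTS]
    return len(qualified) >= VOTE_THRESHOLD
```

Requiring several independent qualified votes keeps this from becoming a popularity contest, in line with item A.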
7) Unlock Chat After X
A. 95% of all trolls have exactly 0 points.
B. Having to spend 5-10 minutes playing would be similar to the inconvenience of the temporary penalty box, with the benefit of filtering abuse before it occurs.
C. Unlocking after, say, 2,500 points makes it very likely that chat will unlock before Chlamydomonas reinhardtii, so when they need help, they’ll receive it.
D. Regarding newbies who really need help: anyone can beat the tutorials, which are there to train in the first place. And clearing a few more puzzles isn’t beyond skills of anyone who’s really interested.
E. Another suggestion is to unlock after email verification. While this adds new liabilities due to the incorporation of sensitive data, it is a common practice and may help, similar to the points unlock.
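Combining the points gate (C) with the email-verification alternative (E), the unlock check reduces to a one-line condition. The 2,500 figure echoes item C above; the function name is hypothetical:

```python
# Sketch of the "unlock chat after X" gate: a player may chat once they
# have either earned enough points or verified their email. Threshold
# and names are illustrative assumptions.

POINT_THRESHOLD = 2500  # roughly where Chlamydomonas reinhardtii sits

def chat_unlocked(points: int, email_verified: bool) -> bool:
    return points >= POINT_THRESHOLD or email_verified
```

Either path imposes a small up-front cost on a fresh account, which is what makes banning effective: a kicked troll cannot simply re-register and resume immediately.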
Thanks for the excellent summation machinelves.
I really like the concept in #7, i.e. requiring puzzle completion before gaining access to chat. Since creating a new user ID is fairly easy, getting around a ban by coming back under another user name is a real problem. But if the troll had to earn 10,000 points (as an example), they might quickly tire and go somewhere else.
I really like the points unlock thing. If you’re not going to play, you shouldn’t be here anyway.