Combating abuse in Matrix - without backdoors (Matrix blog)
"Just like the Web, Email or the Internet as a whole, there is literally no way to unilaterally censor or block content in Matrix. But what we can do is provide first-class infrastructure to let users (and room/community moderators and server admins) make up their own mind about who to trust, and what content to allow. This would also provide a means for authorities to publish reputation data about illegal content, providing a privacy-respecting mechanism that admins/mods/users can use to keep illegal content away from their servers/clients."
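For concreteness, here is a minimal sketch of the opt-in reputation-list filtering the quoted post describes. All names and the score format are hypothetical; the blog post does not prescribe a concrete format.

    # Sketch: a client subscribes to reputation lists of its own choosing
    # and uses them to decide whether to show content from an entity.
    from dataclasses import dataclass, field

    @dataclass
    class ReputationList:
        name: str
        scores: dict  # entity (user/room/server ID) -> score in [-1.0, 1.0]

    @dataclass
    class Subscriber:
        lists: list = field(default_factory=list)
        threshold: float = -0.5  # hide anything scored below this

        def score(self, entity: str) -> float:
            # Average the scores of the lists this user chose to trust;
            # unknown entities default to neutral (0.0).
            votes = [l.scores[entity] for l in self.lists if entity in l.scores]
            return sum(votes) / len(votes) if votes else 0.0

        def should_show(self, entity: str) -> bool:
            return self.score(entity) >= self.threshold

    spam_list = ReputationList("community-spam", {"@spammer:example.org": -1.0})
    me = Subscriber(lists=[spam_list])
    assert not me.should_show("@spammer:example.org")
    assert me.should_show("@newcomer:example.org")  # neutral by default

The point of the design is that the user picks which lists feed the score; nothing is filtered unless the user (or their admin/moderator) opted into a list that says so.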
Posted Oct 20, 2020 14:20 UTC (Tue)
by arcivanov (subscriber, #126509)
[Link] (8 responses)
Posted Oct 20, 2020 14:40 UTC (Tue)
by nix (subscriber, #2304)
[Link] (2 responses)
Posted Oct 20, 2020 15:20 UTC (Tue)
by smoogen (subscriber, #97)
[Link] (1 responses)
Posted Oct 20, 2020 16:47 UTC (Tue)
by raven667 (subscriber, #5198)
[Link]
I'm sure none of this will be perfect, but it doesn't have to be; it just has to be better.
Posted Oct 20, 2020 18:18 UTC (Tue)
by martin.langhoff (subscriber, #61417)
[Link]
Posted Oct 20, 2020 18:24 UTC (Tue)
by Wol (subscriber, #4433)
[Link] (1 responses)
If I flag the people *I* trust, and then the algorithm preferentially weights the people they trust as "people I trust", then that's hard to game. It's expensive to implement, though.
And it's also susceptible to the "echo chamber" effect - if I trust people I *like*, then I'll only see stuff I agree with. If I trust people I *respect*, then I'll see a far wider spread. There's quite a few people here I've had run-ins with, but whether I like them or not I respect their honesty, skills and integrity. I hope there's people here who feel the same way about me - I'm pretty certain there are some :-)
But the problem with all of this is that with so many voices out there competing to be heard, it's almost inevitable that the voices that disagree with you will be preferentially filtered out, even if their arguments are good and compelling ... the more people who filter on "like", the less impact filtering on "respect" will have :-(
Cheers,
Wol
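A toy sketch of the kind of weighted, transitive trust Wol describes: people I flag get full trust, and trust then propagates, damped, along their own flags. This is a simplified EigenTrust-style iteration; the damping factor and round count are arbitrary illustrative choices, not anything Matrix specifies.

    def propagate_trust(flags, me, damping=0.5, rounds=10):
        """flags: dict mapping each user to the set of users they flag."""
        trust = {u: 0.0 for u in flags}
        trust[me] = 1.0
        for _ in range(rounds):
            nxt = {u: 0.0 for u in flags}
            nxt[me] = 1.0
            for u, score in trust.items():
                out = flags.get(u, set())
                for v in out:
                    # Each user passes on a damped share of their trust,
                    # split among everyone they flag.
                    nxt[v] += damping * score / len(out)
            trust = nxt
        return trust

    flags = {
        "me": {"alice", "bob"},
        "alice": {"carol"},
        "bob": {"carol", "dave"},
        "carol": set(),
        "dave": set(),
    }
    t = propagate_trust(flags, "me")
    # carol is flagged by two people I trust, so she outranks dave:
    assert t["carol"] > t["dave"] > 0.0

The "expensive to implement" part Wol mentions is real: unlike this toy, a deployed version has to recompute over a large, constantly changing graph.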
Posted Oct 25, 2020 19:34 UTC (Sun)
by NYKevin (subscriber, #129325)
[Link]
As a result, even if everyone filters on "respect" instead of "like," you will still get bubbles or echo chambers, they'll just be a bit broader than you might otherwise see.
Posted Oct 21, 2020 10:37 UTC (Wed)
by k3ninho (subscriber, #50375)
[Link]
The problem is hard, and they know that. Element are looking to adopt an approach, learn from it, and enable their userbase to protect itself according to its own free choices. We build communities from meaningful human engagement, and this looks to protect those communities with insight into who or what they should trust. It still remains a hard problem.
K3n.
Posted Oct 22, 2020 15:22 UTC (Thu)
by cyphar (subscriber, #110703)
[Link]
Posted Oct 21, 2020 2:08 UTC (Wed)
by IanKelling (subscriber, #89418)
[Link]
Posted Oct 21, 2020 7:53 UTC (Wed)
by Sesse (subscriber, #53779)
[Link] (2 responses)
Also, what reputation would you give a completely unknown user on your server? (Assume the user could be a person who's never used Matrix before, _or_ the person you just banned, who has just created a new identity.)
Posted Oct 21, 2020 8:01 UTC (Wed)
by mjg59 (subscriber, #23239)
[Link]
Posted Oct 24, 2020 13:47 UTC (Sat)
by gdt (subscriber, #6284)
[Link]
I think that's a bit harsh on the PGP Web of Trust. The reason it doesn't work well is that ubiquitous deployment of the web of trust was ferociously resisted by the Wassenaar Arrangement treaty partners. So PGP never made it into the baseline e-mail client Pine, and thus never into the feature set of competing products such as Netscape Communicator or Microsoft Outlook as part of the typical user experience. We're still paying the price for that decision of the intelligence/defence community. The success of phishing crimes is partly because e-mail clients lack a rigorous notion of trust.
Posted Oct 22, 2020 1:19 UTC (Thu)
by landley (guest, #6789)
[Link] (4 responses)
You decide which reputation sources you use, not China's government. Also, the only effect of ruining your reputation on Matrix is that many people on Matrix don't read you; you can still travel, your children can still go to national universities, etc.
Posted Oct 22, 2020 8:16 UTC (Thu)
by anton (subscriber, #25547)
[Link] (3 responses)
What I wonder about the system is how they deal with new accounts. If someone with a bad reputation can make a new account and start again with a clean slate, the end result may be that many will only read accounts with a good reputation. But then, if you are new and nobody reads you, how do you get a good reputation?
Posted Oct 23, 2020 7:02 UTC (Fri)
by hifi (guest, #109741)
[Link] (2 responses)
Creating an account is free and anonymous, and if you create a system that requires some sort of trust level to participate, it essentially makes getting in hard or even impossible.
At the other end of the spectrum, if you make duplicate account creation almost impossible, you can "trust" that a person on a server is truly who they claim to be, regardless of whether you agree with their views.
Some sort of verified account system which gives you an indisputable identity across the network would be the best way to combat abuse and spam, by creating communities that require said verification. Then you can start filtering based on views and behavior.
The problem with all this is: what entity would you trust to verify people? That could be simplified to known trusted servers operated by people who claim to verify that there's a real person behind an account, but that itself is subject to abuse by the administrators.
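A minimal sketch of the "trusted verifying server" idea above: a server that claims to have verified a real person signs an attestation binding the account ID, and anyone holding the server's public key can check it. The format is hypothetical, and the example uses Ed25519 signatures from the Python 'cryptography' package.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    server_key = Ed25519PrivateKey.generate()  # held by the verifying server

    def attest(account_id: str) -> bytes:
        # The server only signs after (somehow) verifying a real person;
        # that verification step is exactly the part you have to trust.
        return server_key.sign(account_id.encode())

    def is_verified(account_id: str, attestation: bytes, server_public) -> bool:
        try:
            server_public.verify(attestation, account_id.encode())
            return True
        except InvalidSignature:
            return False

    sig = attest("@alice:trusted.example")
    pub = server_key.public_key()
    assert is_verified("@alice:trusted.example", sig, pub)
    assert not is_verified("@mallory:evil.example", sig, pub)

As the comment notes, this only relocates the trust problem: the cryptography proves the server vouched for the account, not that the server's operators are honest.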
Posted Oct 23, 2020 7:59 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link]
There are some ways to deal with this: postings from zero-reputation or even negative-reputation accounts can be shown to a few people, with a delay, and with the information that the posting is not widely shown and that answering it may give the original posting a wider audience than it has now. Or an answer to such a posting could be quarantined in a similar way to the posting itself. "Do not feed the troll" has seen limited success on Usenet, but with some audience-limiting measures it might work better.
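For concreteness, a toy sketch of that audience-limiting idea: posts from accounts at or below zero reputation start with a small, delayed, labelled audience instead of being shown to everyone at once. The thresholds, sample size, and delays are made-up illustrative values.

    import random

    def exposure(reputation: float, audience: list):
        """Return (subset of audience to show to, delay in minutes, label)."""
        if reputation > 0:
            return audience, 0, None
        # Zero or negative reputation: show to a small random sample,
        # after a delay, with a notice that replies widen the audience.
        sample = random.sample(audience, k=min(5, len(audience)))
        delay = 30 if reputation == 0 else 120
        label = ("Shown to a limited audience; replying may give it "
                 "a wider audience than it has now.")
        return sample, delay, label

    readers, delay, note = exposure(0.0, [f"user{i}" for i in range(100)])
    assert len(readers) == 5 and delay == 30 and note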
Posted Nov 4, 2020 10:55 UTC (Wed)
by anton (subscriber, #25547)
[Link]
Posted Oct 23, 2020 13:50 UTC (Fri)
by enkiusz (guest, #142702)
[Link]
Posted Oct 25, 2020 17:13 UTC (Sun)
by jnxx (guest, #142729)
[Link] (1 responses)
https://en.wikipedia.org/wiki/Kuro5hin
Posted Nov 10, 2020 12:02 UTC (Tue)
by ksandstr (guest, #60862)
[Link]
Posted Nov 10, 2020 12:35 UTC (Tue)
by ksandstr (guest, #60862)
[Link]
Indeed this proposal of what's essentially a government-distributed list of distrusted hashes (media fingerprints, whatever) appears to enable not just censorship, but also the persecution of those who do not subscribe to the Official Naughty List. To wit, the ONL would identify a superficially benign piece of media that'd be monitored by agents (bots) of The Man to surveil nodes where the list is not being obeyed so as to give their operators a discretionary Social Demerit; and not for breaking any law, but for turning Fritz off.
[0] the standard counterargument is that "perfect is the enemy of good". In the case of censorship, the argument goes, irreparably opaque and infinitely tyrant-friendly censorship is better than nothing, so it should be preferred. Subsequently advocates are very surprised at allegations that critique of censorship is among the first things suppressed: certainly it's more likely[1] that everyone trusts censorship and regards it s/h/its friend.
[1] and the argument is always made in terms of seat-of-the-pants handwaving about "likelihood" because censorship cannot be discussed[2] from concrete facts, those having been memory-holed.
[2] woop woop that's the KKK right thur, woop woop