kaniini's blog! @kaniini@blog.dereferenced.org

This is the second article in a series that will be a fairly critical review of ActivityPub from a trust & safety perspective. Stay tuned for more.

In our previous episode, I laid out some personal observations about implementing an AP stack from scratch over the past year. When we started this arduous task, there were only three other AP implementations in progress: Mastodon, Kroeg and PubCrawl (the AP transport for Hubzilla), so it has been a pretty significant journey.

I also described how ActivityPub was a student of the 'worse is better' design philosophy. Some people felt a little hurt by this, but they shouldn't have: after all, UNIX (of which modern Linux and BSD systems are a derivative) is also a student of the 'worse is better' philosophy. And much like the unices of yesteryear, ActivityPub right now has a lot of missing pieces. But that's alright, as long as the participants in this experiment understand the limitations.

For the first time in decades, the success of ActivityPub, driven in part by its aggressive adoption of the 'worse is better' philosophy (which enabled its authors to actually ship something), has gained enough traction to inspire people to believe that perhaps we can take back the Web and make it open again. This in itself is a wonderful thing, and we must do our best to seize this opportunity and run with it.

As I mentioned, there have been a huge number of projects looking to implement AP in some way or another, many not yet public but seeking guidance on how to write an AP stack. My DMs have been quite busy with questions about ActivityPub over the past couple of months.

Let's talk about the elephant in the room. Actually, no, not that one.

ActivityPub has been brought this far by the W3C Social CG. This is a Community Group that was chartered by the W3C to advance the Social Web.

While they did a good job of getting some of the best minds into the same room to talk about building a federated social web, a lot of decisions were already predetermined (using pump.io as a basis) or left underspecified to satisfy other groups inside the W3C. Finally, while the ActivityPub specification itself claimed that pure JSON could be used to implement it, the W3C kept pushing for layered specs on top, like JSON-LD Linked Data Signatures, a spec that is not yet finalized and depends on JSON-LD.

LDS has a lot of problems, but I have covered them already. You can read about some of those problems by reading up on a mitigation known as Blind Key Rotation. Anyway, this isn't really about the W3C pushing for use of LDS in AP; that is just one illustrative example of trying to bundle JSON-LD and its dependencies into ActivityPub to make JSON-LD a de facto requirement.

Because of this bundling issue, we established a new community group called LitePub, meant to be a workspace for people actually implementing ActivityPub stacks, so that they could get documentation and support for using ActivityPub without JSON-LD, or for using JSON-LD in a safe way. To date, the LitePub community is one of the best resources for asking questions about ActivityPub and getting real answers that can be used in production today.

But to build the next generation of ActivityPub, the LitePub group isn't enough. Is the W3C still interested? Unfortunately, from what I can tell, not really: they are pursuing another system developed in house called SOLID, which is built on the Linked Data Platform. Since SOLID is being developed by W3C top brass, I would assume they aren't interested in stewarding a new revision of ActivityPub. And why would they be? SOLID is essentially a semantic-web retread of ActivityPub, which gives the W3C top brass exactly what they wanted in the first place.

In some ways, I would argue that the W3C's perceived disinterest in Social Web technologies other than SOLID largely has to do with fediverse projects having a very lukewarm response to JSON-LD and LDS.

The good news is that there have been some initial conversations between a few projects on what a working group to build the next generation of ActivityPub would look like, how it would be managed, and how it would be funded. We will be having more of these conversations over the next few months.

ActivityPub: the present state

In the first blog post, I went into a little detail about the present state of ActivityPub. But is it really as bad as I said?

I am going to break down a few examples of faults in the protocol and talk about their current state as well as what we are doing for short-term mitigations and where we are doing them.

Ambiguous addressing: is it a DM or just a post directly addressed to a circle of friends?

As Osada and Hubzilla started to get attention, Mastodon and Pleroma users started to see weird behavior in their notifications and timelines: messages from people they didn't necessarily follow which got directly addressed to the user. These are messages sent to a group of selected friends, but can otherwise be forwarded (boosted/repeated/announced) to other audiences.

In other words, they do not have the same semantic meaning as a DM. But due to the way they were addressed, Mastodon and Pleroma saw them as a DM.

Mastodon fixed this issue in 2.6 by adding heuristics: if a message has recipients in both the to and cc fields, then it's a public message that is addressed to a group of recipients, and not a DM. Unfortunately, Mastodon treats it similarly to a followers-only post and does not infer the correct rights.

Meanwhile, Pleroma and Friendica came up with the idea to add a semantic hint to the message with the litepub:directMessage field. If this is set to true, it should be considered as a direct message. If the field is set to false, then it should be considered a group message. If the field is unset, then heuristics are used to determine the message type.
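A minimal sketch of this classification logic in Python. The `litepub:directMessage` field is the real extension described above; the function shape, the ordering of the heuristics, and the `recipient_followers` parameter are my own illustrative assumptions, not Pleroma's or Mastodon's actual code:

```python
# Assumed sketch: classify an incoming activity by its addressing.
PUBLIC = "https://www.w3.org/ns/activitystreams#Public"

def message_type(activity: dict, recipient_followers: str) -> str:
    """Classify an activity as 'public', 'group', or 'direct'.

    `recipient_followers` is the URI of the local recipient's
    followers collection (an assumed parameter for illustration)."""
    to = activity.get("to", [])
    cc = activity.get("cc", [])

    # The explicit semantic hint wins, when present.
    hint = activity.get("litepub:directMessage")
    if hint is True:
        return "direct"
    if hint is False:
        return "group"

    # Otherwise, fall back to heuristics like Mastodon 2.6's.
    if PUBLIC in to or PUBLIC in cc:
        return "public"
    if recipient_followers in to or recipient_followers in cc:
        return "group"   # followers-addressed, not a DM
    if to and cc:
        return "group"   # recipients in both fields: an addressed circle
    return "direct"      # only individual actors addressed
```

The hint-first ordering means a server never has to guess when the sender was explicit, and heuristics only apply to legacy traffic.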

Pleroma has a branch in progress which adds both support for the litepub:directMessage field as well as the heuristics. It should be landing shortly (it needs a rebase and I need to fix up some of the heuristics).

So overall, the issue is reasonably mitigated at this point.

Fake direction attacks

Several months ago, Puckipedia did some fake direction testing against mainstream ActivityPub implementations. Fake direction attacks are especially problematic because they allow spoofing to happen.

She found vulnerabilities in Mastodon, Pleroma and PixelFed, as well as, more recently, in a couple of other fediverse projects.

The vulnerabilities she reported in Mastodon, Pleroma and PixelFed have been fixed, but, as she observes, the class of vulnerability keeps reappearing.

In part, we can mitigate this by writing excellent security documentation and referring people to read it. This is something that I hope the LitePub group can do in the future.

But for now, I would say this issue is not fully mitigated.

Leakage caused by Mastodon's followers-only scope

Software which is directly compatible with the Mastodon followers-only scope has a few problems, which I am grouping together here:

  • New followers can see content that was posted before they were authorized to view any followers-only content

  • Replies to followers-only posts are addressed to their own followers instead of the followers collection of the OP at the time the post was created (which creates metadata leaks about the OP)

  • Software which does not support the followers-only scope can dereference the OP's followers collection in any way they wish, including interpreting it as as:Public (this is explicitly allowed by the ActivityStreams 2.0 specification, you can't even make this up)

Mitigation of this is actually incredibly easy, which makes me question why Mastodon didn't do it to begin with: simply expand the followers collection when preparing to send the message outbound.
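A minimal sketch of that mitigation, assuming a mapping from followers-collection URIs to the concrete follower lists known at send time (the function and parameter names here are hypothetical, not actual Mastodon or Pleroma code):

```python
def expand_recipients(activity: dict, followers: dict) -> dict:
    """Replace followers-collection URIs with the concrete actor URIs
    known at send time, so someone who follows later cannot gain access
    to content posted before they were authorized.

    `followers` maps a collection URI to its current list of follower
    actor URIs; how you obtain it is implementation-specific."""
    out = dict(activity)
    for field in ("to", "cc"):
        expanded = []
        for uri in activity.get(field, []):
            # Expand known collections; pass anything else through as-is.
            expanded.extend(followers.get(uri, [uri]))
        out[field] = expanded
    return out
```

Because the recipient set is frozen into the outbound message, remote software that doesn't understand the followers-only scope has nothing left to misinterpret.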

An implementation of this will be landing in Pleroma soon to harden the followers-only scope as well as fix followers-only threads to be more usable.

Implementation of this mitigation also brings the followers-only threads to Friendica and Hubzilla in a safe and compatible way: all fediverse software will be able to properly interact with the threads.

The “don't @ me” problem

Some of this interpretation of Zot may be slightly wrong, as it is based on reading the specifications for Zot and Zot 6.

Other federated protocols such as DFRN, Zot and Zot 6 provide a rich framework for defining what interactions are allowed with a given message. ActivityPub doesn't.

DFRN provides UI hints on each object that hint at what may be done with the object, but uses a capabilities system under the hood. Capability enforcement is done by the “feed producer,” which either accepts your request or denies it. If you comment on a post in DFRN, it is the responsibility of the parent “feed producer” to forward your post onward through the network.

Zot uses a similar capabilities system but provides a magic signature in response to consuming the capability, which you then forward as proof of acceptance. Zot 6 uses a similar authentication scheme, except using OpenWebAuth instead of the original Zot authentication scheme.

For ActivityPub, my proposal is to use a system of capability URIs and proof objects that are cross-checked by the receiving server. Cryptographic signatures are not a component of the proof objects themselves; the system is strictly capability based. Cryptographic verification could be provided by leveraging HTTP Signatures to sign the response, if desired. I am still working out the details of how precisely this will work, and that will probably be what the next blog post is about.

As a datapoint: in Pleroma, we already use this cross-checking technique to verify objects which have been forwarded to us due to ActivityPub §7.1.2. This allows us to avoid JSON-LD and LDS signatures and is the recommended way to verify forwarded objects in LitePub implementations.
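A rough sketch of that cross-checking technique, with the HTTP client abstracted away as a `fetch` callable (the function shape is illustrative, not Pleroma's actual implementation):

```python
def verify_forwarded_object(forwarded: dict, fetch):
    """Cross-check a forwarded object against its origin (AP §7.1.2).

    Rather than trusting the inlined copy (or an LDS signature over
    it), refetch the object from its canonical `id` and treat the
    origin server's copy as the source of truth. `fetch` is a stand-in
    for your HTTP client: it GETs a URI with
    `Accept: application/activity+json` and returns the parsed JSON,
    or None on failure."""
    obj_id = forwarded.get("id")
    if not isinstance(obj_id, str) or not obj_id.startswith("https://"):
        return None  # refuse missing or non-HTTPS ids outright
    canonical = fetch(obj_id)
    if not canonical or canonical.get("id") != obj_id:
        return None
    return canonical  # process this copy, never the forwarded one
```

The forwarded copy is used only to discover the `id`; everything else about it is discarded, which is what makes the spoofing surface so small.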

Unauthenticated object fetching

Right now, due to the nature of ActivityPub and the design motivations behind it, fetching public objects is entirely unauthenticated.

This has led to a few incidents where fediverse users have gotten upset over their posts still arriving at servers they have blocked, since they naturally expect that blocking a server stops posts from arriving there.

Mastodon has implemented an extension for post fetching where fetching private posts is authenticated using the HTTP Signature of the user who is fetching the post. This is a possible way of solving the authentication problem: instances can be identified based on which actor signed the request.

However, I don't think fetching private posts this way is a good idea (such fetches should always fail), and I wouldn't recommend it. With that said, a more generalized approach based on using HTTP Signatures to fetch public posts could be workable.

But I do not think the AP server should use a random user's key to sign the requests: instead there should be an AP actor which explicitly represents the whole instance, and the instance actor's key should be used to sign the fetch requests instead. That way information about individual users isn't leaked, and signatures aren't created without the express consent of a random instance user.
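A sketch of what an instance-actor signed fetch could look like, assuming a `sign` callable that holds the instance actor's RSA private key; the keyId URI, actor location, and header set here are illustrative choices on my part, not a settled convention:

```python
import base64
from datetime import datetime, timezone

def signed_fetch_headers(target_host: str, path: str,
                         our_domain: str, sign) -> dict:
    """Build HTTP Signature headers for an object fetch, signed by a
    dedicated instance actor rather than a random local user's key.

    `sign` RSA-signs the given bytes with the instance actor's
    private key and returns the raw signature bytes."""
    key_id = f"https://{our_domain}/actor#main-key"  # illustrative URI
    date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
    # The signing string covers the request target, host, and date.
    signing_string = "\n".join([
        f"(request-target): get {path}",
        f"host: {target_host}",
        f"date: {date}",
    ])
    sig = base64.b64encode(sign(signing_string.encode())).decode()
    return {
        "Host": target_host,
        "Date": date,
        "Signature": (
            f'keyId="{key_id}",algorithm="rsa-sha256",'
            f'headers="(request-target) host date",signature="{sig}"'
        ),
    }
```

Because the keyId dereferences to the instance actor rather than a user, the remote side learns which instance is fetching without any individual user's key being spent on the request.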

Once object fetches are properly authenticated in a way that instances are identifiable, then objects can be selectively disclosed. This also hardens object fetching via third parties such as crawlers.


In this particular blog entry, I discussed why ActivityPub is still the hero we need despite being designed with the 'worse is better' philosophy, outlined some early plans for cross-project collaboration on a next-generation ActivityPub-based protocol, and walked through a few of the common problem areas in ActivityPub and how we can mitigate them in the future.

And with that, despite the present issues we face with ActivityPub, I will end this by borrowing a common saying from the cryptocurrency community: the future is bright, the future is decentralized.


hacker teen puck

in reply to @kaniini@blog.dereferenced.org
CW: unfiltered thoughts

kaniini recently posted a few things on the security of the fediverse, and i've been getting the feeling they are either misunderstanding things or purposely seeding FUD into the system. So I went through the technical arguments and found... a bunch of things that seem to make no sense, based on my knowledge of how this all works:

> Software which does not support the followers-only scope can dereference the OP's followers collection in any way they wish, including interpreting it as as:Public (this is explicitly allowed by the ActivityStreams 2.0 specification, you can't even make this up)

(ignoring the fact that as:Public isn't even in the AS2 spec) Oh, but you can make it up. I had to put on my detective's hat to figure out what the rabbit pulled out of their hat, because this makes no fucking sense; it would only be the case if you purposely read the spec wrong. So, here we go!!

If you have 10000 followers, having to send one POST to each inbox would be a lot of work. So instead, you can group followers by their sharedInbox and POST to each sharedInbox once, trusting that the receiving server will deliver the post to all followers. But how does this interact, in any way, with the above? Well, if you have a public post, it's public. So, to allow for better delivery of replies, the sharedInbox can be delivered to if the post is public (aka addressed to as:Public), with the intent that it just sits there, so the server can use it to build up e.g. a federated timeline. I cannot find a reading in which the spec suggests follower collections can be read as 'public'.
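A quick sketch of that grouping, assuming the follower actor documents have already been fetched (the `endpoints.sharedInbox` and `inbox` field names are from the AP actor vocabulary; the function itself is illustrative):

```python
from collections import defaultdict

def group_by_shared_inbox(followers: list) -> dict:
    """Group follower actors by delivery endpoint: use the advertised
    endpoints.sharedInbox when present, otherwise fall back to the
    actor's personal inbox. Each element of `followers` is an
    already-fetched actor document (a dict)."""
    deliveries = defaultdict(list)
    for actor in followers:
        endpoint = (actor.get("endpoints", {}).get("sharedInbox")
                    or actor["inbox"])
        deliveries[endpoint].append(actor["id"])
    return dict(deliveries)
```

With 10000 followers concentrated on a handful of servers, this collapses delivery down to one POST per sharedInbox instead of one per follower.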

> Mitigation of this is actually incredibly easy, which makes me question why Mastodon didn't do it to begin with: simply expand the followers collection when preparing to send the message outbound.

Well. quick thought process: what if there was no sharedInbox, and you have a 10000 large follower collection? It's not like you want to put all those 10000 followers in each post. So what do you do? You just don't. You just POST the post to each user's inbox, and let the receiving server handle it. This is actually exactly as specified. Not only that, but it allows for behavior that would otherwise be impossible. For example, the bcc and cc sets in a post. Magic, no?

> She found vulnerabilities in Mastodon, Pleroma and PixelFed, as well as recently a couple of other fediverse software.

The two vulns that started the thread are... very hard to defend against, as everything is doing exactly what it was specified to do; nothing was running out of spec. And a bit later, I did find a vuln that covers *all* the fediverse software except Kroeg and Mastodon. And at this point, disclosure gets, well, exponentially hard. What do I do? tbh, at this point just saying "fediverse devs please message me if you want a copy of the vuln" is the only viable option, i feel...

And, for good measure:
> LDS has a lot of problems, but I already covered them already. You can read about some of those problems by reading up on a mitigation known as Blind Key Rotation.

This seems the most FUDdy, as HTTP signatures have all these same issues: They are irrevocable, can be permanently used as proof of posting (until keys change, of course), and-- wait, i just got hold of another post from them, let's look at that too

> as for proving the content existed -- not really. it only proves you signed some headers, including a digest header. hash collision attack is a plausible defense at the hash strength chosen.

... from my view, it seems you are either suggesting SHA-256 is hash-collidable, or that you should use a hash that is provably broken already to build the digest??? this seems ... ill-advised. And even then, the chance of finding a hash collision that changes /just/ the content of the post in such a way that it changes the entire meaning or even just replaces the content is very very very very very very small.


mastodon gold account holder @kaniini@pleroma.site

in reply to @puckipedia@puckipedia.com
CW: unfiltered thoughts

as:Public is part of AP and is defined as a special collection, but AS2 does not require implementations to have knowledge of collections. yes, obviously an implementation of AP that just dumps whatever it doesn't know into as:Public is not compliant, but many implementations found in the wild do this.

I will however revise that part to be more fair to AS2/AP.

I have other reasons for expanding the cc list, the followers-only issue isn't entirely related to delivery, but also to disclosure of past messages before you had authorization to see them (if you follow someone later then you see their messages from the past).

the article describing blind key rotation also explains that this is a problem with HTTP sigs as well, but LDS is a larger problem, because there are a bunch of LDS-signed objects sitting around on people's disks (you have to keep the sigs in order to forward the graphs, after all). both schemes have problems, but LDS is a much bigger one: not only was a signature made that cannot be revoked, but HTTP signatures are detached from the object and only sign the message as a whole.

hacker teen puck

in reply to @kaniini@pleroma.site
CW: unfiltered thoughts

@kaniini expanding the follower collection (by, say, "virtually" putting it in bcc) only really matters in transport, the server instance can do whatever it wants. Storing the graphs, in current implementations, is only done while forwarding it, aka until the post has been delivered to all the servers. So up to a week in the incredibly worst case, but less than a few hours in 99% of the cases. And the LD signatures are also only valid with the exact graph, so you still need the full object (the same one that was HTTP signed) to validate it


mastodon gold account holder @kaniini@pleroma.site

in reply to @puckipedia@puckipedia.com
CW: re: unfiltered thoughts

and, what about, say, a JSON-LD-ified Pleroma which stores the graphs indefinitely because they are in essence the source of truth (much as we consider our AP-like IR form of AS2 to be our source of truth)? would you not be concerned about LDS signatures then? what happens *now* is not the only variable (although storing signed graphs up to a week makes me squeamish).

I know you think I mostly beat up on LDS (well, okay, I do), but I don't like the construction in general, including HTTP Signatures. the point of Blind Key Rotation is to try to mitigate these risks as much as possible. if you want, I can discuss more about HTTP Sigs having the same problems (but you must admit that nobody is storing HTTP sigs at any time).

hacker teen puck

in reply to @kaniini@pleroma.site
CW: re: unfiltered thoughts

@kaniini "what happens *now* is not the only variable" so when it's about access control in current implementations it's all about now, but when it's about LD signatures it's not just about now? I mean, i'm not pro-LDsignatures either (mostly because i don't have the attention span to implement them). And of course if they were actually stored in any amount then i would suggest not doing so. but right now? meh.

Implementing decentralized systems that are deniable either requires handshaking beforehand (say, WebSub, or TLS client certs) or, what seems to be the only other solution to deniability in decentralized systems, a pickup system: not actually signing the object's contents. Like, replace the embedded object in the activity with just a URI, then sign that and POST it. Now the HTTP signatures don't really sign any useful information.

Also, you can't say whether anyone is storing HTTP signatures or not; there are enough instances running behind load balancers like Cloudflare, and who knows what happens behind their closed doors..?