When I say that ActivityPub is Turing Complete, I mean that I'm building a compiler where the source is json-ld (or another serialization format with a known schema), ActivityPub is the assembly language of a target virtual machine, and semantic triples are the machine code:

Front end:
- lexical analysis - convert characters to tokens
- syntax analysis - check for valid sequence of tokens
- semantic analysis - check for meaning
- some global/high-level optimization

Back end:
- some local optimization
- register allocation
- peep-hole optimization
- code generation
- instruction scheduling


The lexical and syntax analysis are handled transparently by the JSON library in most languages. Semantic analysis starts with the ActivityPub Rocks test suite; then you start carrying around the implementation-specific documentation of the software you hope to interop with. Global optimization mostly involves handling exceptions to normal ActivityStreams workflows, like ignoring a Delete request if you don't have a local copy of the object.
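A minimal sketch of that front end, with invented names: the `json` module does the lexing and parsing for free, `semantic_check` stands in for test-suite-style validation, and the "global optimization" is the Delete-without-a-local-copy case mentioned above.

```python
import json

# Hypothetical front-end sketch. The json module handles lexical and
# syntax analysis; we bolt on a semantic check and one "global
# optimization": silently dropping a Delete for an object we never stored.
REQUIRED = ("type", "actor")

def semantic_check(activity: dict) -> None:
    """Semantic analysis stand-in: require the fields every activity needs."""
    missing = [k for k in REQUIRED if k not in activity]
    if missing:
        raise ValueError(f"activity missing fields: {missing}")

def accept(raw: str, local_store: dict):
    activity = json.loads(raw)          # lexical + syntax analysis
    semantic_check(activity)            # semantic analysis
    if activity["type"] == "Delete":
        obj = activity.get("object")
        if obj not in local_store:      # no local copy: ignore silently
            return None
    return activity

# A Delete for an unknown object is dropped before it reaches the back end.
result = accept('{"type": "Delete", "actor": "https://a.example/u/bob", "object": "x"}', {})
print(result)  # -> None
```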

I could describe yaaps as a collection of compiler front ends for ActivityPub and compatibility protocols that emits a normalized form of serialized linked data. Bridges from other protocols could be implemented as front ends targeting the same representation.
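To make the intermediate representation concrete: here's an illustrative (not yaaps' actual) flattening of a parsed activity into semantic triples, the "machine code" of the opening post.

```python
# Hypothetical IR sketch: flatten a parsed activity into
# (subject, predicate, object) triples. Shapes and blank-node
# placeholders here are assumptions for illustration.
def to_triples(activity: dict) -> list:
    subject = activity.get("id", "_:activity")
    triples = []
    for predicate, value in activity.items():
        if predicate == "id":
            continue
        if isinstance(value, dict):      # nested object: link, then recurse
            child = value.get("id", "_:object")
            triples.append((subject, predicate, child))
            triples.extend(to_triples(value))
        else:
            triples.append((subject, predicate, str(value)))
    return triples

create = {
    "id": "https://a.example/act/1",
    "type": "Create",
    "actor": "https://a.example/u/bob",
    "object": {"id": "https://a.example/note/1", "type": "Note", "content": "hi"},
}
for t in to_triples(create):
    print(t)
```

Any front end (or protocol bridge) that can emit this shape plugs into the same back ends.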

In this model, collections are stacks and actors are virtual machines. The inbox, outbox, followers, and following collections are required stacks for each VM, with pointer registers, even if they aren't required collections in the protocol (e.g., shared inbox and multibox are the inboxes of service actors whose outboxes are privileged as relays to local destinations). The back end needs to be aware of the local protocol implementation, media handling, and authorizations. OCaps are handled implicitly in register allocation and the implementation of the VM: you can't process what you don't reference, and you don't reference what you don't want processed. I have designed two back ends, each targeting a different storage architecture: one optimized for throughput, the other for archival. The output of the protocol back end is a sanitized ActivityPub profile with storage optimized for the local architecture.
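A toy version of the actor-as-VM idea, assuming invented class and field names: the required collections are lists used as stacks, and a pointer register tracks the next inbox item to process.

```python
# Hypothetical actor VM. Collection names come from ActivityPub;
# everything else (class shape, the `ip` register, Follow handling)
# is an assumption for illustration.
class ActorVM:
    def __init__(self, actor_id: str):
        self.actor_id = actor_id
        # required "stacks" for each VM
        self.inbox: list = []
        self.outbox: list = []
        self.followers: list = []
        self.following: list = []
        self.ip = 0  # pointer register: next inbox item to process

    def push_inbox(self, activity: dict) -> None:
        self.inbox.append(activity)

    def step(self):
        """Process one activity and advance the pointer register."""
        if self.ip >= len(self.inbox):
            return None
        activity = self.inbox[self.ip]
        self.ip += 1
        if activity.get("type") == "Follow":
            self.followers.append(activity["actor"])
        return activity

vm = ActorVM("https://a.example/u/alice")
vm.push_inbox({"type": "Follow", "actor": "https://a.example/u/bob"})
vm.step()
print(vm.followers)  # -> ['https://a.example/u/bob']
```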

This, in turn, allows construction of virtual machines that process the protocol as symbols, agnostic of input protocols, storage, and the network. The VM tracks state and all side effects are external, so Activities are always processed atomically.
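One way to read "all side effects are external" (a sketch, with invented effect names): the VM never touches the network or storage itself; processing returns a new state plus a list of effect descriptions for the host to execute or discard, so each Activity either applies completely or not at all.

```python
# Hypothetical pure-processing step: returns (new_state, effects)
# instead of performing I/O. Effect tuples like ("deliver", ...) are
# illustrative, not a real yaaps interface.
def process(state: dict, activity: dict):
    new_state = dict(state)
    effects = []
    if activity["type"] == "Follow":
        followers = list(state.get("followers", []))  # copy, don't mutate input
        followers.append(activity["actor"])
        new_state["followers"] = followers
        # side effect described, not executed: host delivers the Accept
        effects.append(("deliver", activity["actor"], {"type": "Accept"}))
    return new_state, effects

state, effects = process({}, {"type": "Follow", "actor": "https://a.example/u/bob"})
print(state)    # -> {'followers': ['https://a.example/u/bob']}
print(effects)  # -> [('deliver', 'https://a.example/u/bob', {'type': 'Accept'})]
```

If the host crashes before executing the effects, the old state is still intact, which is what makes the per-Activity atomicity cheap.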

The consequence of this structure is a separation of concerns that allows me to tackle each layer in isolation. The intermediate representation between front end and back end is tractable to document. Storage can be optimized for applications independently of external considerations. State has guaranteed consistency. The VM can be implemented incrementally, even to the point of externalizing the development of features. (It can be left to authorized agents to define how specific protocol elements are processed, meaning that behavior definitions could be shared through federation.)
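The incremental part could look something like this sketch (registry and decorator names are assumptions): behavior per activity type lives in a handler table, so new definitions can be added, or in principle supplied by authorized agents, without touching the VM core.

```python
from typing import Callable

# Hypothetical handler registry: maps activity types to behavior
# definitions. Unknown types fall through as a no-op.
HANDLERS: dict = {}

def handler(activity_type: str) -> Callable:
    def register(fn: Callable) -> Callable:
        HANDLERS[activity_type] = fn
        return fn
    return register

@handler("Like")
def on_like(activity: dict) -> str:
    return f"{activity['actor']} liked {activity['object']}"

def dispatch(activity: dict) -> str:
    fn = HANDLERS.get(activity["type"])
    if fn is None:
        return "ignored"   # incremental: unimplemented types are safe no-ops
    return fn(activity)

print(dispatch({"type": "Like", "actor": "bob", "object": "note/1"}))
# -> bob liked note/1
```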

I'm not claiming that this approach is uniquely valid or especially relevant for most folks. It's just the level of abstraction that I happen to need in order to be confident that my work will be able to evolve with the network and the scope of the applications I have planned.

Dynamic Instruction Set Computing 

@yaaps turns out the actor model of computation is a thing
