@0x4d6165 well i think json is a helluva lot easier to parse than XML lmao, and I am an xml enthusiast, but trying to do server to server or client to server stuff in xml is not a good idea. xml is better suited for relatively static documents or one-way publishing like rss/atom feeds, and that's probably part of what held back early attempts at decentralized social media: how janky xml can be to work with. diaspora and OStatus are largely xml based and lean on the cursed and now abandoned Salmon protocol
i think ActivityPub stands on the shoulders of giants, as there was a solid decade+ of web ontology development that was basically abandoned around 2012-14 because the tech oligarchs didn't think it was going anywhere; the only people interested in semantics n such in the 2010s were indie web people
i didn't know parsing json-ld was a hassle in other languages. i'm sorry to keep shilling php here, but there's this built in function json_decode(): you hand it a json string (read the file in with file_get_contents first) and, with the second argument set to true, it gives you back an associative array, so you just work with an array, and when you want to turn it back into json you do json_encode and that's that. the whole backend of this forum i'm working on is just folders with json files in it until i get the sql tables figured out, and even then i kinda like the flatfile json idea
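a minimal sketch of that round trip with a flat-file post ( the filename and field names are made up for illustration ):

```php
<?php
// decode a flat-file post into an associative array
// (second argument true = arrays instead of stdClass objects)
$post = json_decode(file_get_contents('post.json'), true);

// work with it like any other array
$post['replyCount'] = ($post['replyCount'] ?? 0) + 1;

// re-encode and write it back out
file_put_contents('post.json', json_encode($post, JSON_PRETTY_PRINT));
```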
Eventually what I'm working on should serve as the basis of a minimal fedi instance. i've been playing around with a bunch of old fedi backups, and the weird thing i've noticed is that mastodon, pleroma/akkoma and the *key variants all format their backups slightly differently, and the way mastodon does it is highly redundant ( the more I learn about the technical stuff, the more i hate mastodon and its consequences )
but i'm not even gonna fuck with activitypub directly, i'm gonna use this library which handles most of that ( landrok.github.io/activitypub/ ) and basically just rip off the Misskey SQL tables lol
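i haven't verified this against the library yet, but going off my reading of its docs the basic usage is roughly something like this ( the Note content is obviously made up ):

```php
<?php
require 'vendor/autoload.php';

use ActivityPhp\Type;

// build a Note object; the library takes care of the json-ld side of things
$note = Type::create('Note', [
    'content' => 'hello fedi',
]);

echo $note->toJson();
```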
people talk shit about this and apub, but it's not actually the spec that's the problem, it's the fedi devs having corpo brain worms and weirdly limited visions for what is possible. there's nothing in the spec that prevents them from, say, having robust media management or integrating both micro-blogging and forums
case in point, the PROV ontology, which is primarily concerned with mapping relations of influence; you could use this to compile a RICO case www.w3.org/TR/2013/REC-prov-o-20130430/
@0x4d6165 people are just going to complain regardless... my solution to this has been offloading everything to the backend, using $_SESSION for a lot of stuff that I probably shouldn't, and using weird CSS hacks excessively. php helps with this approach, since writing switches that pick templates based on GET variables, pagination schemes, all of that is quite trivial
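something like this is what i mean by trivial; the view names and template paths are made up for illustration:

```php
<?php
session_start();

// pick a template based on a GET variable, default to the index
$view = $_GET['view'] ?? 'index';

switch ($view) {
    case 'thread':
        $page = max(1, (int) ($_GET['page'] ?? 1)); // naive pagination
        include 'templates/thread.php';
        break;
    case 'notifications':
        include 'templates/notifications.php';
        break;
    default:
        include 'templates/index.php';
}
```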
But then doing that, the user has to click a lot more links and load a lot more pages, and they're gonna have to refresh every time they want notifications. maybe that's a better way to do things, but no one wants that, so people will opt back into javascript anyway
okay so figured out that cookie problem, it's always just one little line, one little typo that is the issue, probably not good practice but I will just save a cookie forever
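"forever" here just means pushing the expiry absurdly far out; the cookie name and value are placeholders, and the options-array form of setcookie needs php 7.3+:

```php
<?php
// "save a cookie forever": set the expiry ten years into the future
$sessionId = bin2hex(random_bytes(16));
setcookie('board_session', $sessionId, [
    'expires'  => time() + 60 * 60 * 24 * 365 * 10,
    'path'     => '/',
    'httponly' => true,
]);
```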
( pseudocode, not screen readable ) the trick is handling sessions for non-logged-in ( anonymous ) users, since my board allows anonymous posting. what you can do is 'poison' a session by banning the cookie OR adding something to the session variables like $_SESSION['userType'] = 'banned'
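here's that poisoning idea as actual php rather than pseudocode; the 'userType' key is from the post above, the 403 response is just one way to react to it:

```php
<?php
session_start();

// 'poison' an anonymous user's session instead of deleting it:
// flag it as banned, then check the flag before accepting a post
$_SESSION['userType'] = 'banned';

// later, in the posting handler
if (($_SESSION['userType'] ?? '') === 'banned') {
    http_response_code(403);
    exit('posting is disabled for this session');
}
```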
banning the IP of course works too, but both of these are trivial to get around, so what i'm gonna do is implement a 'hashcash' system: you go to /post_office and request the minting of a stamp, we create a jpeg and the user gives us back the hash value, that counts as $_SESSION['stamp'] = id_of_stamp, and the user can browse or post as long as the stamp is valid
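a rough sketch of the session side of that; it skips the jpeg / hash exchange entirely and assumes a flat stamps.json file as the table of valid stamps, which is purely for illustration:

```php
<?php
session_start();

// table of currently valid stamps, kept in a flat json file for this sketch
function loadStamps(): array
{
    return is_file('stamps.json')
        ? (json_decode(file_get_contents('stamps.json'), true) ?: [])
        : [];
}

// /post_office: mint a stamp, record it, and attach it to the session
$stampId = bin2hex(random_bytes(8));
$stamps  = loadStamps();
$stamps[$stampId] = time() + 60 * 60 * 24;   // stamp is valid for a day
file_put_contents('stamps.json', json_encode($stamps));
$_SESSION['stamp'] = $stampId;

// later, before serving a page or accepting a post:
// the stamp must exist and not be expired
$stamps = loadStamps();
$stamp  = $_SESSION['stamp'] ?? null;
if ($stamp === null || !isset($stamps[$stamp]) || $stamps[$stamp] <= time()) {
    http_response_code(429);
    exit('go get a stamp at /post_office first');
}
```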