teaspoon is a program I put together for my final project for Collective Action School (prev. Logic School).
It works much like the social media we're all so familiar with: you can post, and you can view a feed. However, there are a few critical differences with teaspoon.
Here's a little demo!
The actual site for teaspoon is inaccessible to anyone not authenticated to my private network, hence the video demo.
The private network for teaspoon was set up using tailscale! I created a tailnet with two users: myself and a shared email account for the rest of my family, to spare them from having to manage another account.
Prior to using tailscale, I experimented with a bunch of different methods to grant access to the teaspoon server from other devices. I tried a combination of manual input from different users plus NAT hole punching, but it turns out this is pretty hard to do. tailscale also takes care of loads more than routing, handling dynamic IPs, authentication, and end-to-end encryption for free (for up to three users).
After I downloaded teaspoon onto my dad's phone, he asked me, "so, tell me, how is this different from uh ... iMessage?" I'm still working on an answer for that.
Simply? Well, because I don't think we should sacrifice control of our data and our privacy for convenience, branding, and ignorance of what we really give up when we do so. Maybe this is a start.
I worked on this project largely as a stepping stone and learning experience in how we can make locally hosted, small-scale, community software projects more accessible and easier to set up. I mean this both in the sense of taking back data ownership from big corporations and in the sense of imagining what different software might exist in the world if the people who use it had a stake and a say in its development.
Because I was low-key cramming to finish this up and my front-end skills are lacking, I ended up with a simple post-plus-feed as the final design. I would love to revisit this in time, and I hate that this was the path of least resistance for my mind.
Throughout the brainstorming, experimenting, and coding for this project, there were a few themes I kept coming back to.
One of these was the idea of threat models vs. trust models. In traditional software systems, threat modeling is the process by which companies assess security risks and structural vulnerabilities. I think one of the biggest pain points of self-hosting today is understanding and managing these risks -- keeping security keys and secrets private and ensuring that your server is not accessible to malicious actors. Fortunately for this project, all of that (besides a few pre-emptive ACL rules) was managed by tailscale. The existence of this secured private network, essentially for free, was the reason I felt safe hosting teaspoon on my computer. This got me thinking about trust models.
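For the curious, the pre-emptive ACL rules I mentioned looked something along these lines. This is a sketch rather than my exact policy, and the `tag:teaspoon-server` name is made up; tailscale policies are written in HuJSON, so comments are allowed:

```json
{
  // Any member of the tailnet may reach the teaspoon server, but only on its web port.
  "acls": [
    {"action": "accept", "src": ["autogroup:member"], "dst": ["tag:teaspoon-server:80"]}
  ],
  // Only the tailnet admin (me) can apply the server tag to a device.
  "tagOwners": {"tag:teaspoon-server": ["autogroup:admin"]}
}
```

The nice part is that everything outside these rules is denied by default, so the server is invisible even to a device that somehow joins the tailnet under the wrong tag.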
When the security problem of self-hosted software falls away (assume only properly authenticated and authorized people have access to my program), what's left in terms of risk are the people already in my network, and the security of their devices. Ignoring the latter, this poses a predicament wholly different from the one a "software" "company" deals with on a regular basis. Namely: do I trust that the people I've privileged to use the software I'm running will use it with some level of discretion -- a level that's only vaguely etched somewhere in my mind? The biggest "threat" to my software is probably someone I've trusted to use it. Reasoning about the software from this perspective reinforces the interdependence of the people using it: maintaining it falls on everyone, rather than on trusting unknown people and hoping for the best. (It would be impossible to come up with a threat model for my personal data that's already been spilled out into the world's data servers.)
I want to emphasize again that this type of speculative reasoning was really only possible because tailscale took care of so many of my needs for free.
This relates to the other idea that's been playing on my mind: the intentional misuse of existing software and APIs. Recently, I read a book about, as its subtitle states, "How Fangirls Created the Internet as We Know It," mostly on platforms such as Twitter and Tumblr. These platforms shaped fandoms, and fandoms in turn shaped the platforms and even larger internet culture, largely by misusing them to get what they wanted, e.g. trending takeovers to get noticed. Back in June, right around the time Logic School was ending, I hosted a birthday party. Along the same lines of thinking as this project, I wondered -- how hard can it really be to make my own p2p version of partiful? Turns out: kinda hard.
Anyways, I ended up deeply misusing the Figma API, treating a Figma file as a database and file comments as rows, leaking an API key every time I sent out the invite. The vestiges of this still exist on my website, for memories. (This little invite was built on the trust model that no one I shared a link with would share it with someone ~suspicious~.)
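The core trick can be sketched in a few lines of Python. This is a reconstruction, not the original invite code: the token and file key below are placeholders, and encoding each "row" as JSON inside a comment message is just one way to do it. The endpoints are Figma's real comments API (`GET`/`POST /v1/files/:file_key/comments` with an `X-Figma-Token` header).

```python
import json
import urllib.request

FIGMA_TOKEN = "figd_placeholder_token"  # hypothetical personal access token (the one I leaked!)
FILE_KEY = "abc123"                     # hypothetical Figma file key
API = f"https://api.figma.com/v1/files/{FILE_KEY}/comments"


def encode_row(row: dict) -> str:
    """Serialize a 'database row' as a JSON string to stuff into a comment message."""
    return json.dumps(row)


def decode_row(message: str) -> dict:
    """Recover a row dict from a comment message."""
    return json.loads(message)


def post_row(row: dict) -> None:
    """Insert a row by posting a comment on the Figma file."""
    body = json.dumps({"message": encode_row(row)}).encode()
    req = urllib.request.Request(
        API,
        data=body,
        method="POST",
        headers={"X-Figma-Token": FIGMA_TOKEN, "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)


def fetch_rows() -> list[dict]:
    """Read every row back by listing the file's comments."""
    req = urllib.request.Request(API, headers={"X-Figma-Token": FIGMA_TOKEN})
    with urllib.request.urlopen(req) as resp:
        comments = json.load(resp)["comments"]
    return [decode_row(c["message"]) for c in comments]
```

So an RSVP becomes `post_row({"guest": "ana", "rsvp": "yes"})`, and the party page renders `fetch_rows()`. Since anyone with the link held the token, every guest could both read and write -- which was exactly the trust model.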
This concept isn't new, but I found that there's something subversively empowering about using platforms this way. As if I could use them to my benefit rather than being used by them, with full knowledge that I'm exactly the type of "low-quality user" (someone who uses a service but doesn't pay for it or generate much revenue) that they despise. How can we intentionally misuse the existing software out there to build new systems, free from their control? Is this even possible?
First of all, I want to give a huge thank you to the Collective Action School (prev. Logic School) program! Specifically, I want to thank the teaching assistants, guest lecturers, and classmates for making the learning experience so fruitful. Yearbook link incoming!