
title: CommunityWiki: Smol Net
url: https://communitywiki.org/wiki/SmolNet
hash_url: cd91056571

The “smol” net is the “small” net. It’s small because it is built for friends and friends of friends. It doesn’t have to scale to millions of people because those millions should build their own local small nets.

That is:

  • servers are small, with fewer resources (cores, RAM, disk space)
  • communities are small (sometimes the number of accounts is limited arbitrarily)

In other words, there is a deliberate plan to limit growth. Since scale is not a goal, the technology stack can be simpler and easier to understand. The smol net prefers simple systems.

Smolnet communities

  • Pubnix systems and the family of Tilde servers offer all sorts of hosting and shell access
  • The Circumlunar spaces offer Gemini hosting, shell access
  • Local, non-federating chat servers are simple to set up and run for a community

What is the smol net?

@Shufei said:

There’s a nascent movement in the offing, I think. Retrotech started it, hacker culture and tech, natch, but it’s incorporating new stuff like Gemini. Textnet, “slow internet”, I call it smolnet. People are tired of the corporate behemoths and cacophony. Demimondes like SDF provide a respite from all that, and a forum for some resistance and development apart from the bloat and blather. There’s a demotic tinge to it all in the hacker tradition, but without necessitating 1337ness.

Talon’s post Small Web said:

Unfortunately, capitalism has been working ever diligently in the opposite direction towards hooking people into unhealthy computing practices. It can feel hopeless hearing my loved ones actively complain about how Facebook and Twitter make them feel bad yet continue checking their timeline throughout the day. … By reconsidering the utility of time-tested protocols and hobbling together a few new ones, a growing community of people are leaving the proprietary world of flashy social-media websites to slow down and enjoy life accented by computers, not controlled by them. … On the Small Web, communities host themselves which means cross-domain browsing is very much encouraged and an important feature of the network at large. Real people, not corporations, host the Small Web.

Applying the concept of smol net

The emphasis on the small net, or on “smol” in general, is surprisingly fruitful.

Here’s an example of how to use “smol” in reasoning about web crawlers.

Do you ever think about web crawlers? The Googlebots and Bingbots of this world? They don’t know whether a site is active or not. And since they have tons of money and work at scale, they just crawl the web permanently. They don’t mind the CO₂ they produce. Maybe they think their energy is green. They don’t mind the CO₂ your site produces as it serves their useless crawling, since that CO₂ is on you, not on them. They operate at scale, so they can’t ask you for your permission. It would take forever to ask everybody for permission, they say. It doesn’t scale, they say! Ah, but that’s only important to them because when it scales, they can make money. You don’t care if it scales! They also can’t be arsed to learn how your site works. Perhaps they should simply subscribe their machines to your feed. That’s where all the new stuff is, after all. But no, if it’s all the same to you, they don’t believe in feeds and they’d rather just download your site all over and follow all the links.

Perhaps, if they’re nice and did their job, they’ll observe the [[Robots_exclusion_standard?]]. If you were an obedient citizen of the world, you would have written the robots.txt file they require, and you would have organised the URLs of your site such that the exclusion rules actually work. This might not be obvious, but you can only exclude URL path prefixes, so it is impossible to exclude URLs that end in /history, for example. You can only exclude URLs where the path starts with /history. And then the cancer grows. Bots can have their dedicated sections in the robots.txt file. They are expected to self-identify via the user agent header on the web. Now you can write more complicated rules on your server, taking the user agent into account.

And slowly but surely you have been pulled into the arms race, have made your technology stack more complicated, have spent time of your short life dealing with problems they are causing, because they are making money, whereas your website will crash and burn as it runs out of memory and CPU. They will say: why didn’t you install a caching proxy? Why don’t you rate limit access? Did you double check your setup? And more time is spent on the corpocaca web of scaling shit. Until you emerge on the other end, proud of what you have done not because it was easy but because it was hard. Now you can offer the same kind of condescending advice to newbies! Sweet, sweet knowledge is power. Of course, it was useless and unwanted knowledge. You could have played a musical instrument or painted some flowers, but no, you had to fix your website because you were thinking small and they are operating at scale. This is why you can never talk to a human on their side, and if you do, they can’t fix it for you. It wouldn’t scale. And if it doesn’t scale, it doesn’t pay. So you go with the flow and waste away your life.
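
To make the prefix limitation concrete, here is a minimal sketch using Python’s standard urllib.robotparser. The bot name and URLs are made up; the point is only that plain Disallow rules match URL path prefixes and nothing else.

```python
from urllib.robotparser import RobotFileParser

# A made-up robots.txt: under plain robots exclusion rules,
# Disallow entries match URL *path prefixes* only.
robots_txt = """\
User-agent: *
Disallow: /history
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Blocked: the path starts with /history.
print(parser.can_fetch("SomeBot", "https://example.org/history/SomePage"))
# -> False

# Not blocked: the path merely *ends* in /history; a plain prefix
# rule cannot express "anything ending in /history".
print(parser.can_fetch("SomeBot", "https://example.org/wiki/SomePage/history"))
# -> True
```

Some crawlers do understand wildcard rules such as Disallow: /*/history, but that is an extension; the plain prefix matching shown here is what the original convention (and Python’s stdlib parser) gives you, and catering to each crawler’s dialect is exactly the per-bot special-casing described above.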

Or you could decide to do your best and leave it. Join the small web instead. Discover other “smol” tech outside the web.

Failing at being smol

Let’s use a different example: Wikipedia. Since wikis are still non-federating, one would think that they don’t really scale. And indeed, Google, Facebook, Twitter, Amazon: none of them runs one big wiki to rule them all. But in a way, Wikipedia does run exactly that. How?

MediaWiki (the wiki engine Wikipedia uses) has some features that were new at the time. Sure, technically, Wikipedia is surrounded by caching servers and all of that. But more fundamentally, from a social perspective:

There is no useful, unified list of recent changes. The torrent of changes is simply too big for individuals to absorb. Instead, people mostly just watch the changes to the pages they have subscribed to. Thus, no community can build itself around editing the site as a whole. There are no RecentChangesJunkies for the entire site.
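
A small sketch against the public MediaWiki API of English Wikipedia makes the size of that torrent concrete. The endpoint and parameters are the documented ones; the User-Agent string is just a placeholder, and the measured rate depends on when you run it.

```python
import json
import urllib.request
from datetime import datetime

# Sample the 500 most recent changes from English Wikipedia (newest first).
url = (
    "https://en.wikipedia.org/w/api.php"
    "?action=query&list=recentchanges&rcprop=timestamp&rclimit=500&format=json"
)
req = urllib.request.Request(url, headers={"User-Agent": "smolnet-example/0.1"})
with urllib.request.urlopen(req) as response:
    data = json.load(response)

changes = data["query"]["recentchanges"]
newest = datetime.fromisoformat(changes[0]["timestamp"].replace("Z", "+00:00"))
oldest = datetime.fromisoformat(changes[-1]["timestamp"].replace("Z", "+00:00"))
minutes = (newest - oldest).total_seconds() / 60 or 1

print(f"{len(changes)} changes in {minutes:.1f} minutes "
      f"(~{len(changes) / minutes:.0f} edits per minute)")
```

The result is typically on the order of a hundred edits per minute, far beyond what any single RecentChangesJunkie could follow.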

If communities form at all, they form around subsets of pages. Pages belong to categories, and some editors feel a particular affinity with just a few categories. The “Project Gemini” page, for example, belongs to:

  • Project Gemini
  • Space program of the United States
  • History of the United States (1945–1964)
  • History of the United States (1964–1980)
  • 1962 establishments in the United States
  • 1966 disestablishments in the United States
  • Engineering projects
  • Human spaceflight programs
  • Crewed spacecraft
  • Military projects of the United States

Quite disparate fields of interest!
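
The category memberships above can be read straight off the live site. Here is a similar sketch against the same API (again with a placeholder User-Agent), asking which non-hidden categories the “Project Gemini” page currently belongs to; the set will drift over time as editors re-file the page.

```python
import json
import urllib.request

# Ask English Wikipedia which categories the "Project Gemini" page belongs to.
url = (
    "https://en.wikipedia.org/w/api.php"
    "?action=query&prop=categories&titles=Project%20Gemini"
    "&clshow=!hidden&cllimit=max&format=json"
)
req = urllib.request.Request(url, headers={"User-Agent": "smolnet-example/0.1"})
with urllib.request.urlopen(req) as response:
    data = json.load(response)

for page in data["query"]["pages"].values():
    for category in page.get("categories", []):
        print(category["title"].removeprefix("Category:"))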

See Also

« … we need to adopt a broader view of what it will take to fix the brokenness of the social web. That will require challenging the logic of today’s platforms — and first and foremost challenging the very concept of megascale as a way that humans gather. If megascale is what gives Facebook its power, and what makes it dangerous, collective action against the web as it is today is necessary for change.» – in Facebook is a Doomsday Machine by Adrienne LaFrance, for The Atlantic, via Slashdot

«We’ve been killing conversations about software with “That won’t scale” for so long we’ve forgotten that scaling problems aren’t inherently fatal. … Situated software isn’t a technological strategy so much as an attitude about closeness of fit between software and its group of users, and a refusal to embrace scale, generality or completeness as unqualified virtues.» – Situated Software, by ClayShirky, on the InternetArchive


CategoryPhilosophy