Luis Quintanilla Avatar Image

Hi, I'm Luis 👋

Latest updates from across the site

snippets

Deploy Owncast to Azure Container Apps with Persistent Storage

This guide shows how to deploy Owncast to Azure Container Apps with persistent storage and scale-to-zero capability to minimize costs.

Prerequisites

  • Azure CLI installed and logged in
  • An Azure subscription
  • A resource group created
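
If you still need the resource group, creating one is a single command (the name and region below are placeholders; use whatever matches the variables in the next step):

# Create a resource group to hold all of the Owncast resources
az group create \
  --name "your-resource-group" \
  --location "centralus"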

Create Storage Account for Persistent Data

# Set variables (choose cheapest regions)
RESOURCE_GROUP="your-resource-group"
LOCATION="centralus"  # Often cheaper than eastus
STORAGE_ACCOUNT="owncaststorage$(date +%s)"  # Must be globally unique
CONTAINER_APP_ENV="owncast-env"
CONTAINER_APP_NAME="owncast-app"

az storage account create \
  --name $STORAGE_ACCOUNT \
  --resource-group $RESOURCE_GROUP \
  --location $LOCATION \
  --sku Standard_LRS \
  --kind StorageV2 \
  --access-tier Cool \
  --allow-blob-public-access false \
  --https-only true \
  --min-tls-version TLS1_2

az storage share create \
  --name "owncast-data" \
  --account-name $STORAGE_ACCOUNT \
  --quota 1  # Start with a 1GB quota; increase it later if needed

Create Container Apps Environment

# Create the Container Apps environment
az containerapp env create \
  --name $CONTAINER_APP_ENV \
  --resource-group $RESOURCE_GROUP \
  --location $LOCATION

Get the storage account key and create the storage mount:

# Get storage account key
STORAGE_KEY=$(az storage account keys list \
  --account-name $STORAGE_ACCOUNT \
  --resource-group $RESOURCE_GROUP \
  --query "[0].value" -o tsv)

az containerapp env storage set \
  --name $CONTAINER_APP_ENV \
  --resource-group $RESOURCE_GROUP \
  --storage-name "owncast-storage" \
  --azure-file-account-name $STORAGE_ACCOUNT \
  --azure-file-account-key $STORAGE_KEY \
  --azure-file-share-name "owncast-data" \
  --access-mode ReadWrite

Create the container app with persistent storage:

az containerapp create \
  --name $CONTAINER_APP_NAME \
  --resource-group $RESOURCE_GROUP \
  --environment $CONTAINER_APP_ENV \
  --image "owncast/owncast:latest" \
  --target-port 8080 \
  --ingress external \
  --min-replicas 0 \
  --max-replicas 1 \
  --cpu 0.5 \
  --memory 1Gi \
  --volume-mount "data:/app/data" \
  --volume-name "data" \
  --volume-storage-name "owncast-storage" \
  --volume-storage-type AzureFile

For Owncast to work properly, you need both HTTP (8080) and RTMP (1935) ports. This requires a Virtual Network (VNet) integration:

Create VNet and Subnet

# Create MINIMAL virtual network (smallest possible address space)
az network vnet create \
  --name "owncast-vnet" \
  --resource-group $RESOURCE_GROUP \
  --location $LOCATION \
  --address-prefix "10.0.0.0/24"  # Smaller than default /16

az network vnet subnet create \
  --name "container-apps-subnet" \
  --resource-group $RESOURCE_GROUP \
  --vnet-name "owncast-vnet" \
  --address-prefix "10.0.0.0/27"  # Only 32 IPs instead of /23 (512 IPs)

Recreate Container Apps Environment with VNet

# Get subnet ID
SUBNET_ID=$(az network vnet subnet show \
  --name "container-apps-subnet" \
  --vnet-name "owncast-vnet" \
  --resource-group $RESOURCE_GROUP \
  --query id -o tsv)

az containerapp env delete \
  --name $CONTAINER_APP_ENV \
  --resource-group $RESOURCE_GROUP \
  --yes

az containerapp env create \
  --name $CONTAINER_APP_ENV \
  --resource-group $RESOURCE_GROUP \
  --location $LOCATION \
  --infrastructure-subnet-resource-id $SUBNET_ID \
  --enable-workload-profiles false  # Forces consumption-only pricing

Deploy Container App with MINIMAL Resources

az containerapp create \
  --name $CONTAINER_APP_NAME \
  --resource-group $RESOURCE_GROUP \
  --environment $CONTAINER_APP_ENV \
  --image "owncast/owncast:latest" \
  --target-port 8080 \
  --exposed-port 1935 \
  --ingress external \
  --transport auto \
  --min-replicas 0 \
  --max-replicas 1 \
  --cpu 0.25 \
  --memory 0.5Gi \
  --volume-mount "data:/app/data" \
  --volume-name "data" \
  --volume-storage-name "owncast-storage" \
  --volume-storage-type AzureFile

Cost-Optimized Resource Allocation:

  • CPU: 0.25 cores (minimum allowed, sufficient for small streams)
  • Memory: 0.5Gi (minimum allowed, will work for basic streaming)
  • Scaling: Aggressive scale-to-zero with max 1 replica

For maximum cost optimization, use this YAML approach with the smallest possible resource allocation:

# owncast-minimal-cost.yaml
properties:
  managedEnvironmentId: /subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.App/managedEnvironments/{environment-name}
  configuration:
    ingress:
      external: true
      targetPort: 8080
      additionalPortMappings:
      - external: true
        targetPort: 1935
        exposedPort: 1935
    secrets: []
  template:
    containers:
    - image: owncast/owncast:latest
      name: owncast
      resources:
        cpu: 0.25
        memory: 0.5Gi
      volumeMounts:
      - mountPath: /app/data
        volumeName: data
      env:
      - name: OWNCAST_RTMP_PORT
        value: "1935"
      - name: OWNCAST_WEBSERVER_PORT  
        value: "8080"
    scale:
      minReplicas: 0
      maxReplicas: 1
      rules:
      - name: "http-rule"
        http:
          metadata:
            concurrentRequests: "10"  # Scale up quickly but keep minimal
    volumes:
    - name: data
      storageType: AzureFile
      storageName: owncast-storage

Deploy with:

az containerapp create \
  --name $CONTAINER_APP_NAME \
  --resource-group $RESOURCE_GROUP \
  --yaml owncast-minimal-cost.yaml

Scale-to-Zero Configuration

  • Min Replicas: Set to 0 to completely scale down when not in use
  • Max Replicas: Set to 1 (Owncast doesn't need horizontal scaling)
  • Scale Rules: Container Apps will automatically scale up when requests arrive
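
To confirm these settings after deployment, you can query the app's scale block with the same CLI (a quick sanity check; the output shows minReplicas, maxReplicas, and any rules):

az containerapp show \
  --name $CONTAINER_APP_NAME \
  --resource-group $RESOURCE_GROUP \
  --query properties.template.scale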

Resource Limits (Ultra Cost-Optimized)

  • CPU: 0.25 cores (absolute minimum, sufficient for 1-2 viewer streams)
  • Memory: 0.5Gi (minimum allowed by Azure Container Apps)
  • Storage: Cool tier with 1GB initial quota (auto-scales as needed)
  • Network: Minimal VNet addressing to reduce overhead
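
If you want to keep an eye on the file share as it grows, the quota can be inspected (and later raised) from the CLI; this sketch assumes the share, account, and key variables from the earlier steps:

az storage share show \
  --name "owncast-data" \
  --account-name $STORAGE_ACCOUNT \
  --account-key $STORAGE_KEY \
  --query properties.quota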

After deployment, configure OBS for streaming:

  1. Server Settings: Use rtmp://your-app-url:1935/live (note: rtmp:// not https://)

  2. Stream Key: Use the key from Owncast admin panel (Configuration > Server Setup > Stream Keys)

  3. Owncast Web Interface: Access at https://your-app-url (port 8080 is handled automatically by ingress)

  4. Persistent Data: All Owncast configuration, database, and uploaded files are stored in Azure Files and persist across container restarts and scale-to-zero events.

  5. Cold Start: When scaling from zero, there will be a brief cold start delay as the container initializes.

  6. VNet Requirement: For dual-port access (HTTP + RTMP), you must use a Virtual Network integration. This is a requirement for exposing additional TCP ports in Azure Container Apps.

  7. Security Configuration: After deployment, immediately change the default admin credentials:

    • Navigate to https://your-app-url/admin
    • Default login: admin / abc123
    • Go to Configuration > Server Setup and change the admin password
    • Create/copy stream keys from Configuration > Server Setup > Stream Keys tab
  8. Custom Domain: You can configure a custom domain using:

    az containerapp hostname add \
      --name $CONTAINER_APP_NAME \
      --resource-group $RESOURCE_GROUP \
      --hostname "your-domain.com"
    
  9. SSL Certificate: Azure Container Apps provides automatic SSL certificates for custom domains.

Check your deployment:

# Get the URL
az containerapp show \
  --name $CONTAINER_APP_NAME \
  --resource-group $RESOURCE_GROUP \
  --query properties.configuration.ingress.fqdn

az containerapp logs show \
  --name $CONTAINER_APP_NAME \
  --resource-group $RESOURCE_GROUP
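
If you want to sanity-check the RTMP ingest without opening OBS, you can push a synthetic test stream with ffmpeg. This is just a sketch; substitute your app's FQDN and a stream key from the Owncast admin panel:

# Push a test pattern with a tone to the Owncast RTMP ingest (Ctrl+C to stop)
ffmpeg -re \
  -f lavfi -i "testsrc=size=1280x720:rate=30" \
  -f lavfi -i "sine=frequency=440" \
  -c:v libx264 -preset veryfast -c:a aac -f flv \
  "rtmp://your-app-url:1935/live/YOUR-STREAM-KEY"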

With these optimizations, your monthly costs should be:

When Streaming (4 hours/month example):

  • Compute: ~$0.50/month (0.25 CPU + 0.5Gi RAM × 4 hours)
  • Container Apps Environment: ~$0.00 (consumption plan, no dedicated resources)
  • Networking: ~$0.05/month (minimal VNet overhead)

When Idle (Scale-to-Zero):

  • Compute: $0.00 (scaled to zero)
  • Environment: $0.00 (consumption plan)

Always-On Costs:

  • Storage: ~$0.05-0.10/month (1-2GB in Cool tier)
  • VNet: ~$0.00 (no gateways or dedicated resources)

Total Monthly Cost: ~$0.60-0.65/month (assuming 4 hours of streaming)

Performance Expectations at Minimal Resources:

  • 0.25 CPU + 0.5Gi RAM: Suitable for 480p-720p streams with 1-5 concurrent viewers
  • Scale-up Path: Monitor performance and increase to 0.5 CPU + 1Gi if needed
  • Cold Start: ~10-15 seconds when scaling from zero (acceptable for personal streaming)
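
If streams stutter at the minimum allocation, the scale-up path is a single in-place update (the values below are just the suggested next step, not a requirement):

# Bump the app to 0.5 vCPU and 1Gi of memory
az containerapp update \
  --name $CONTAINER_APP_NAME \
  --resource-group $RESOURCE_GROUP \
  --cpu 0.5 \
  --memory 1.0Gi
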
Blog Post

Mobile-First Static Site Publishing: Discord Bot Pipeline via Azure and GitHub

Ever since I published my first note (microblog post) on my website, I've always wanted a way to quickly publish while on the go. Unfortunately, I never found a good solution.

Because my website is statically generated and the source is hosted on GitHub (check out the colophon for more details), there is no backend for me to talk to. At the same time, I didn't want to build an entire backend to support my website because I want to keep things as lean and cost-efficient as possible.

Since my posts are just frontmatter and Markdown, I use VS Code as my editor. For some time, back when I used to have a Surface Duo, I authored posts from mobile using the github.dev experience. On two screens, while not ideal, it was manageable. After switching devices (because sadly there were no more security updates on the Surface Duo) and upgrading to a dumbphone and later a single screen smartphone, that workflow wasn't feasible.

At that point, what I resorted to was sending messages to myself via Element. The message would contain a link I wanted to check out later. Once I was on my laptop, I would check out the link and if I wanted to post about it on my website, I'd do so then.

That process, while it worked, wasn't necessarily scalable. In part that's a feature because I could spend more time digesting the content and writing a thoughtful article. However, it stopped me from sharing more in the moment and there were posts that were never authored or bookmarks that weren't captured because eventually that link got lost in the river of other links.

Basically what I wanted to replicate was the more instant posting that social media gives you, but do so on my own site.

That led me to doing some thinking and requirement gathering around the type of experience I wanted to have.

When it came to requirements for my solution, I was focused more on the workflow and experience rather than on technical details.

Here is a list of those solution requirements:

  • Mobile is the primary publishing interface. Desktop publishing is a nice to have.
  • Be as low-friction as sharing a link via Element or posting on social media
  • Doesn't require implementing my own client or frontend
  • Doesn't require me to use existing micropub clients
  • Handles short-form and media posts supported by my website
    • Notes
    • Responses
      • Repost
      • Reply
      • Like
    • Bookmark
    • Media
      • Image
      • Audio (not used as often but technically supported)
      • Video (not used as often but technically supported)
  • Low-cost

For years, I had been struggling to actually implement this system. The main thing that gave me pause was figuring out how to avoid implementing my own client or relying on existing micropub clients.

Eventually, I just accepted that it might never happen.

One day, it hit me: if the notes-to-self Element workflow worked so well, why not use a chat client as the frontend for publishing? At the very least, it could serve as the capture system that formats the content into a post and queues it for publishing on GitHub. I'd seen Benji do something similar with his micropub endpoint.

While I could've used Element since that's my preferred platform, I've been contemplating no longer hosting my own Matrix server. So if I went through with this, I wanted a client I wouldn't feel bad about having invested time in if it went away.

That left Discord as the next best option, primarily because of its support for bots and its cross-platform availability on mobile and desktop.

In the end, the solution turned out to be fairly straightforward.

More importantly, with the help of AI, I wrote none of the code.

Using Copilot and Claude Sonnet 4, I was able to go from idea to deployment in 1-2 days. At that point the solution supported all of the post types except media, since I hadn't yet figured out the best way to upload media through Discord. Figuring that out, implementing it, and deploying it took another day or two.

Since I wanted my solution to be as low-cost as possible, serverless seemed like a good option. I only pay for compute when it's actually being used, which in my case is infrequent. I don't need the server running 24/7, or even to be powerful. However, I didn't want to write my system as an Azure Function; I wanted the flexibility of deploying to a shared VM or a container. A VM wasn't a fit, though, since it would be running 24/7. Keeping all of that in mind, my choice narrowed down to Azure Container Apps, which gave me the characteristics I was looking for: serverless containers.

Once that decision was made, I used Copilot again to figure out how to optimize my container image so that it's space- and resource-efficient, and, while I was at it, to figure out the right incantations to get the container deployed to Azure Container Apps.

All in all, the solution had been staring me in the face. I already had a workflow that mostly worked for me; it just needed some optimizations, and with the help of AI, I was able to quickly build and deploy something I'd been ruminating over for years.

The workflow for publishing is as follows:

  1. Invoke the bot in Discord to capture my input using slash command /post and the respective post type.

    Using slash commands to invoke discord publishing bot

  2. Provide post details. For media posts, I can provide an attachment which gets uploaded to Azure Blob Storage.

    A modal in discord with note post fields filled in

  3. Bot creates a branch and PR in my repo with the post content

  4. While logged into GitHub from my phone, if everything looks good, I merge the PR which kicks off my GitHub Actions workflow to build and publish the site including the new post.

  5. Post displays on my website.
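
As a rough sketch of what step 3 looks like on the GitHub side, here is roughly what the bot automates with git and the GitHub CLI. The paths, frontmatter fields, and branch naming below are illustrative assumptions, not the bot's actual (AI-generated) code:

# Hypothetical sketch: turn captured input into a branch + PR for review
SLUG="note-$(date +%Y%m%d%H%M%S)"
git checkout -b "post/$SLUG"
mkdir -p posts/notes
cat > "posts/notes/$SLUG.md" <<EOF
---
post_type: note
date: $(date -u +%Y-%m-%dT%H:%M:%SZ)
---
Content captured from the Discord modal goes here.
EOF
git add "posts/notes/$SLUG.md"
git commit -m "Add note $SLUG"
git push -u origin "post/$SLUG"
gh pr create --title "New note: $SLUG" --body "Created by the publishing bot"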

The solution is not perfect.

One of the problems I've run into is cold start. Since I scale my solution down to zero when it's not being used to save on costs, the first time I invoke the bot, it fails. I have to give it a few seconds and retry the invocation. It's usually about 5 seconds, so it's not a huge issue, but it does add some friction.

Overall I'm happy with my solution but there are a few improvements I'd like to make.

  • Open-source the repo - Currently I've kept the repo private since it was all AI generated. Since my system is already in production and processes were documented, I need to do a more thorough pass to make sure that no secrets or credentials are checked in or documented anywhere.
  • Improve UX - Discord limits modal fields to 5. Therefore, I'm playing around with the right balance between how much of the input should come from slash commands and how much should come from the modal.
  • Expand supported post types - I'd like to expand the number of posts supported by my publishing client. Reviews are a good example of the type of post I'd like to support as well as RSVPs. Reviews I already support on my website but RSVPs I don't yet. Also, I'd have to fix my Webmentions which are currently broken after upgrading my website.
  • Make it generator agnostic - Currently this only works for my website. With a few tweaks and refactoring, I think I can get the project to a place where it should work with other popular static site generators.
  • One-click deployment - Currently the solution is packaged up as a container so it can be deployed from anywhere. I want to make it even simpler to deploy. One click if possible.

Response

Engineering for Slow Internet

Does this webapp really need to be 20 MB? What all is being loaded that could be deferred until it is needed, or included in an “optional” add-on bundle? Is there a possibility of a “lite” version, for bandwidth-constrained users?

While Antarctica is an edge case, this article illustrates some of the motivations behind my text-first website.

By trimming the excess not only do you get to the core of the app or website, but it also loads faster.

Response

Stop saving everything

If your read-it-later list isn’t getting cleared weekly, perhaps it’s time to delete the lot.

Save from a mindset of abundance, rather than scarcity, and process the things you’ve saved each week (or month, at the most). If you are worried that something you deleted truly would have changed your life, just stop.

STOP.

You can’t read it all, do it all, be it all. Trust that those potentially life-changing ideas will come around again, when you are ready for them

Good reminder. Lately I've been sending a lot of notes to myself with stuff to read but just haven't had the time to get to it. That said, the act of sending myself those notes is low-friction enough that I don't feel FOMO when I don't get to read the articles or consume the media.

Bookmark

URL Town

Love to see projects like this.

url.town doesn’t have any overly lofty ambitions; we’re just building our own directory of really nice websites. We’re not trying to fully recreate the original Yahoo! or DMOZ directories. We’re not aiming for some astronomical number of links. This is just one space on the web, tied to a community that loves to share neat things with one another. Quality matters much more than quantity. There’s no need to share everything just for the shake of sharing it; it’s much better to share things that are useful or interesting.

Bookmark

Claude Code Emacs Integration

Claude Code IDE for Emacs provides native integration with Claude Code CLI through the Model Context Protocol (MCP). Unlike simple terminal wrappers, this package creates a bidirectional bridge between Claude and Emacs, enabling Claude to understand and leverage Emacs’ powerful features—from LSP and project management to custom Elisp functions. This transforms Claude into a true Emacs-aware AI assistant that works within your existing workflow and can interact with your entire Emacs ecosystem.

Note

Hello world from the new site

Posting from my brand new redesigned website.

I was working on it for about a month so I plan on doing a longer writeup on what has changed.

There's still a few things that are broken, but for the most part, I'm happy with the progress and the changes that need to be made are incremental.

There's a ton of cleanup as well, but again, that's not a blocker to publishing the site.

Blog Post

IndieWeb Create Day - July 2025

Since it was a holiday weekend in the U.S. that kind of snuck up on me, I found myself with nothing planned on a Saturday. So I chose to spend it creating stuff for my website with the IndieWeb community during IndieWeb Create Day.

Over the last few months I've been overthinking my website redesign, and while I've made several attempts at it, I've never been satisfied with the outcome. I end up throwing away all the progress I've made and going back to the drawing board.

Yesterday, I decided not to let perfect be the enemy of good, and the approach I took was to build a simpler piece of functionality outside of my website. How I integrate it into my website is a future-me problem. But I wanted to work from a place of creativity and complete freedom, thinking about what could be rather than what is.

With that in mind, I set out to sketch out how I want to create and render media (image, audio, video) posts. The approach I took used a combination of front-matter YAML and custom markdown media extensions. The front-matter YAML is something that I already use for my website and it's something that I want to continue using. However, in contrast to my current website, I like that the front-matter was kept simple and only includes a basic amount of information. The actual post content was handled by my custom markdown extension which leveraged YAML-like syntax to define media content. What's great about this is that it is composable so once I got one type of media working, the rest for the most part "just worked". I could even mix different media types within the same post with no additional work or code changes required. Once I had the skeleton, it was all about refactoring, documentation, adding finishing touches, and vibe-coding some CSS which Claude did a relatively good job with given the aesthetic I was going for.

Overall, I'm happy with the end result.

A screenshot of a website post containing an image and audio player

For more details, you can check out the repo.

At some point, I want to be able to integrate these media posts into my static site generator, but for the time being, there are other kinds of posts, such as reviews and RSVPs, that I want to design and eventually also support on my website. I liked the approach I took this time around because it gave me the freedom to explore possibilities rather than constrain my creativity to what I've already built. So I think I'll keep doing the same for subsequent post types.

At the end of the day, it was nice seeing everyone else's projects. My favorite one was Cy's recipe website. I want to be like them when I grow up 🙂.

Note

Website Post Statistics - June 2025

I haven't published much this past month. Notes and responses are significantly down from last year.

That said, I'm happy that so far I've published seven long-form blog posts, which is how many I published all of last year.

I haven't spent as much time bookmarking and resharing content. Partially due to the fact that I'm still working on the redesign. So far vibe-specing the redesign hasn't yielded good enough results. Maybe I need to break down my problem further. That experience on its own might make a good blog post.

Part of the challenge with the redesign is that I'm trying to find a balance between standardizing my front-matter YAML schemas so that I can simplify my code and just have a single function handle the parsing of the front-matter, while at the same time enabling custom rendering depending on the post type. I think in the end what I'll end up doing is having different schemas per post type and maybe refactoring some of my code to remove redundancies.

Blog Post

FediForum Day One Recap

Just wrapped up a successful first day of FediForum.

The vibes and energy were high. Tons of great conversations and projects around the social web.

A few emerging themes I noticed:

  • Identity
  • Portability / Interoperability
  • Feeds
  • Commerce

Ian Forrester kicked us off with his Public Service & The Fediverse keynote (Slides).

One of the ideas that struck a chord was public service integrated into the fediverse. More specifically, what it sparked in me was the idea that publishing and social shouldn't be two separate things, following the POSSE principle from the IndieWeb: you publish on your own site and then it's syndicated elsewhere.

This was interesting enough that I even hosted a session on the topic; I think it was called Tightening the Loop between CMS and the Fediverse. It was my first unconference, so I appreciated the way the agenda was built. Announce your topic, see whether there's interest, put it on the agenda, chat with fellow participants. Super easy.

These are huge topics, but for the purposes of this post, I'm lumping them together.

https://bounce-migrate.appspot.com/ is one of the projects aiming to make portability easy. What's so interesting is they're making it easy to migrate across protocols. So if you're in one network like ATProto (Bluesky), migrating to the Fediverse should be relatively seamless with https://bounce.so.

Some great discussions that emerged on the topic as well include:

  • Reputation - How do you build a web of trust?
  • Compartmentalization and Deduplication - A single identity or multiple identities? When "following" someone, which of their feeds takes priority?

Talk of feeds was everywhere. Here's a note I made to myself during the conference:

It's amazing how big the feeds theme is. Feed ownership, customization, and sharing. All powered by open protocols.

  • Bonfire releases 1.0 - Congrats to the Bonfire team on this milestone. I haven't tried Bonfire myself, but the Circles feature caught my attention. It reminded me of Google+.
  • Surf.Social is now in beta - As an avid user and curator of RSS feeds, I'd heard about Surf before but hadn't really looked into it. The beta release was announced at the conference, and I was quickly able to sign up and download it. Kudos to the team on this milestone and thanks for being so responsive to my request to join the beta; I spent almost no time on the waitlist. Once I have a chance to try it out and get familiar with it, I'll share some thoughts.
  • Channels from the folks at Newsmast Foundation looks like an interesting way to curate and customize feeds. Bring Your Own Timeline Algorithm uses semantic search to give you the power of algorithmic feeds while keeping them under your control. Cool use of AI.

There were a few unconference sessions on the topic as well.

It was great to see folks talking about enabling creators to earn a living on open platforms and the social web.

I believe Bandwagon.fm showed off an implementation of a payments and subscription system built on top of Emissary, a social web toolkit.

Here's a list of other links and projects I was exposed to during the conference.

As always, Cory Doctorow was a great way to close out the first day. I even learned a new term, tron-pilled, which means that as the creator of a platform, you're on the side of the users.

Looking forward to tomorrow's sessions!

Blog Post

How do I keep up with AI?

This question comes up a lot in conversations. The short answer? I don’t. There’s just too much happening, too fast, for anyone to stay on top of everything.

While I enjoy sharing links and recommendations, I realized that a blog post might be more helpful. It gives folks a single place they can bookmark, share, and come back to on their own time, rather than having to dig through message threads where things inevitably get lost.

That said, here are some sources I use to try and stay informed:

  • Newsletters are great for curated content. They highlight the top stories and help filter through the noise.
  • Blogs are often the primary sources behind those newsletters. They go deeper and often cover a broader set of topics that might not make it into curated roundups.
  • Podcasts serve a similar role. In some cases, they provide curation like newsletters and deep dives like blogs in others. Best of all, you can tune in while on the go making it a hands-free activity.

For your convenience, if any of the sources (including podcasts) I list below have RSS feeds, I’ve included them in my AI Starter Pack, which you can download and import into your favorite RSS reader (as long as it supports OPML file imports).

If you have some sources to share, send me an e-mail. I'd love to keep adding to this list! If they have a feed I can subscribe to, even better.

I pride myself on being able to track down an RSS feed on just about any website, even if it’s buried or not immediately visible. Unfortunately, I haven't found a feed URL for either OpenAI or Anthropic which is annoying.

OpenAI and Anthropic, if you could do everyone a favor and drop a link, that would be great.

UPDATE: Thanks to @m2vh@mastodontech.de for sharing the OpenAI news feed.

I know I could use one of those web-page-to-RSS converters, but I'd much rather have an official link directly from the source.

Now that I’ve got you here...

Let’s talk about the best way to access all these feeds. My preferred and recommended approach is using a feed reader.

When subscribing to content on the open web, feed readers are your secret weapon.

RSS might seem like it’s dead (it’s not—yet). In fact, it’s the reason you often hear the phrase, “Wherever you get your podcasts.” But RSS goes beyond podcasts. It’s widely supported by blogs, newsletters, and even social platforms like the Fediverse (Mastodon, PeerTube, etc.) and BlueSky. It’s also how I’m able to compile my starter packs.

I've written more about RSS in Rediscovering the RSS Protocol, but the short version is this: when you build on open standards like RSS and OPML, you’re building on freedom. Freedom to use the tools that work best for you. Freedom to own your experience. And freedom to support a healthier, more independent web.

Bookmark

Pocket shutting down

I haven't used Pocket in a long time, but I'm sad to hear it's shutting down. It's great that they're offering the option of exporting your data, and that platforms like Micro.blog are making it easy to host that content on your own site (assuming you're using Micro.blog).

For bookmarking solutions, I've been using my website as well as messages-to-self on Element. What's on my website is the content that I really want to make sure I archive, whereas the messages to self I treat more as a read-it-later solution. Eventually, some of those make it to my website. I'm working on my mobile publishing flow to simplify my bookmarking process but overall, I'm happy with my current system.

This is also another great reminder why owning your content is important.

Blog Post

Vibe-Specing - From concepts to specification

Code generation is a common use case for AI. What about the design process that comes before implementation? Personally, I've found that AI excels not just at coding, but also helping formalize abstract ideas into concrete specifications. This post explores how I used AI-assisted design to transform a collection of loosely related concepts into a technical specification for a new system made up of those concepts.

Generally, I've had mixed success with vibe-coding (the practice of describing what you want in natural language and having AI generate the corresponding code). However, it's something that I'm constantly working on getting better at. Also, with tooling integrations like MCP, I can ground responses and supplement my prompts using external data.

What I find myself being more successful with is using AI to explore ideas and then formalizing those ideas into a specification. Even in the case of vibe-coding, what you're doing with your prompts is building a specification in real-time.

I'd like to think that eventually I'll get to the vibe-coding part, but before diving straight into the code, I'd like to spend time in the design phase. Personally, this is the part I find the most fun because you can throw wild things at the wall; it's not until you implement them that you actually validate whether some of those wild ideas are practical.

The result of my latest vibe-specing adventure is what I'm calling the InterPlanetary Knowledge System (IPKS).

Lately, I've been thinking a lot about knowledge. Some concepts that have been in my head are those of non-linear publishing (creating content that can be accessed in any order with multiple entry points, like wikis or hypertext) and distributed cognition (the idea that human knowledge and cognitive processes extend beyond the individual mind to include interactions with other people, tools, and environments). Related to those concepts, I've also been thinking about how digital gardens (personal knowledge bases that blend note-taking, blogging, and knowledge management in a non-linear format) and Zettelkasten (a method of note-taking where ideas are captured as atomic notes with unique identifiers and explicit connections) are ways to capture and organize knowledge.

One other thing that I'm amazed by is the powerful concept of a hyperlink and how it makes the web open, decentralized, and interoperable. When paired with the semantic web (an extension of the web that provides a common framework for data to be shared across applications and enterprises), you have yourself a decentralized knowledgebase containing a lot of the world's knowledge.

At some point, IPFS (InterPlanetary File System, a protocol designed to create a permanent and decentralized method of storing and sharing files) joined this pool of concepts I had in my head.

These were all interesting concepts individually. I knew there were connections between them, but I couldn't cohesively bring them together. That's where AI-assisted specification design came in.

Below is a summary of the collaborative design interaction with Claude Sonnet 3.7 (with web search) that eventually led to the generation of the IPKS specifications. I haven't combed through them in great detail, but what they're proposing seems plausible.

Overall, I'm fascinated by this interaction. Whether or not IPKS ever becomes a reality, the process of using AI to transform abstract concepts into concrete specifications seems like a valuable and fun design approach that I'll continue to refine and include as part of my vibe-coding sessions.


Our conversation began with exploring IPFS (InterPlanetary File System) and its fundamental capabilities as a content-addressed, distributed file system. We recognized that while IPFS excels at storing and retrieving files in a decentralized manner, it needed extensions to support knowledge representation, trust, and semantics.

Key insights from this stage:

  • IPFS provides an excellent foundation with content addressing through CIDs
  • Content addressing enables verification but doesn't inherently provide meaning
  • Moving from document-centric to idea-centric systems requires additional layers

We explored established knowledge management approaches, particularly:

Zettelkasten

The Zettelkasten method contributed these important principles:

  • Atomic units of knowledge (one idea per note)
  • Explicit connections between ideas
  • Unique identifiers for each knowledge unit
  • Emergent structure through relationship networks

Digital Gardens

The Digital Garden concept provided these insights:

  • Knowledge in various stages of development
  • Non-linear organization prioritizing connections
  • Evolution of ideas over time
  • Public visibility of work-in-progress thinking

These personal knowledge management approaches helped us envision how similar principles could work at scale in a distributed system.

When we proposed replacing "IPFS" with "IPKS" (changing File → Knowledge), we recognized the need to define what makes knowledge different from files. This led to identifying several key requirements:

  1. Semantic meaning - Knowledge needs explicit relationships and context
  2. Provenance and trust - Knowledge requires verifiable sources and expertise
  3. Evolution - Knowledge changes over time while maintaining continuity
  4. Governance - Knowledge exists in various trust and privacy contexts

These requirements shaped the layered architecture of the specifications.

Our discussions about distributed cognition highlighted how thinking processes extend beyond individual minds to include:

  • Interactions with other people
  • Cultural artifacts and tools
  • Physical and digital environments
  • Social and technological systems

This concept directly influenced the IPKS design by emphasizing:

  • Knowledge as a collective, distributed resource
  • The need for attribution and expertise verification
  • The value of connecting knowledge across boundaries
  • The role of tools in extending human cognition

Similarly, non-linear publishing concepts shaped how we approached knowledge relationships and navigation in IPKS, moving away from sequential formats toward interconnected networks of information.

Our exploration of complementary technologies led to incorporating:

Decentralized Identifiers (DIDs)

DIDs provided the framework for:

  • Self-sovereign identity for knowledge contributors
  • Cryptographic verification of authorship
  • Persistent identification across systems
  • Privacy-preserving selective disclosure

Verifiable Credentials (VCs)

Verifiable Credentials offered mechanisms for:

  • Expertise validation without central authorities
  • Domain-specific qualification verification
  • Credential-based access control
  • Trust frameworks for knowledge contributors

Semantic Web (RDF/OWL)

Semantic Web standards influenced:

  • Relationship types between knowledge nodes
  • Ontologies for domain knowledge representation
  • Query patterns for knowledge discovery
  • Interoperability with existing knowledge systems

Our conversation about supply chain management provided a concrete use case that helped ground the specifications in practical application. This example demonstrated how IPKS could address real-world challenges:

  • Material Provenance: Using DIDs and verifiable credentials to establish trusted material sources
  • Cross-Organization Collaboration: Enabling knowledge sharing while respecting organizational boundaries
  • Regulatory Compliance: Creating verifiable documentation of compliance requirements
  • Expertise Validation: Ensuring contributors have appropriate qualifications for their roles
  • Selective Disclosure: Balancing transparency with competitive confidentiality

This business context helped shape the Access Control & Privacy specification in particular, highlighting the need for nuanced governance models.

As we moved from abstract concepts to specifications, several technical considerations emerged:

  1. Building on IPLD: Recognizing that InterPlanetary Linked Data (IPLD) already provided foundational components for structured, linked data in content-addressed systems

  2. Modular Specification Design: Choosing to create multiple specifications rather than a monolithic standard to enable incremental implementation and adoption

  3. Backward Compatibility: Ensuring IPKS could work with existing IPFS/IPLD infrastructure

  4. Extensibility: Designing for future enhancements like AI integration, advanced semantic capabilities, and cross-domain knowledge mapping

The IPKS specifications represent a synthesis of our conceptual exploration, grounded in:

  • Established knowledge management practices
  • Decentralized web technologies
  • Real-world business requirements
  • Technical feasibility considerations

Moving from concept to implementation will require:

  1. Reference implementations of the core specifications
  2. Developer tools and libraries to simplify adoption
  3. Domain-specific extensions for particular use cases
  4. Community building around open standards

By building on the combined strengths of IPFS, DIDs, VCs, and semantic web technologies, IPKS creates a framework for distributed knowledge that balances openness with trust, flexibility with verification, and collaboration with governance.

Review

High Priest

High Priest cover
by Timothy Leary
Read
Rating: 4.0/5
wiki

Owncast

Owncast is a free and open source live video and web chat server for use with existing popular broadcasting software.

By default, when you set up streaming software, it will only stream to your Owncast instance. If you want to simultaneously broadcast to various services, you'll have to either use something like Restream or you can also use FFMPEG.

YouTube

With your broadcast stream started, use the following FFmpeg command to simulcast to YouTube.

ffmpeg -v verbose -re -i https://YOUR-OWNCAST-SERVER/hls/stream.m3u8 -c:v libx264 -c:a aac -f flv rtmp://a.rtmp.youtube.com/live2/YOUR-STREAM-KEY

This command takes the video and audio feeds from your Owncast HLS live stream, re-encodes them (H.264 video and AAC audio), and forwards them to YouTube's RTMP endpoint.

snippets

Winget Configuration

My Winget Configuration file

winget configure -f <FILENAME>.dsc

# yaml-language-server: $schema=https://aka.ms/configuration-dsc-schema/0.2
# Reference: https://github.com/microsoft/winget-create#building-the-client
# WinGet Configure file Generated By Dev Home.
properties:
  resources:
    - resource: Microsoft.Windows.Developer/DeveloperMode
      directives: { description: Enable Developer Mode, allowPrerelease: true }
      settings: { Ensure: Present }
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      id: Microsoft.VisualStudio.2022.Community
      directives: { description: Installing Microsoft.VisualStudio.2022.Community, allowPrerelease: true, securityContext: current }
      settings: { id: "Microsoft.VisualStudio.2022.Community", source: winget }
    - resource: Microsoft.VisualStudio.DSC/VSComponents
      dependsOn:
        - Microsoft.VisualStudio.2022.Community
      directives: { description: Install required VS workloads, allowPrerelease: true }
      settings:
        productId: Microsoft.VisualStudio.Product.Community
        channelId: VisualStudio.17.Release
        components:
          - Microsoft.VisualStudio.Workload.Azure
          - Microsoft.VisualStudio.Workload.NetWeb
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      id: Microsoft.VisualStudioCode
      directives: { description: Installing Microsoft.VisualStudioCode, allowPrerelease: true, securityContext: current }
      settings: { id: "Microsoft.VisualStudioCode", source: winget }
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      id: Git.Git
      directives: { description: Installing Git.Git, allowPrerelease: true, securityContext: current }
      settings: { id: "Git.Git", source: winget }
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      id: Microsoft.PowerShell
      directives: { description: Installing Microsoft.PowerShell, allowPrerelease: true, securityContext: current }
      settings: { id: "Microsoft.PowerShell", source: winget }
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      id: Docker.DockerDesktop
      directives: { description: Installing Docker.DockerDesktop, allowPrerelease: true, securityContext: current }
      settings: { id: "Docker.DockerDesktop", source: winget }
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      id: Debian.Debian
      directives: { description: Installing Debian.Debian, allowPrerelease: true, securityContext: current }
      settings: { id: "Debian.Debian", source: winget }
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      id: Microsoft.DotNet.SDK.8
      directives: { description: Installing Microsoft.DotNet.SDK.8, allowPrerelease: true, securityContext: current }
      settings: { id: "Microsoft.DotNet.SDK.8", source: winget }
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      id: Microsoft.DotNet.SDK.9
      directives: { description: Installing Microsoft.DotNet.SDK.9, allowPrerelease: true, securityContext: current }
      settings: { id: "Microsoft.DotNet.SDK.9", source: winget }
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      id: OBSProject.OBSStudio
      directives: { description: Installing OBSProject.OBSStudio, allowPrerelease: true, securityContext: current }
      settings: { id: "OBSProject.OBSStudio", source: winget }
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      id: Microsoft.WSL
      directives: { description: Installing Microsoft.WSL, allowPrerelease: true, securityContext: current }
      settings: { id: "Microsoft.WSL", source: winget }
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      id: Ollama.Ollama
      directives: { description: Installing Ollama.Ollama, allowPrerelease: true, securityContext: current }
      settings: { id: "Ollama.Ollama", source: winget }
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      id: Microsoft.WindowsTerminal
      directives: { description: Installing Microsoft.WindowsTerminal, allowPrerelease: false, securityContext: current }
      settings: { id: "Microsoft.WindowsTerminal", source: winget }
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      id: Brave.Brave
      directives: { description: Installing Brave Browser, allowPrerelease: true, securityContext: current }
      settings: { id: "Brave.Brave", source: winget }
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      id: Mozilla.Thunderbird
      directives: { description: Installing Thunderbird, allowPrerelease: true, securityContext: current }
      settings: { id: "Mozilla.Thunderbird", source: winget }
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      id: Proton.ProtonMail
      directives: { description: Installing ProtonMail, allowPrerelease: true, securityContext: current }
      settings: { id: "Proton.ProtonMail", source: winget }
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      id: Bitwarden.Bitwarden
      directives: { description: Installing Bitwarden, allowPrerelease: true, securityContext: current }
      settings: { id: "Bitwarden.Bitwarden", source: winget }
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      id: VideoLAN.VLC
      directives: { description: Installing VLC, allowPrerelease: true, securityContext: current }
      settings: { id: "VideoLAN.VLC", source: winget }
  configurationVersion: 0.2.0

snippets

Remove all installed Python packages with pip

I recently had the need to get rid of all the packages I'd installed due to conflicting dependencies.

To uninstall packages you need to:

  1. Get a list of the packages
  2. Uninstall them

This works both for virtual environments as well as system-wide installations.

Get all packages

pip freeze > requirements.txt

Uninstall packages

pip uninstall -r requirements.txt -y
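
If you'd rather skip the intermediate file, the same cleanup can be done in one pipeline (editable installs listed as -e lines may need to be filtered out of the list first):

pip freeze | xargs pip uninstall -y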


wiki

Upgrade NixOS versions

This guide provides general guidance on upgrading between NixOS versions.

To check which version you're currently running:

cat /etc/lsb-release

To see the URL used to download packages for the current NixOS release, list the nixos channel:

nix-channel --list | grep nixos

To get on the latest version, you need to point the nixos channel at it.

You can find a list of versions in this repository.

For example, if you wanted to upgrade to the latest 24.05 version, you'd use the following command:

nix-channel --add https://channels.nixos.org/nixos-24.05 nixos

The general format is: nix-channel --add <CHANNEL_URL> nixos

Once you've configured the channel for the latest version, switch to it just like you would when upgrading software packages.

nixos-rebuild switch --upgrade

After the operation completes, check which version is running, as shown in the earlier instructions.
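
The nixos-version command also reports the running version directly:

nixos-version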

wiki

Mastodon Server Cleanup

General commands for cleaning up resources on Mastodon servers

  1. Stop services

    sudo systemctl stop mastodon-sidekiq mastodon-streaming mastodon-web
    
  2. Restart postgresql

    sudo systemctl restart postgresql
    
  3. Log into mastodon user

    sudo su - mastodon
    
  4. Go to live directory

    cd /home/mastodon/live
    
  5. Run the cleanup commands

    RAILS_ENV=production ./bin/tootctl media usage
    RAILS_ENV=production ./bin/tootctl media remove
    RAILS_ENV=production ./bin/tootctl media remove --prune-profiles
    RAILS_ENV=production ./bin/tootctl preview_cards remove

  6. Restart services

    sudo systemctl restart mastodon-sidekiq mastodon-streaming mastodon-web

wiki

Mastodon Server Upgrades

This provides a guide for upgrading specific versions.

These instructions back up the database and the environment variables file. They do not back up media files.

  1. Create new directory called backups/<DATE>.
  2. Copy live/.env.production to backups/<DATE> directory.
  3. Dump database - pg_dump -Fc mastodon_production -f /home/mastodon/backups/<DATE>/backup.dump
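
As a concrete sketch, those three steps map to commands like the following (run as the mastodon user; the date format is just a convention):

DATE=$(date +%Y-%m-%d)
mkdir -p /home/mastodon/backups/$DATE
cp /home/mastodon/live/.env.production /home/mastodon/backups/$DATE/
pg_dump -Fc mastodon_production -f /home/mastodon/backups/$DATE/backup.dump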

3.4.1

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file.
  3. Fetch tags - git fetch --tags
  4. Checkout 3.4.1 tag - git checkout v3.4.1
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install
  7. Migrate DB - RAILS_ENV=production bundle exec rails db:migrate
  8. Precompile Assets - RAILS_ENV=production bundle exec rails assets:precompile
  9. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming

3.4.2

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file.
  3. Fetch tags - git fetch --tags
  4. Checkout 3.4.2 tag - git checkout v3.4.2
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install
  7. Migrate DB - RAILS_ENV=production bundle exec rails db:migrate
  8. Precompile Assets - RAILS_ENV=production bundle exec rails assets:precompile
  9. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming

3.4.3

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch --tags
  4. Checkout 3.4.3 tag - git checkout v3.4.3
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install
  7. Migrate DB - RAILS_ENV=production bundle exec rails db:migrate
  8. Precompile Assets - RAILS_ENV=production bundle exec rails assets:precompile
  9. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming

3.4.4

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch --tags
  4. Checkout 3.4.4 tag - git checkout v3.4.4
  5. Precompile Assets - RAILS_ENV=production bundle exec rails assets:precompile
  6. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming

3.4.5

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch --tags
  4. Checkout 3.4.5 tag - git checkout v3.4.5
  5. Install Ruby dependencies - bundle install
  6. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming

3.4.6

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch --tags
  4. Checkout 3.4.6 tag - git checkout v3.4.6
  5. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming

3.4.7

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch --tags
  4. Checkout 3.4.7 tag - git checkout v3.4.7
  5. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming

3.5.0

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Checkout 3.5.0 tag - git checkout v3.5.0
  4. Update available rbenv version - git -C /home/mastodon/.rbenv/plugins/ruby-build pull
  5. Install Ruby 3.0.3 - RUBY_CONFIGURE_OPTS=--with-jemalloc rbenv install 3.0.3
  6. Install Ruby dependencies - bundle install
  7. Install JS dependencies - yarn install
  8. Run predeployment DB migration - SKIP_POST_DEPLOYMENT_MIGRATIONS=true RAILS_ENV=production bundle exec rails db:migrate
  9. Precompile assets - RAILS_ENV=production bundle exec rails assets:precompile
  10. Update service files - cp /home/mastodon/live/dist/mastodon-*.service /etc/systemd/system/
  11. Reload systemd daemon - systemctl daemon-reload
  12. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming
  13. Clear cache - RAILS_ENV=production bin/tootctl cache clear
  14. Run postdeployment DB migration - RAILS_ENV=production bundle exec rails db:migrate
  15. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming

3.5.1

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch --tags
  4. Checkout 3.5.1 tag - git checkout v3.5.1
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install
  7. Precompile assets - RAILS_ENV=production bundle exec rails assets:precompile
  8. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming

3.5.2

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch --tags
  4. Checkout 3.5.2 tag - git checkout v3.5.2
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install
  7. Run predeployment DB migration - SKIP_POST_DEPLOYMENT_MIGRATIONS=true RAILS_ENV=production bundle exec rails db:migrate
  8. Precompile assets - RAILS_ENV=production bundle exec rails assets:precompile
  9. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming
  10. Run postdeployment DB migration - RAILS_ENV=production bundle exec rails db:migrate
  11. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming

3.5.3

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch --tags
  4. Checkout 3.5.3 tag - git checkout v3.5.3
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install
  7. Precompile assets - RAILS_ENV=production bundle exec rails assets:precompile
  8. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming

4.0.0

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Upgrade NodeJS - curl -fsSL https://deb.nodesource.com/setup_16.x | sudo -E bash - && sudo apt-get install -y nodejs
  4. Install Ruby 3.0.4 - RUBY_CONFIGURE_OPTS=--with-jemalloc rbenv install 3.0.4
  5. Fetch tags - git fetch --tags
  6. Checkout 4.0.0 tag - git checkout v4.0.0
  7. Install Ruby dependencies - bundle install
  8. Install JS dependencies - yarn install
  9. Run predeployment DB migration - SKIP_POST_DEPLOYMENT_MIGRATIONS=true RAILS_ENV=production bundle exec rails db:migrate
  10. Precompile assets - RAILS_ENV=production bundle exec rails assets:precompile
  11. Update service files - cp /home/mastodon/live/dist/mastodon-*.service /etc/systemd/system/
  12. Reload systemd daemon - systemctl daemon-reload
  13. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming
  14. Run postdeployment DB migration - RAILS_ENV=production bundle exec rails db:migrate
  15. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming

4.0.2

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch --tags
  4. Checkout 4.0.2 tag - git checkout v4.0.2
  5. Install Ruby dependencies - bundle install
  6. Precompile assets - RAILS_ENV=production bundle exec rails assets:precompile
  7. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming

NOTE: Node 18 is not supported yet. If you run into issues upgrading directly from 3.5.3, check out the v4.0.0 tag first and then upgrade to v4.0.2.

4.1.0

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch --tags
  4. Checkout 4.1.0 tag - git checkout v4.1.0
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install
  7. Run DB migration - RAILS_ENV=production bundle exec rails db:migrate
  8. Precompile assets - RAILS_ENV=production bundle exec rails assets:precompile
  9. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming

4.1.1

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch && git fetch --tags
  4. Checkout 4.1.1 tag - git checkout v4.1.1
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install
  7. Precompile assets - RAILS_ENV=production bundle exec rails assets:precompile
  8. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming

4.1.2

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch && git fetch --tags
  4. Checkout 4.1.2 tag - git checkout v4.1.2
  5. Update rbenv version - git -C /home/mastodon/.rbenv/plugins/ruby-build pull
  6. Install Ruby 3.0.6 - RUBY_CONFIGURE_OPTS=--with-jemalloc rbenv install 3.0.6
  7. Install Ruby dependencies - bundle install
  8. Install JS dependencies - yarn install
  9. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming

4.1.3

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch && git fetch --tags
  4. Checkout 4.1.3 tag - git checkout v4.1.3
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install
  7. (Optional) Update the reverse proxy configuration to add the Content-Security-Policy: default-src 'none'; form-action 'none' and X-Content-Type-Options: nosniff headers. More info can be found in dist/nginx.conf.
  8. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming

4.1.4

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch && git fetch --tags
  4. Checkout 4.1.4 tag - git checkout v4.1.4
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install
  7. (Optional) Update the reverse proxy configuration to add the Content-Security-Policy: default-src 'none'; form-action 'none' and X-Content-Type-Options: nosniff headers. More info can be found in dist/nginx.conf.
  8. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming

4.1.5

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch && git fetch --tags
  4. Checkout 4.1.5 tag - git checkout v4.1.5
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install
  7. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming

4.1.6

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch && git fetch --tags
  4. Checkout 4.1.6 tag - git checkout v4.1.6
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install
  7. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming

4.1.7

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch && git fetch --tags
  4. Checkout 4.1.7 tag - git checkout v4.1.7
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install --frozen-lockfile
  7. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming

4.1.8

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch && git fetch --tags
  4. Checkout 4.1.8 tag - git checkout v4.1.8
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install --frozen-lockfile
  7. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming

4.1.9

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch && git fetch --tags
  4. Checkout 4.1.9 tag - git checkout v4.1.9
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install --frozen-lockfile
  7. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming

4.2.0

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch && git fetch --tags
  4. Checkout 4.2.0 tag - git checkout v4.2.0
  5. Update Streaming Server Service
    1. sudo cp ~mastodon/live/dist/mastodon-streaming*.service /etc/systemd/system/
    2. sudo systemctl daemon-reload
  6. Update rbenv version - git -C /home/mastodon/.rbenv/plugins/ruby-build pull
  7. Install Ruby 3.2.2 - RUBY_CONFIGURE_OPTS=--with-jemalloc rbenv install 3.2.2
  8. Install Ruby dependencies - bundle install
  9. Install JS dependencies - yarn install --frozen-lockfile
  10. Precompile assets - RAILS_ENV=production bundle exec rails assets:precompile
  11. Run predeployment DB migration - SKIP_POST_DEPLOYMENT_MIGRATIONS=true RAILS_ENV=production bundle exec rails db:migrate
  12. Restart services - systemctl start mastodon-sidekiq mastodon-web mastodon-streaming
  13. Run postdeployment DB migration - RAILS_ENV=production bundle exec rails db:migrate (full sequence consolidated in the sketch below)
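
The 4.2.0 sequence above, consolidated. Paths and versions follow the steps; back up the database and env file first (see the 4.1.0 example).

sudo systemctl stop mastodon-*.service

cd /home/mastodon/live
git fetch && git fetch --tags
git checkout v4.2.0

# New streaming service unit ships with this release
sudo cp ~mastodon/live/dist/mastodon-streaming*.service /etc/systemd/system/
sudo systemctl daemon-reload

# Newer Ruby via rbenv/ruby-build
git -C /home/mastodon/.rbenv/plugins/ruby-build pull
RUBY_CONFIGURE_OPTS=--with-jemalloc rbenv install 3.2.2

bundle install
yarn install --frozen-lockfile
RAILS_ENV=production bundle exec rails assets:precompile

# Pre-deployment migrations, restart, then post-deployment migrations
SKIP_POST_DEPLOYMENT_MIGRATIONS=true RAILS_ENV=production bundle exec rails db:migrate
sudo systemctl start mastodon-sidekiq mastodon-web mastodon-streaming
RAILS_ENV=production bundle exec rails db:migrate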

4.2.1

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch && git fetch --tags
  4. Checkout 4.2.1 tag - git checkout v4.2.1
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install --frozen-lockfile
  7. Precompile assets - RAILS_ENV=production bundle exec rails assets:precompile

4.2.2

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch && git fetch --tags
  4. Checkout 4.2.2 tag - git checkout v4.2.2
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install --frozen-lockfile
  7. Precompile assets - RAILS_ENV=production bundle exec rails assets:precompile

4.2.3

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch && git fetch --tags
  4. Checkout 4.2.3 tag - git checkout v4.2.3
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install --frozen-lockfile
  7. Precompile assets - RAILS_ENV=production bundle exec rails assets:precompile

4.2.4

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch && git fetch --tags
  4. Checkout 4.2.4 tag - git checkout v4.2.4
  5. Update rbenv version - git -C /home/mastodon/.rbenv/plugins/ruby-build pull
  6. Install Ruby 3.2.3 - RUBY_CONFIGURE_OPTS=--with-jemalloc rbenv install 3.2.3
  7. Install Ruby dependencies - bundle install
  8. Install JS dependencies - yarn install --frozen-lockfile
  9. Precompile assets - RAILS_ENV=production bundle exec rails assets:precompile

4.2.5

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch && git fetch --tags
  4. Checkout 4.2.5 tag - git checkout v4.2.5
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install --frozen-lockfile

4.2.6

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch && git fetch --tags
  4. Checkout 4.2.6 tag - git checkout v4.2.6
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install --frozen-lockfile

4.2.7

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch && git fetch --tags
  4. Checkout 4.2.7 tag - git checkout v4.2.7
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install --frozen-lockfile

4.2.8

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch && git fetch --tags
  4. Checkout 4.2.8 tag - git checkout v4.2.8
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install --frozen-lockfile

4.2.9

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch && git fetch --tags
  4. Checkout 4.2.9 tag - git checkout v4.2.9
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install --frozen-lockfile

4.2.10

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch && git fetch --tags
  4. Checkout 4.2.10 tag - git checkout v4.2.10
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install --frozen-lockfile

4.2.11

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch && git fetch --tags
  4. Checkout 4.2.11 tag - git checkout v4.2.11
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install --frozen-lockfile
  7. Precompile assets - RAILS_ENV=production bundle exec rails assets:precompile

4.2.12

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch && git fetch --tags
  4. Checkout 4.2.12 tag - git checkout v4.2.12
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install --frozen-lockfile
  7. Precompile assets - RAILS_ENV=production bundle exec rails assets:precompile

4.2.13

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch && git fetch --tags
  4. Checkout 4.2.13 tag - git checkout v4.2.13
  5. Install Ruby dependencies - bundle install
  6. Install JS dependencies - yarn install --frozen-lockfile
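
The 4.2.x patch releases above all follow the same pattern; a parameterized sketch (TAG is a placeholder for the release you are moving to, and the final start is implied since services were stopped in step 1):

TAG=v4.2.13   # placeholder: substitute your target patch tag
sudo systemctl stop mastodon-*.service
# Backup database and env file first (see the 4.1.0 example)

cd /home/mastodon/live
git fetch && git fetch --tags
git checkout "$TAG"
bundle install
yarn install --frozen-lockfile
# Some patch releases above also include an assets:precompile step and/or a Ruby
# bump via rbenv; follow the list for your target version.
sudo systemctl start mastodon-sidekiq mastodon-web mastodon-streaming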

4.3.0 NOT WORKING

  1. Stop services - systemctl stop mastodon-*.service
  2. Backup database and env file
  3. Fetch tags - git fetch && git fetch --tags
  4. Checkout 4.3.0 tag - git checkout v4.3.0
  5. Install Yarn 4 - corepack enable, then corepack prepare
  6. Install Ruby dependencies - bundle install
  7. Install JS dependencies - yarn install --immutable
  8. Generate secrets - RAILS_ENV=production bin/rails db:encryption:init
  9. Copy secrets to .env.production file (see the sketch after this list)
  10. Precompile assets - RAILS_ENV=production bundle exec rails assets:precompile
  11. Run predeployment database migrations - SKIP_POST_DEPLOYMENT_MIGRATIONS=true RAILS_ENV=production bundle exec rails db:migrate
  12. Restart services - sudo systemctl restart mastodon-sidekiq mastodon-streaming mastodon-web
  13. Run postdeployment database migrations - RAILS_ENV=production bundle exec rails db:migrate
  14. Restart services - sudo systemctl restart mastodon-sidekiq mastodon-streaming mastodon-web
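
For steps 5, 8, and 9, a sketch of the Yarn 4 and encryption-secret setup. The ACTIVE_RECORD_ENCRYPTION_* names below are an assumption based on the 4.3 upgrade notes; copy the actual lines the command prints rather than these placeholders.

cd /home/mastodon/live

# Yarn 4 via corepack (prepare reads the packageManager field from package.json)
corepack enable
corepack prepare

# Generate the new ActiveRecord encryption secrets and append them to the env file
RAILS_ENV=production bin/rails db:encryption:init
cat >> .env.production <<'EOF'
ACTIVE_RECORD_ENCRYPTION_DETERMINISTIC_KEY=<value from command output>
ACTIVE_RECORD_ENCRYPTION_KEY_DERIVATION_SALT=<value from command output>
ACTIVE_RECORD_ENCRYPTION_PRIMARY_KEY=<value from command output>
EOF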
  • https://docs.joinmastodon.org/admin/troubleshooting/index-corruption
  • https://docs.joinmastodon.org/admin/install/
  • https://docs.joinmastodon.org/admin/upgrading/
  • https://docs.joinmastodon.org/admin/backups/
  • https://docs.joinmastodon.org/admin/migrating/
  • https://docs.joinmastodon.org/admin/tootctl/
wiki

DevContainer configurations

A collection of DevContainer configurations
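
Each block below is a complete devcontainer.json. To use one, drop it into a repository's .devcontainer folder; the repository path and source file name here are placeholders.

# Hypothetical paths: substitute your repository and whichever config below you saved
mkdir -p ~/repos/my-project/.devcontainer
cp python-devcontainer.json ~/repos/my-project/.devcontainer/devcontainer.json
# Then open the folder in VS Code and choose "Reopen in Container" (Dev Containers extension)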

{
    "name": "lqdev.me Base Debian DevContainer",
    "image": "mcr.microsoft.com/devcontainers/base:debian",
    "features": {
        "ghcr.io/devcontainers/features/git:1": {},
        "ghcr.io/devcontainers/features/docker-in-docker:2": {}
    },
    "customizations": {
        "vscode": {
            "extensions": [
                "ms-vscode-remote.vscode-remote-extensionpack",
                "ms-azuretools.vscode-docker",
                "GitHub.copilot",
                "GitHub.copilot-chat",
                "saoudrizwan.claude-dev"
            ]
        }
    }
}
{
    "name": "lqdev.me Python DevContainer",
    "image": "mcr.microsoft.com/devcontainers/base:debian",
    "features": {
        "ghcr.io/devcontainers/features/git:1": {},
        "ghcr.io/devcontainers/features/docker-in-docker:2": {},
        "ghcr.io/devcontainers/features/python:1": {
            "version": "3.11"
        },
        "ghcr.io/va-h/devcontainers-features/uv:1": {}
    },
    "customizations": {
        "vscode": {
            "extensions": [
                "ms-vscode-remote.vscode-remote-extensionpack",
                "ms-azuretools.vscode-docker",
                "ms-python.python",
                "GitHub.copilot",
                "GitHub.copilot-chat",
                "saoudrizwan.claude-dev"                
            ]
        }
    }
}
{
    "name": "lqdev.me Python (GPU) DevContainer",
    "image": "mcr.microsoft.com/devcontainers/base:debian",
    "features": {
        "ghcr.io/devcontainers/features/git:1": {},
        "ghcr.io/devcontainers/features/docker-in-docker:2": {},
        "ghcr.io/devcontainers/features/python:1": {
            "version": "3.11"
        },
        "ghcr.io/devcontainers/features/nvidia-cuda:1": {},
        "ghcr.io/va-h/devcontainers-features/uv:1": {}
    },
    "customizations": {
        "vscode": {
            "extensions": [
                "ms-vscode-remote.vscode-remote-extensionpack",
                "ms-azuretools.vscode-docker",
                "ms-python.python",
                "GitHub.copilot",
                "GitHub.copilot-chat",
                "saoudrizwan.claude-dev"                
            ]
        }
    },
    "runArgs": [
        "--gpus", 
        "all"
    ]
}
{
    "name": "lqdev.me .NET DevContainer",
    "image": "mcr.microsoft.com/devcontainers/base:debian",
    "features": {
        "ghcr.io/devcontainers/features/git:1": {},
        "ghcr.io/devcontainers/features/docker-in-docker:2": {},
        "ghcr.io/devcontainers/features/dotnet:2": {
            "version": "9.0"
        }
    },
    "customizations": {
        "vscode": {
            "extensions": [
                "ms-vscode-remote.vscode-remote-extensionpack",
                "ms-azuretools.vscode-docker",
                "ms-dotnettools.csharp",
                "Ionide.Ionide-fsharp",
                "GitHub.copilot",
                "GitHub.copilot-chat",
                "saoudrizwan.claude-dev",
                "ms-dotnettools.csdevkit"                                
            ]
        }
    }
}
{
    "name": "lqdev.me .NET (GPU) DevContainer",
    "image": "mcr.microsoft.com/devcontainers/base:debian",
    "features": {
        "ghcr.io/devcontainers/features/git:1": {},
        "ghcr.io/devcontainers/features/docker-in-docker:2": {},
        "ghcr.io/devcontainers/features/dotnet:2": {
            "version": "9.0"
        },
        "ghcr.io/devcontainers/features/nvidia-cuda:1": {}
    },
    "customizations": {
        "vscode": {
            "extensions": [
                "ms-vscode-remote.vscode-remote-extensionpack",
                "ms-azuretools.vscode-docker",
                "ms-dotnettools.csharp",
                "Ionide.Ionide-fsharp",
                "GitHub.copilot",
                "GitHub.copilot-chat",
                "saoudrizwan.claude-dev",
                "ms-dotnettools.csdevkit"                
            ]
        }
    },
    "runArgs": [
        "--gpus", 
        "all"
    ]
}
{
    "name": "lqdev.me Python and .NET DevContainer",
    "image": "mcr.microsoft.com/devcontainers/base:debian",
    "features": {
        "ghcr.io/devcontainers/features/git:1": {},
        "ghcr.io/devcontainers/features/docker-in-docker:2": {},
        "ghcr.io/devcontainers/features/python:1": {
            "version": "3.11"
        },        
        "ghcr.io/devcontainers/features/dotnet:2": {
            "version": "9.0"
        },
        "ghcr.io/va-h/devcontainers-features/uv:1": {}
    },
    "customizations": {
        "vscode": {
            "extensions": [
                "ms-vscode-remote.vscode-remote-extensionpack",
                "ms-azuretools.vscode-docker",
                "ms-python.python",
                "ms-dotnettools.csharp",
                "Ionide.Ionide-fsharp",
                "GitHub.copilot",
                "GitHub.copilot-chat",
                "saoudrizwan.claude-dev",
                "ms-dotnettools.csdevkit"                
            ]
        }
    }
}
{
    "name": "lqdev.me Python and .NET (GPU) DevContainer",
    "image": "mcr.microsoft.com/devcontainers/base:debian",
    "features": {
        "ghcr.io/devcontainers/features/git:1": {},
        "ghcr.io/devcontainers/features/docker-in-docker:2": {},
        "ghcr.io/devcontainers/features/python:1": {
            "version": "3.11"
        },        
        "ghcr.io/devcontainers/features/dotnet:2": {
            "version": "9.0"
        },
        "ghcr.io/devcontainers/features/nvidia-cuda:1": {},
        "ghcr.io/va-h/devcontainers-features/uv:1": {}
    },
    "customizations": {
        "vscode": {
            "extensions": [
                "ms-vscode-remote.vscode-remote-extensionpack",
                "ms-azuretools.vscode-docker",
                "ms-python.python",
                "ms-dotnettools.csharp",
                "Ionide.Ionide-fsharp",
                "GitHub.copilot",
                "GitHub.copilot-chat",
                "saoudrizwan.claude-dev",
                "ms-dotnettools.csdevkit"               
            ]
        }
    },
    "runArgs": [
        "--gpus", 
        "all"
    ]
}
snippets

lqdev.me Post Metrics

Generates an aggregate analysis of posts on lqdev.me / luisquintanilla.me.

dotnet fsi stats.fsx 
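The script references the compiled site assembly via a relative #r path, so build the project first. A sketch, assuming the default Debug/net8.0 output and that stats.fsx sits one directory below the project root (per the path in the script):

# Produce bin/Debug/net8.0/PersonalSite.dll, which the script references
dotnet build
# Run the stats script from the folder that contains it
dotnet fsi stats.fsx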

stats.fsx

// Reference DLL
#r "../bin/Debug/net8.0/PersonalSite.dll"

// Add modules
open Domain
open Builder
open System

// Load posts
let posts = loadPosts ()
let notes = loadFeed ()
let responses = loadReponses ()

// Organize posts by year
let postCountsByYear =
    posts
    |> Array.countBy (fun (x: Post) -> DateTime.Parse(x.Metadata.Date) |> _.Year)
    |> Array.sortByDescending fst

let noteCountsByYear =
    notes
    |> Array.countBy (fun (x: Post) -> DateTime.Parse(x.Metadata.Date) |> _.Year)
    |> Array.sortByDescending fst

let responseCountsByYear =
    responses
    |> Array.countBy (fun (x: Response) -> DateTime.Parse(x.Metadata.DatePublished) |> _.Year)
    |> Array.sortByDescending fst

// Organize responses by type
let responsesByType =
    responses
    |> Array.filter (fun x -> (DateTime.Parse(x.Metadata.DatePublished) |> _.Year) = DateTime.UtcNow.Year)
    |> Array.countBy (fun x -> x.Metadata.ResponseType)
    |> Array.sortByDescending snd

// Organize responses by tag
let responsesByTag =
    responses
    |> Array.filter (fun x -> (DateTime.Parse(x.Metadata.DatePublished) |> _.Year) = DateTime.UtcNow.Year)
    |> Array.collect (fun x ->
        match x.Metadata.Tags with
        | null -> [| "untagged" |]
        | [||] -> [| "untagged" |]
        | _ -> x.Metadata.Tags)
    |> Array.countBy (fun x -> x)
    |> Array.sortByDescending snd

// Organize responses by host name (domain)
let responsesByDomain =
    responses
    |> Array.filter (fun x -> (DateTime.Parse(x.Metadata.DatePublished) |> _.Year) = DateTime.UtcNow.Year)
    |> Array.countBy (fun x -> Uri(x.Metadata.TargetUrl).Host)
    |> Array.sortByDescending snd

// Utility function to display counts
let printEntryCounts<'a> (title: string) (entryCounts: ('a * int) array) (n: int) =
    printfn $"{title}"

    match entryCounts.Length with
    | 0 ->
        printfn $"No entries"
        printfn $""
    | a when a > 0 ->
        match n with
        | n when n = -1 || n > entryCounts.Length ->
            entryCounts
            |> Array.iter (fun x -> printfn $"{fst x} {snd x}")
            |> fun _ -> printfn $""
        | n when n > 0 ->
            entryCounts
            |> Array.take n
            |> Array.iter (fun x -> printfn $"{fst x} {snd x}")
            |> fun _ -> printfn $""

// Print yearly counts
printEntryCounts "Blogs" postCountsByYear 2
printEntryCounts "Notes" noteCountsByYear 2
printEntryCounts "Responses" responseCountsByYear 2

// Print response types
printEntryCounts "Response Types" responsesByType -1

// Print response tag counts
printEntryCounts "Response Tags" responsesByTag 5

// Print response by host name
printEntryCounts "Domains" responsesByDomain 5

Blogs
2023 5
2022 7

Notes
2023 34
2022 36

Responses
2023 216
2022 146

Response Types
bookmark 151
reshare 48
reply 10
star 7

Response Tags
ai 104
llm 42
untagged 41
opensource 31
internet 17

Domains
github.com 15
huggingface.co 11
arxiv.org 10
openai.com 6
www.theverge.com 4

snippets

NixOS Configuration

This is my NixOS configuration file.

  1. Update configuration file (default path shown in the sketch below)

  2. Run the following command to apply changes

    sudo nixos-rebuild switch
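
A sketch of the two steps, assuming the default system configuration path /etc/nixos/configuration.nix (the file shown below) and nano as the editor:

# Edit the system configuration (contents shown below)
sudo nano /etc/nixos/configuration.nix
# Build and activate the new generation
sudo nixos-rebuild switch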
    
# Edit this configuration file to define what should be installed on
# your system.  Help is available in the configuration.nix(5) man page
# and in the NixOS manual (accessible by running ‘nixos-help’).

{ config, pkgs, ... }:

{

imports = [
  # Include the results of the hardware scan.
  ./hardware-configuration.nix
];

boot.loader.systemd-boot.enable = true;
boot.loader.efi.canTouchEfiVariables = true;

networking.hostName = "nixos"; # Define your hostname.

networking.networkmanager.enable = true;

time.timeZone = "America/New_York";

i18n.defaultLocale = "en_US.UTF-8";

i18n.extraLocaleSettings = {
  LC_ADDRESS = "en_US.UTF-8";
  LC_IDENTIFICATION = "en_US.UTF-8";
  LC_MEASUREMENT = "en_US.UTF-8";
  LC_MONETARY = "en_US.UTF-8";
  LC_NAME = "en_US.UTF-8";
  LC_NUMERIC = "en_US.UTF-8";
  LC_PAPER = "en_US.UTF-8";
  LC_TELEPHONE = "en_US.UTF-8";
  LC_TIME = "en_US.UTF-8";
};

services.xserver = {
  enable = true;

  desktopManager = {
    xterm.enable = false;
  };

  displayManager = {
    defaultSession = "none+i3";
  };

  windowManager.i3 = {
    enable = true;
    extraPackages = with pkgs; [
      dmenu
      i3status
      j4-dmenu-desktop
      i3lock
    ];
  };
};

services.xserver = {
  layout = "us";
  xkbVariant = "";
};

services.printing.enable = true;

sound.enable = true;
hardware.pulseaudio.enable = false;
security.rtkit.enable = true;
services.pipewire = {
  enable = true;
  alsa.enable = true;
  alsa.support32Bit = true;
  pulse.enable = true;
  # If you want to use JACK applications, uncomment this
  #jack.enable = true;

  # use the example session manager (no others are packaged yet so this is enabled by default,
  # no need to redefine it in your config for now)
  #media-session.enable = true;
};

services.gvfs.enable = true;

users.users.lqdev = {
  isNormalUser = true;
  description = "lqdev";
  extraGroups = [ "networkmanager" "wheel" "docker" ];
  packages = with pkgs; [
    firefox
    thunderbird
    vscode
    element-desktop
  ];
};

# Enable docker
virtualisation.docker.enable = true;

nixpkgs.config.allowUnfree = true;

environment.systemPackages = with pkgs; [
  wget
  emacs
  alacritty
  mc
  du-dust
  htop
  feh
  duf
  shutter
  gparted
  keepassxc
  bitwarden
  git
  yt-dlp
  streamlink
  ffmpeg
  gnome.seahorse
  xfce.thunar-volman
  xfce.xfconf
  mpv
  vlc
  (with dotnetCorePackages; combinePackages [
    dotnet-sdk
    dotnet-sdk_7
  ])
  docker
  cargo
  rustc
  libreoffice
];

programs.bash = {
  shellAliases = {
    emacs = "emacs -nw";
  };
};

programs.thunar.enable = true;

system.stateVersion = "23.05"; # Did you read the comment?

}