Austin Lanari

Coding and comics and comics and coding.

    Give Me RSS or Give Me Death

    I've got a few websites (mainly this one and my comic crit site), but neither is really a center of operations for me. Previously I used twitter for that, but I've stepped away from twitter both to take a break from the endless negative feedback loop of social media (and of twitter in particular) and to dial back the proprietary software I use.

    I wanted to start putting content out into the internet on my own terms, meaning on my own sites, my own servers, with my own format, and my own style.

    The first step was maintaining some kind of social media presence, which I'm doing on Mastodon. The second step was to make a site that would aggregate my existing content in addition to being my base of operations for tinkering.

    You can find that site at

    The Goal: Aggregation

    To aggregate content, I needed to get posts from and as a start. Ghost (the blogging platform I use for ) has a public API that can be used for fetching data about a given blog and its posts. This site, however, has no such API since it's just a big ol' bundle of statically generated JavaScript goodness.

    What it does have is an RSS feed.

    Of course, the Ghost blog has an RSS feed too. Because, well, nearly everything on the internet has a damn RSS feed. With the death of Google Reader, a lot of folks tossed their feed-reading habit aside. And from a developer perspective, we should be miffed about this: RSS is one of the closest things we have to a standard on the incredibly fragmented internet. It exists out of the box on most major website platforms. It provides a tried-and-tested, standardized format (XML) with predictable results (unless some custom generation of the feed got in the way).

    In an age where Google wants to centralize everything to the point of re-serving your mobile pages under their own domain, we should be re-embracing technology like RSS that allows us to both distribute and aggregate the content we want to serve/view on the internet on our own terms.

    The Front-End: gatsby-source-rss...-fork

    There's really only one way to pull in an RSS feed in terms of actually retrieving it: you GET the requisite /rss endpoint, use a library to parse the XML into JS objects or JSON as necessary, et voilà.
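The parsing half of that can be sketched in a few lines of Node. This is only an illustration: the regex extraction below is naive, and a real implementation should use a proper XML/RSS parsing library (the `parseRssItems` function and the sample feed are both made up for this example).

```javascript
// Naive sketch: pull item titles and links out of an RSS 2.0 feed string.
// A real implementation should use an actual XML parser; this is only
// here to show the shape of the data you get back.
function parseRssItems(xml) {
  const items = [];
  const itemRe = /<item>([\s\S]*?)<\/item>/g;
  let match;
  while ((match = itemRe.exec(xml)) !== null) {
    const body = match[1];
    const title = (body.match(/<title>([\s\S]*?)<\/title>/) || [])[1] || '';
    const link = (body.match(/<link>([\s\S]*?)<\/link>/) || [])[1] || '';
    items.push({ title: title.trim(), link: link.trim() });
  }
  return items;
}

// Example feed fragment:
const feed = `<rss><channel>
  <item><title>Post One</title><link>https://example.com/1</link></item>
  <item><title>Post Two</title><link>https://example.com/2</link></item>
</channel></rss>`;

console.log(parseRssItems(feed));
// → [ { title: 'Post One', link: 'https://example.com/1' },
//     { title: 'Post Two', link: 'https://example.com/2' } ]
```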

    The question is, when should this be done?

    If it's a live fetch in the browser when a user goes to my site, it's going to take too long. On top of requesting at /, they now have to wait for at least one other route to fetch. Then, they have to wait for the parsing to occur, which is meaty and takes time, on top of whatever actual data manipulation is happening to sanitize it for the client.

    The only upside of live fetching is that as soon as a post goes up on one of my RSS feeds, visits to will show that post to users. But since we're aggregating long-form content, it's not as if we're updating a Mastodon feed widget. It does not need to be that up to date.

    So, instead of doing expensive fetching live, we can do it at build time. Each time we statically generate, we'll fetch the RSS feed data and bake it in. One of the upsides here is that there are tools for doing this kind of data sourcing in Gatsby such that we don't need to rely on fetching feeds and writing the data ourselves. Source plugins are made exactly for this. By using one of these plugins, RSS feed data is exposed in Gatsby via a series of graphql queries and the actual act of fetching is abstracted completely from the declarative code which renders it.

    Unfortunately, gatsby-source-rss doesn't actually work, as far as I can tell. All the code looked right to me but the plugin wasn't hooked up to the Gatsby ecosystem correctly. Luckily, a plugin search yielded gatsby-source-rss-fork which worked correctly.

    Except it only worked for this site and not my Ghost blog. Despite the fact that I could curl in my konsole, any GET requests made by Gatsby or in the browser were failing without so much as an error message. Which could only mean one thing:

    God damn stinking CORS.

    The Back-End: Stop Ghost RSS from *looks into camera* Ghosting.

    You can see some discussion about the issue here, and the latest PR regarding the issue (3 years ago!) here. The long and the short of it is that at least one Ghost maintainer thinks there's no reason an /rss endpoint should be publicly accessible in a cross-origin fashion. Here's the relevant comment (emphasis mine).

    The use case you're suggesting here is being able to get your latest X posts on an external site of your choice, but by specifying global CORS headers, what you're actually allowing for is anyone to show any Ghost blog's latest X posts on any site. That's an enormous leap to add to Ghost core, and I don't think there's a justification for it.

    The JSON API is intended to allow for this sort of thing in a controlled way (via OAuth clients) which means that the owner of the blog would always have absolute control over who can do what with their content.

    I don't want to have to learn an API just to display links to posts on one of my blogs: RSS is literally made for this. Additionally, since I chose to have an RSS feed on my blog, I clearly want my posts to be publicly available. The only thing CORS blocking does is stop people who want to do stuff with my posts in a browser: folks can still write server-side scripts to grab my entire RSS feed and do whatever they want with it. And even if I had chosen to have no RSS feed at all, someone determined enough to abuse my RSS feed could easily just scrape my site. The logic is nearly identical, just slightly more fragmented.

    To briefly rant, and to re-underscore my point, this kind of thing is so indicative of the modern web ecosystem. There are all these pseudo-proprietary ways of asking for data, driven by APIs that think they are solving a problem when really they're just putting their preferred brand of dressing on an issue that is already half-solved, sometimes by tested standards (*cough* RSS *cough*). I should be able to run three blogs on three different platforms and aggregate data from all of them in a unified manner.

    It's not a security issue: it's a common sense issue.

    After initially trying to add CORS to the /rss route via my nginx config (that doesn't work because of the way Ghost apparently reverse-proxies things internally), I opted for the method implemented by the aforementioned rejected PR, which just slaps the appropriate headers on the response in Node. The only problem is that, since the PR is 3 years old, there isn't even a core/server/controllers/frontend.js anymore. Luckily, there is a core/server/controllers/rss.js (snaps for solid naming), so the headers can be added in the exact same manner as in the original PR, inside the generate function, which exposes the res object needed for setting headers on the response.
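The header-setting itself is small. Here's a minimal sketch of the idea, assuming a standard Node/Express-style response object; `addCorsHeaders` is a name I've made up for illustration, not the actual function from the PR.

```javascript
// Sketch of the approach from the rejected PR: before sending the RSS
// response, attach permissive CORS headers so browsers on other origins
// can fetch the feed. `res` is assumed to be an Express-style response.
function addCorsHeaders(res) {
  res.setHeader('Access-Control-Allow-Origin', '*');
  res.setHeader('Access-Control-Allow-Methods', 'GET, OPTIONS');
  res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
  return res;
}

// Quick check against a fake response object:
const fakeRes = {
  headers: {},
  setHeader(name, value) { this.headers[name] = value; },
};
addCorsHeaders(fakeRes);
console.log(fakeRes.headers['Access-Control-Allow-Origin']); // → *
```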

    Styles, styles

    I'm sort of in-between projects at the moment, with one major project I'm not suuuuper at liberty to talk about yet (it's my first rather major journalistic undertaking, which is exciting and also horrifying), but in general I've been trying to write more. I've been journaling with the help of org-journal since I spend most of my waking hours with an emacs editor open. I've also found that having embraced emacs, I am often trying its various keyboard shortcuts as a matter of pure muscle memory across literally every application on my computer.


    Anyway, I've taken to making the blog the primary focus of my website rather than the portfolio, which I haven't filled out yet. That's to encourage me to keep writing, but also because... well, I haven't filled the portfolio out yet! That's not to say I'm not working on anything. As you might know, I'm a big fan of Gatsby.js (it powers this site, it's crazy fast, and you can do some fairly advanced front-end stuff with minor config and zero webpack BS), and I also spotted a very new headless cms,

    For those who don't know, the appeal of a headless CMS, specifically with respect to websites like this one, or even simple web apps, is that you can structure your site around copy without having to tie the copy to the codebase. For instance, up top on my site it currently says "Coding and comics and comics and coding." If I wanted to change that to something less opaque (pfft), I would have to make a change to the html file with that text in it, re-commit it, and make a new build of my site. This might not sound super inconvenient, but imagine larger copy changes on larger sites with more people working on them and higher demand for staying up to date.

    Headless CMSs like Tipe allow you to hook your copy up to remote resources with modern APIs (Tipe uses graphql in addition to a more traditional REST API). I quite like the idea of being able to edit in a CMS, although I have nothing against emacs. To be honest, I'd happily do all my editing in a remote CMS, but I do like having full control over the content I write and where it lives. In the long term, I'm not crazy about writing all my stuff and keeping it on any kind of server that I can't directly control these days.

    In any case, I say all this because, as far as I know, I was the first person to automate a bit of the build process between Tipe and Gatsby, and you can view the code here. It's not a full-on source plugin for Gatsby, but it's good enough. And, honestly, even though I think a source plugin could be designed in the vein of something as robust as the Wordpress source plugin for Gatsby, I don't know if that's a great idea for something like Tipe. If you look at the code, you can see that the whole thing revolves around a graphql query particular to my project. Although it could be standardized by having users pick a specific folder with an accepted conventional structure, I think it would have to be a community decision whether or not "convention over configuration" is the best way to go. If the source plugin forces that on people, it just seems non-ideal. And having a script dump JSON and use that as a file source doesn't strike me as a dealbreaker, although the build process is lengthened by the need to manually format the markdown (in addition to the fact that I haven't actually thought at all about how images would be handled).

    In other news, I'm extremely excited about this twitter bot, the tweets of which you can view here. I'm not sure whether it's the intersection of my fairly newfound love of museums and coding, or if I just appreciate a good meme-bot at this point in my life, but one thing that feels great is seeing something cool like this out in the world and being able to look at the code and understand how it works. Two years ago this would have completely mystified me and now I spend parts of my spare time picking this stuff apart and loving it.

    In addition to the thing I can't really talk about, I've got another thing I can't really talk about cooking up too. Hopefully I can share soon. Until then, maybe expect more posts? I never know. I'm all over twitter these days but sometimes I think this is a healthier way to go. We shall see.

    The GNU Rabbit Hole

    I'm currently writing this post in Emacs.


    As a matter of fact, I just accidentally opened an error window while typing this because I hit ctrl instead of shift when trying to register the italics in markdown with an underscore and then the next key I hit caused an issue.


    [image: emacs draft]

    It might seem odd, a young web developer in 2018 using Emacs: Sublime and Atom are largely the norm, and it looks like VS Code is gaining a foothold (I've heard a lot of good things about it lately). But, to be honest, the latest events with Facebook really shook me up. As someone who pays attention to what Facebook does, while also using its SDK at my job, I was already fully aware of both how Facebook has failed users in the past and how easy it is to pull a base amount of semi-private--and extremely valuable--data from sources like Facebook. So, the latest events weren't a wake-up call out of ignorance.

    In short (and to spare you the incidental personal details), it just felt like another reminder among many in the last decade that Stallman Was Right. I've been using GNU/Linux for some time to get out from under the thumb of Microsoft, but it was Ubuntu, and I wanted to see if I could set up my old personal laptop to run a distro approved by the Free Software Foundation. On top of that, I also wanted to begin transitioning to as many GPL-licensed tools as possible.

    Hence the Emacs. Of course, the other reason for the jump to Emacs was because I realized that despite embracing all of the most useful Sublime shortcuts, I was still doing a lot of context-switching with my mouse at work, and constantly clicking around. This isn't an Emacs post (maybe I'll revisit that), but I will say that after three full weeks of forcing myself to use nothing but Emacs at my full-time job, the only thing I really miss about Sublime is the superior UX for grepping for a string in a given project.

    Anyway, GNU/Linux. Abiding by the FSF's recommendations is semi-arbitrary, and actually sort-of-kind-of-bull-shit. A redditor recently brought this up, and to give you the tl;dr: the FSF recommends distros based on their openness--no proprietary or otherwise hidden firmware blobs, no non-free software of any kind, and it must be impossible to use that distro's mirrors to download non-free software--yet the FSF's process of selecting the distros it recommends is both non-democratic and completely opaque!

    Still, I wanted to take the dive, so I bit, and started out with PureOS from Purism, a company that makes its money selling completely libre hardware. I personally am using an old Lenovo, which means that no matter which distro I use, I have to use--gasp!--non-free firmware (an upcoming post will detail what a horror show this is). Since my wifi card is proprietary, none of the FSF recommended distros will install my wifi firmware for me. I have to break the rules, essentially making the whole thing unfree anyway.

    To save some time here, I'll just say that I don't see the point of PureOS at all, and other than being something with some branding that Purism can slap on their own machines, I would not recommend it. Here are the differences between PureOS and Debian:

    1. PureOS mirrors host no non-free software. Debian has a mirror (two, technically) that is explicitly for downloading non-free software.
    2. PureOS comes with PureBrowser, their own fork of Firefox that comes with security add-ons installed (note: mine actually didn't come with any installed, but they're add-ons, so whatever).
    3. PureOS comes pre-installed with TorBrowser (note: again, mine actually didn't come with TorBrowser pre-installed).

    ... That's it.

    (2) and (3) are bull shit reasons to install something like an operating system, especially since there's a huge drawback to PureBrowser: it's bonkers how out of date it is. Even if you are a free software nut, existing on the web means using some proprietary things like Twitter or Slack. The latest release of PureBrowser can't run Slack or several other common web apps. The only way around this is to naughtily force the install of a non-free browser, or cross your fingers and track down one of the other free browsers and hope it works.

    Meanwhile, with Debian you can just install Firefox ESR and it's all good.

    Then, there's the conceptual issue, one the FSF and Stallman seem vehement about, which is (1), mentioned above: Debian explicitly mirrors and supports access to non-free software. It's not just out of convenience: it's an ideological decision. One of the core tenets of Debian is to support users of non-free software. For the FSF, by definition, this invalidates Debian's candidacy as something it could ever endorse.

    At a certain point, however, even short of asking what "freedom" really means for a user (e.g. a hard philosophical libertarian would say that a truly free system would allow for an individual user to restrict his own freedom with proprietary software if that was what he wanted to do), we have to ask if Debian is really in violation of anything meaningful, even in its explicit support of non-free software.

    Take my case, for instance. I never intend to ever ever, never, ever never look at the source code for my wifi adapter. Is my freedom meaningfully inhibited by the fact that I can't sort through all the firmware blobs enabling my card to work? I have a perfectly good laptop and a perfectly good wifi card: I'm not going to up and buy some dongle to carry around just to use something that already works. Granted, it means that getting Debian up and running is going to be a massive pain in the ass where I have to jump through hoops like I'm at the circus just to connect to my own home fucking wi-fi, but still: that's my right, no?

    Even with all of that said, I haven't enabled the non-free mirror for Debian updates, and I don't intend to do so. I'm typing this on Emacs, previewing my webpage in Firefox ESR, and viewing all of this in a KDE Plasma GUI. Literally the only non-free thing is the wifi firmware I had to manually install, and it is likely to stay that way.

    Without being utilitarian about it, I think it's safe to say that if toeing the line of complete freedom inhibits you from being able to conveniently access free software, then acting as if things are black-and-white is undermining the project of access to free software itself. Debian was a pain to install but I'm completely comfortable on the OS (now that I can actually see some goddamn wifi networks), and I have a very tangible sense of freedom that I operate with every time I open it as compared to Ubuntu.

    If Debian started off by connecting users up to the non-free mirror, or by default loaded some non-free stuff, it'd be one thing: but users have to go out of their way to add it. I don't think I can stress this enough: if you want to install non-free software on absolutely anything, you will find a way to do so.

    I do wonder if I would have had the same experience if I opted for Trisquel instead of PureOS as many recommended, but regardless of the particulars of the FSF-approved choices, what I find interesting is what they have not approved, both in terms of the lack of transparency in the process, and in their alarmingly rigid approach.

    Work in Progress

    Hey hi, I'm Austin Lanari. By day I'm a full stack developer for mobile. What does "full stack" mean in this case, you ask?

    It means I do... well, I do everything.

    Unfortunately, that also means that I get really excited about projects with a lot of breadth and cool things to get done. That's not so unfortunate for my company (or for my personal and professional development), but when I recently tried to build a comics-blogging platform from scratch, I got so caught up in building the site that I was never writing anything.

    As you can see now, I decided to go with the Ghost blogging platform so that I could focus more on my writing. But in the time before I jumped ship for a pre-built CMS, I spent the better part of 2017 learning what Gatsby had to offer and I remain impressed and in awe of the framework, both in the benefits it offers from a UI perspective and in how it puts developers in a workflow where they learn a lot of cool, core-front-end skills and end up with great results... once things get working.

    Since my old personal website was a Jekyll blog that I built while I was still at coding bootcamp, I decided to compromise with myself: as long as I continued my blogging about comics at a clip of one essay per week, I could fool around with Gatsby on my personal website.

    But I never told the more practical part of my brain that I'd make it easy.

    If you're reading this when I first posted it, you're on a bare-bones version of Gatsby with all of the boilerplate and... literally no UI tweaks... at all.

    It's going to be a long year.


  • email: [email protected] (Preferably encrypted)

  • PGP Fingerprint: E367 81A0 9018 CAD4 24A5 E3A5 5572 CC1A A449 C6E6

  • PGP Public Key

  • Austin on the web:

    This is my professional/personal site where you can see my resume, some longer-form tech writing about stuff I'm tinkering with, and (eventually) a fleshed out portfolio. Below, some links to my other sites.

  • I maintain a social media presence on my mastodon instance, which anyone can join. My mastodon account is located here.

  • For my critical comic writing, check out my comic blog.

  • For an aggregation of this site, my other sites, and more spur of the moment posts, check out