These look good, but putting them up sucks.

Wading back into the JS framework pool. Though I've been building quite a few side-projects recently using a variety of approaches (Deno, Hono, Fresh, etc.), those projects are all to my own peculiar taste.
The calculus changes when building an application that someone else is paying for, particularly if that application needs to be supported long-term by a team with varied skill sets and expertise. Peril avoided.
The framework du jour is inarguably #nextjs. It's an unavoidable #react-ism, and has to be considered.
But, context: GCP and Cloud Run are where this application will be deployed,
which is, evidently, hard with NextJS. While there will always be an "ahem,
actually", the anecdotes, blog posts, and conversations seem to converge on the
point that NextJS just doesn't work as well unless you're deploying to Vercel.
This seems doubly reinforced by the existence of shim projects that port
bespoke NextJS features to non-Vercel clouds.
My hesitation also relates to #wordpress, and the chaos currently visible in that developer community due to commercial strong-arming by wordpress.com.
Then there's this.
So I'll be looking pretty hard at react-router v7 instead...
Signs
point to it working well from gcp.
Tomorrow is my last day with my current company. I just want to note that, here.
Late-winter, 2018: few enough people that you could count us on one hand. Then up to a much larger headcount, acquisition, and sun-setting.
A job is a job is a job, but I've been in this one chair for long enough that it just has me feeling nostalgic.
If you're using macOS, just use postgres.app -
homebrew installations of #postgres are too much heartache.
(or use a containerized postgres)
Be sure to add psql to $PATH in .zshrc or whichever flavor you're using:
PATH="/Applications/Postgres.app/Contents/Versions/latest/bin:$PATH"
A few notes on #hono
If you install hono from jsr, using
hono extensions from npm will not work, e.g.
@hono/zod-validator.
If you're curling and things behave oddly with validator("json", ...), you
probably forgot to add -H "Content-Type: application/json".
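A minimal sketch of why the header matters (this is an illustration, not Hono's actual implementation): JSON body parsing is typically gated on the request's Content-Type, so without the header the validator never sees the payload.

```typescript
// Simplified, hypothetical sketch of a Content-Type-gated JSON parser.
// Not Hono's real code: it just illustrates why a curl'd body "disappears"
// when the Content-Type header is missing.
type FakeRequest = { headers: Record<string, string>; body: string };

function parseJsonBody(req: FakeRequest): unknown {
  const contentType = req.headers["content-type"] ?? "";
  if (!contentType.includes("application/json")) {
    // Body is silently treated as absent, so validation fails downstream.
    return undefined;
  }
  return JSON.parse(req.body);
}

const body = JSON.stringify({ name: "demo" });
console.log(parseJsonBody({ headers: {}, body })); // undefined
console.log(parseJsonBody({ headers: { "content-type": "application/json" }, body }));
```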
...And it's this kind of November. America, an anxiety disorder. Violent backlash almost certain, varietals both comic and tragic.

Something entirely unrelated: eslint version 9.
eslint, or "make-work". I love #tooling. #eslint has been ubiquitous for so long that it's the de facto standard. But OSS is evolve or die, and sometimes both.
The upgrade path from eslint 8 to 9 is rocky enough that it simply doesn't
look worth the squeeze to me. For a new project, the choice is obvious (9), but
I've had little success using the migration tools.
$ npx @eslint/migrate-config .eslintrc.json
Given the scope of linting configuration, I'm probably better off declaring linter bankruptcy and simply starting over. Yeah, that's it, this is make-work.
// @ts-check
import eslint from '@eslint/js'
import tseslint from 'typescript-eslint'

export default tseslint.config(
  eslint.configs.recommended,
  ...tseslint.configs.recommended,
  ...tseslint.configs.recommendedTypeChecked,
  {
    rules: {
      '@typescript-eslint/no-unsafe-assignment': 'warn',
      '@typescript-eslint/no-unused-vars': [
        'warn',
        { argsIgnorePattern: '^_' },
      ],
    },
  },
  {
    languageOptions: {
      parserOptions: {
        projectService: true,
        tsconfigRootDir: import.meta.dirname,
      },
    },
  },
  {
    files: ['**/*.js', '**/*.mjs'],
    ...tseslint.configs.disableTypeChecked,
  }
)
Given the advent of mjs and cjs, make sure your tooling recognizes those
file extensions when looking for configuration files.
Wanted cornbread, needed chili. A #recipe.

I've been wondering why my fly machines weren't auto-suspending after I added
#sentry to a project. From the logs, I could see / was being requested every
second... Smells like a bot.
So I added more logging to catch the user-agent from request headers, and was surprised to see the following:
user-agent: SentryUptimeBot/1.0 (+http://docs.sentry.io/product/alerts/uptime-monitoring/)
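For reference, the logging itself was trivial. A minimal sketch, assuming a fetch-style Headers object rather than my app's actual middleware:

```typescript
// Hypothetical sketch of the extra logging that surfaced the bot; assumes
// a fetch-style Headers object (header lookup is case-insensitive).
function logUserAgent(headers: Headers): string {
  const ua = headers.get("user-agent") ?? "unknown";
  console.log(`user-agent: ${ua}`);
  return ua;
}

const ua = logUserAgent(
  new Headers({ "user-agent": "SentryUptimeBot/1.0" }),
);
```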
I guess #sentry has a new feature (beta) that does a healthcheck on URLs that
have thrown exceptions. I need to read the docs more closely, but a GET every
second seems a bit excessive. To disable it, I just revised my robots.txt:
User-agent: SentryUptimeBot
Disallow: *
I've been looking for an #elixir redux-alike (primarily the event-bus stuff),
and have found that Phoenix.PubSub paired with a GenServer and a Task
supervisor seems to get the job done.
I did chase a few different constructions, like using Task.start with
callbacks, but found that a GenServer doesn't have a predictable state when
callbacks are executed.
The following is sketched out, as there's opportunity to make it a bit more generic, like sending messages back to the original caller, etc.
Objective: schedule async work that can crash.
defmodule MyApp.Dispatcher do
  use GenServer

  @topic "MY_TOPIC"

  def start_link(_) do
    GenServer.start_link(__MODULE__, %{}, name: __MODULE__)
  end

  def init(state \\ %{}) do
    # Subscribe to events emitted elsewhere
    Phoenix.PubSub.subscribe(MyApp.PubSub, @topic)
    # Start a Task.Supervisor to own the dispatched tasks
    Task.Supervisor.start_link(name: MyApp.Dispatcher.TaskSupervisor)
    {:ok, state}
  end

  # A monitored task exited (normally or by crashing); drop it from state
  def handle_info({:DOWN, _ref, :process, pid, _reason}, state) do
    next = Map.delete(state, pid)
    # Broadcast that a task is complete
    Phoenix.PubSub.broadcast(MyApp.PubSub, @topic, {:tasks?, next})
    {:noreply, next}
  end

  def handle_info({_event, %{id: _id}} = _msg, state) do
    task =
      Task.Supervisor.async_nolink(MyApp.Dispatcher.TaskSupervisor, fn ->
        # Doing something async. It may or may not crash.
        # For instance, maybe this does a database write.
      end)

    {:noreply, Map.put(state, task.pid, task)}
  end

  # Catch-all, including the {ref, result} replies async_nolink sends back
  def handle_info(_, state) do
    {:noreply, state}
  end
end
I've been fiddling with content extraction using @mozilla/readability. As a data-source, what better candidate than this very-here website, so I made some revisions.
The basic question is "what components must my webpage have in order to trigger reader mode". Surely, this is standardized.
Well... extremely no, it seems. Having done some reading, the best I've come up
with is that adding schema attributes won't hurt anything, but the ways in
which browser reader modes parse content are highly eccentric. For instance,
the following qualifies for reader mode in Firefox, but @mozilla/readability
fails to extract a publishedTime.
<article itemscope itemtype="https://schema.org/Article">
  <h1 class="post-title" itemprop="headline">Reader modes are insane</h1>
  <time datetime="2024-10-25" itemprop="datePublished">
    2024-10-25
  </time>
  <section class="post-content" itemprop="articleBody">
    <p>...</p>
  </section>
</article>
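The frustrating part is that the date is machine-readable right there in the markup. A sketch of recovering it (regex extraction for illustration only; a real HTML parser is the right tool for anything serious):

```typescript
// The publish date is plainly present via the itemprop="datePublished"
// <time> element; this trivial, illustration-only regex extraction recovers
// the value that @mozilla/readability's publishedTime missed.
const html = `
<article itemscope itemtype="https://schema.org/Article">
  <h1 itemprop="headline">Reader modes are insane</h1>
  <time datetime="2024-10-25" itemprop="datePublished">2024-10-25</time>
</article>`;

function extractDatePublished(markup: string): string | undefined {
  const match = markup.match(
    /<time[^>]*datetime="([^"]+)"[^>]*itemprop="datePublished"/,
  );
  return match?.[1];
}

console.log(extractDatePublished(html)); // → 2024-10-25
```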