## Development environment
Do you work on open source, commercial projects, or several projects at once?
Many of us do, usually with a mix of all of them. For example, this blog is built with Astro, and it can become outdated if I do not touch it for a while. How can we make sure builds stay reproducible and still work years later?
Modern development relies on a few layers of version management:
- Dependency management, like `package.json`, `go.mod`, `Cargo.toml`, etc.
- Deployment environments, like Docker/Podman, cloud VM images, etc.
- OS package managers, like `pacman`, `apt`, `yum`, etc.
All of this works well when we target isolated environments. We build one cloud instance per service, one Dockerfile per app, and one dependency manifest per project.
But in my local development environment, I still struggle.
Most of the time I use macOS, but recently I also started using Arch Linux.
Both operating systems have great package managers.
On macOS I use Homebrew; on Arch Linux, pacman and the AUR are more than enough for me.
But when I work on different projects, they have different requirements:
- Different compiler versions
- Extra tools, such as dbmate, pre-commit, prek, biome, golangci-lint, etc.
- Project-specific configuration requirements, like environment variables and bootstrap scripts
And the most frustrating part: versions must be exact. Many tools are not backward compatible (especially linters and language toolchains), so one update can break expected behavior.
## Testing solutions
Over the years, different teams I worked with tried many ways to solve this and build a reproducible environment.
### Virtual machine with file sync
In my experience, the most obvious solution was a preconfigured project VM. I had to configure source synchronization and run everything remotely.
This works in many cases, but it is not very comfortable, and everyone depends on a good internet connection.
If you need to upload something large, it is painful: it takes time, and sometimes it fails entirely. (I cannot even imagine syncing `node_modules`; I would run `pnpm install` directly on the server.)
Remote debugging is a separate topic, but it is rarely a pleasant experience.
It also does not solve outdated projects well, where you need an older version of Python.
That can mean one VM per project if it falls outside the team’s default stack.
But most importantly, it’s not what I personally prefer.
I want to have a native local experience. If I want to debug something, it just works. I can work without internet, or with an unstable connection. I want to get the best experience from my IDE.
It is hard to avoid LLMs now. If I use them (and many companies expect that), I still want a native local workflow.
In short, I have never had a truly pleasant experience using a remote instance as my main development environment.
### Virtual machine with IDE
This is a relatively new idea for many developers, but not for people used to working in Vim or Emacs over terminals.
Right now we have many options:
- Gitpod - I used it when they had just started
- GitHub Codespaces - I tried it, but it feels expensive for personal use
- Visual Studio Code Server - also an option, but not all extensions are available
- Vim/Neovim or Emacs over SSH with tmux
- Google internal corporate development experience
All of these are great, but they work best for companies ready to manage developer environments with templates.
They are often too expensive for personal use, and they still require internet access. Maybe I am old-fashioned, but sometimes I want to work offline.
I’m sure many people will go for it, but it’s not for me, at least for today.
### Development Containers
Another obvious solution is containers.
The first time I tried this approach, a colleague suggested Vagrant as a dev environment. To be fair, it was workable, but that was more than 10 years ago, and it was less usable than most options today.
Since Docker was released, deployment workflows changed significantly. We can now build a container and run it much more easily and cheaply across projects.
Not Docker itself, but Dockerfile was a revolution!
Development Containers can work well, but they still do not feel as smooth as a truly local setup.
I still use Development Containers from time to time, but lately I almost completely migrated to a native experience.
💡 Tip: use isolated environments like virtual machines, containers, or devcontainers when running untrusted code.
### Local and native experience
I used to just install all required tools using Homebrew. It just works, no quirks.
Until it does not.
A recent example: I had a website built with Hugo that I had not touched for years, except for small content updates. Some articles were added, edited, or removed. But Hugo is not strongly backward compatible. When I come back to that site, it often does not build locally until I fix things. I cannot compile something that used to work.
I had the same experience with Python, PHP, Ruby, and others. It is not just Hugo.
So the problem is clear. I can pin app dependencies to keep runtime behavior stable. But that does not fully solve the local development environment itself.
At one point, I wanted to try monorepo tooling. I thought a monorepo might fit one of my projects. I wired many separate Taskfile files together by including them in each other.
After some research into options that worked for me (outside the JS/TS world), I found moonrepo with proto, and surprisingly it could pull various dependencies automatically. So I did not need to configure an environment for my app.
It downloaded and installed required languages and additional tools with specified versions.
I was happy with it at first, but it was not enough. Some tools I needed were not available, so I still had to install them separately, which brought back the same problem.
For me, it was a partial solution, not a complete one. It is definitely nice, but proto still feels unfinished.
## mise-en-place
Then I discovered mise. It felt like a better fit, with even more potential as I learned it.
Initially I started with a simple `[tools]` block, like this:

```toml
[tools]
node = "24.14"
pnpm = "10.32"
go = "1.25"
golangci-lint = "2.11"
rust = "1.94"
```
After configuring my Fish shell (you can check my dotfiles repo), it just worked:
```fish
#!/usr/bin/env fish
brew install mise
echo 'mise activate fish | source' > ~/.config/fish/conf.d/mise.fish
```
You cd into a directory and get all required tools with the right versions.
```
> prek --version
fish: Unknown command: prek
> cd safigo.dev
> prek --version
prek 0.3.5
```
Plugins for JetBrains IDEs and Visual Studio Code significantly improve the experience.
You simply create a `mise.toml` configuration (or use another supported config location), and everyone who has mise set up gets a reproducible local development environment and reproducible build results.
You can install tools from many sources, including npm and GitHub.
This is one of its most important features. So far, I have not found a tool I need that I cannot install with mise.
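To illustrate (the tool choices below are hypothetical examples, not my actual config), mise addresses other registries with backend prefixes, so one `[tools]` block can mix sources:

```toml
[tools]
# Built-in registry entry
node = "24"

# Backend-prefixed entries; mise delegates installation to the
# named ecosystem. The tool picks here are illustrative only.
"npm:prettier" = "latest"     # installed via npm
"cargo:ripgrep" = "14"        # installed via cargo
"pipx:black" = "latest"       # installed via pipx
```

Run `mise registry` to see which shorthands and backends your mise version supports.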
### Simplification by mise-en-place
Before mise I used to have pyenv, pyautoenv, nvm, goenv, rustup, etc.
All of that needed to be configured and kept working.
Later I discovered tools like uv and fnm that simplified my environment significantly, but they are still far from mise.
mise is simply the next level, somewhere between a Brewfile and a Dockerfile.
### Security
As I mentioned earlier, never run untrusted repositories, and DO NOT use mise with them.
By default, mise does not allow dependency installation or environment switching until you trust the directory/project by running `mise trust`.
Be careful: mise can execute code during `mise install`, and possibly even when you `cd` into the directory, though I have not fully explored all such hooks yet.
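To make the risk concrete, here is a hypothetical snippet (hook names from the mise docs; the commands are harmless stand-ins) showing the kind of code a repository can ask mise to run:

```toml
# Hypothetical example of why `mise trust` matters.
[hooks]
# Runs when you enter the directory (only after it is trusted)
enter = "echo 'entered the project'"

[tools]
# postinstall runs right after the tool is installed
node = { version = "24", postinstall = "echo 'node installed'" }
```

A malicious repository could put anything in those strings, which is exactly why mise refuses to act before you run `mise trust`.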
## My use cases
I want a reproducible environment when I return to my own projects or start working on someone else’s project, old or new.
I use various tools that can be specific to my preferences.
Here is an example from my blog setup (this is the first post here):
```toml
[tools]
# https://github.com/nodejs/node
node = "24.14"
# https://github.com/pnpm/pnpm
pnpm = "10.32"
# https://github.com/j178/prek
prek = { version = "0.3", postinstall = "prek install" }
# https://github.com/biomejs/biome
"github:biomejs/biome" = { version = "2.4.6", version_prefix = "@biomejs/biome@", bin = "biome" }

[tasks.install]
run = "pnpm install --frozen-lockfile"
alias = ["i"]

[tasks.dev]
run = "pnpm run dev"
alias = ["d"]

[tasks.lint]
run = "biome check"

[tasks.lint-fix]
run = "biome check --write"

[env]
ENV = "local" # Just an example
```
As you can see, a contributor only needs to run `mise install`, and everything I expect will be installed.
It also includes a post-install script that sets up the pre-commit hook automatically for anyone who contributes.
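Assuming the config above, a fresh contributor session might look roughly like this (prompts and messages vary by mise version):

```
> cd blog
> mise trust          # approve this project's mise.toml
> mise install        # fetch node, pnpm, prek, biome at pinned versions
> mise run install    # or `mise run i`, thanks to the alias
> mise run dev
```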
Another interesting line in this config is:

```toml
"github:biomejs/biome" = { version = "2.4.6", version_prefix = "@biomejs/biome@", bin = "biome" }
```

It does not use the default biome plugin and instead downloads releases directly from GitHub. Biome tags its releases in an unusual way (`@biomejs/biome@2.4.6`), but mise can strip the version prefix with `version_prefix` and rename the downloaded binary with `bin`.
Why don’t I use npm to fetch biome?
I used to. It worked well, but it only covers a subset of tools.
mise feels native and consistent across my projects: go, rust, nodejs.
I see no reason why biome should be so unique that I install it differently from other tools.
## Global tools
mise also supports installing tools at the global (user) level, similar to `npm install -g` or `go install`.
But why? We already have our package manager, npm, and other tools!
There are a few reasons:
- Your tool is not available in your particular package manager
- You want dotfiles that install the same tools regardless of your OS or Linux distro
- You prefer pulling the latest version of a tool directly from its upstream repository

Given all of that, mise can be handy for global (user) level tool management as well as project-level.
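For example (versions here are illustrative), `mise use -g` pins a tool at the user level by writing it into your global mise config:

```
> mise use -g node@24     # available in every shell, not just one project
> mise use -g ripgrep
```

Project-level `mise.toml` entries still override these globals when you are inside a project directory.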
## Bottom line
I know everyone is focused on AI/LLM right now. But foundational tools are still here: libc, gcc, go,
python, makefile, taskfile, curl, brew, pacman, apt, mise, and more.
AI may replace some tools, some programs, maybe even some professions, but there will always be a foundation. Something still has to run the infrastructure where AI operates.
I see mise as closer to those fundamental tools and services that power development infrastructure, rather than just
another hype tool.
I encourage my colleagues to integrate it into their workflows. So far, I have received only positive feedback.