In 2026, we will regain lost ground in the realm of computational autonomy.
But this extends far beyond the blockchain space.
In 2025, I made two major changes to the software I used:
* An almost complete switch to open-source, cryptographic, decentralized document tools
* Decisively making Signal my primary instant messenger (no longer using Telegram). I also installed SimpleX and Session.
This year, I've made the following changes:
* Google Maps -> OpenStreetMap. Organic Maps is the best mobile app I've ever seen: it's not only open source but also privacy-preserving, because it works locally. That matters, since reducing the number of apps/services/people that know your actual location is a real benefit.
* Gmail -> Proton Mail (although ultimately, the best approach is to communicate directly over a truly cryptographic instant messaging tool).
* Prioritizing decentralized social media (see my previous post).
Meanwhile, I'm continuing to explore local LLM setups. This is where the "last mile" still needs significant improvement: there are many excellent local models, including CPU-friendly and even mobile-capable ones, but they are poorly integrated. For example, there isn't yet a user interface as friendly as Google Translate that seamlessly plugs into local LLMs, supporting transcription/audio input, personal document search, and more. ComfyUI is great, but we need a Photoshop-like user experience (I know people will recommend GitHub repositories in the replies, but that's exactly the point: "assorted GitHub repositories" is not a one-stop solution). Also, I don't want Ollama running continuously, as that would drain 35 watts from my laptop's battery. So we still have a long way to go, but the progress has been tremendous: a year ago, most of these local models didn't even exist!
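To make the "local LLM" workflow concrete, here is a minimal sketch of querying a locally running model through Ollama's HTTP API. The endpoint URL and request shape follow Ollama's documented default (`/api/generate` on port 11434); the model name `llama3` is just a placeholder for whatever you have pulled locally.

```python
import json
import urllib.request

# Default endpoint of a locally running Ollama instance (assumption:
# you have started it with `ollama serve` and pulled a model).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for a single non-streaming generation request."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask_local(model: str, prompt: str) -> str:
    """Send a prompt to the local model; nothing leaves the machine."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The point of the sketch is the privacy property, not the plumbing: the query and the answer never touch a third-party server.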
Ideally, we would lean on local LLMs as much as possible, using dedicated fine-tuned models to compensate for their smaller parameter counts, and then, for heavier use cases, combine approaches: (i) pay-per-query zero-knowledge proofs, (ii) TEEs, and (iii) local query filtering (e.g., letting a smaller local model automatically strip sensitive information from a document before it is pushed to a larger remote model). This is essentially combining imperfect measures as a best-effort approach, although ideally we would eventually have a highly efficient fully homomorphic encryption scheme.
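The local query filtering idea in (iii) can be sketched very simply. In practice the filter would itself be a small local model; the regex patterns below are a stand-in (and the pattern names are my own illustrative choices) that shows the shape of the pipeline: redact locally, then forward.

```python
import re

# Stand-in patterns for sensitive data; a real deployment would use a
# small local model as the classifier instead of regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace sensitive substrings with placeholders before the text
    leaves the machine for a larger remote model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

For example, `redact("reach me at alice@example.com")` returns `"reach me at [EMAIL]"`. The remote model sees only the redacted text, so even an untrusted provider learns less than it would from the raw document.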
Sending all of our data to centralized third-party services is not inevitable. We already have the tools to greatly reduce it. We should keep building on and improving them.