2025 end-of-year digest

Posted 2025-12-18 by Jan Onderka ‐ 5 min read

Improvements in all areas, but still more to be done.

After the release of version 0.7.1 discussed in the previous blogpost, I feel this is a good time to take a look back and maybe a little forward to 2026 as well.

I have already detailed the work before 2025 in the introductory blogpost. During 2025, development of machine-check was supported by the NGI0 fund from the NLnet foundation.

As the incremental improvements during 2025 can be seen by reading the blogposts, I will instead pick the key areas from the introductory blogpost that I think deserve their own discussion. Intuitiveness has not really changed much and soundness is (hopefully) still preserved, so no changes there.

Ease of use

Looking at version 0.3.1 from October 2024, the differences are striking. The most notable one is that this website did not exist!

In all seriousness, good documentation is one of the necessities for adoption of a tool by users -- the other being the actual usefulness of the tool for the user's needs. While both could definitely be improved further, the difference between having a readme with one example and having an online book is clear. I have added new chapters to the book with subsequent releases, and it should contain enough information to test things out.

In addition to the documentation, the Graphical User Interface is really the standout for making things understandable. When implementing new features or adding test properties, I tended to use the GUI to see why some result did not meet my expectations. Unfortunately, I did not have enough time to substantially improve the GUI after its initial release, so it is quite wonky. I feel like a big improvement can still be made here.

Generality

The crux of this year's work has really been improving the generality of verification. While machine-check previously supported Computation Tree Logic, which is already stronger than many approaches that only support verification of safety properties, it did not feel like enough to me. More fundamentally, staying with CTL for now would have made it much harder to support more properties later. I knew that μ-calculus could be supported in theory, but doing it in practice proved much more painful than I had thought. As CTL checking basically only requires a few operations on graphs, moving to μ-calculus made everything slower, and I needed to compensate by adding incremental verification, which caused much trouble. At least it should work now!
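For context, these are the standard textbook equivalences (not anything specific to machine-check) showing how basic CTL operators unfold into μ-calculus fixpoints, which is why μ-calculus subsumes CTL:

```latex
\begin{align*}
\mathbf{EF}\,\varphi &\equiv \mu Z.\; \varphi \lor \mathbf{EX}\,Z
  && \text{(least fixpoint: reachability)} \\
\mathbf{AG}\,\varphi &\equiv \nu Z.\; \varphi \land \mathbf{AX}\,Z
  && \text{(greatest fixpoint: invariance)} \\
\mathbf{E}[\varphi\,\mathbf{U}\,\psi] &\equiv \mu Z.\; \psi \lor (\varphi \land \mathbf{EX}\,Z)
\end{align*}
```

Each CTL operator needs only a single fixpoint over the state graph, while general μ-calculus formulas can nest and alternate fixpoints, which is part of why the move made checking more expensive.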

I was also able to add parametric verification, so it is now possible to determine whether a property depends on some parameter value. This is highly useful in the ubiquitous situations where we need to work with uninitialised memory: we can simply treat the initial values as parameters and decide whether our property holds the same way for all of them.
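To illustrate the idea only (this is not machine-check's API), here is a minimal sketch in plain Rust: an "uninitialised" byte is treated as a parameter, and a hypothetical property is checked against every possible initial value.

```rust
// Illustration of the parametric-verification idea, not machine-check's API.
// A hypothetical one-step system that reads an "uninitialised" byte `param`.
fn system_step(param: u8) -> u8 {
    param.saturating_add(1)
}

// A hypothetical property: the result is never zero.
fn property_holds(result: u8) -> bool {
    result != 0
}

/// Returns Some(counterexample) if the property fails for some parameter
/// value, or None if it holds regardless of the uninitialised contents.
fn check_all_parameters() -> Option<u8> {
    (0u8..=255).find(|&p| !property_holds(system_step(p)))
}
```

Here the exhaustive loop stands in for what a verifier does symbolically; `None` means the property does not depend on the uninitialised value.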

The next area where generality was dramatically improved is the properties. While the property format stayed basically unchanged, properties now support basically the same subset of Rust as the systems! Beyond allowing much richer properties, this let me quite elegantly add custom property macros that can, for example, expand to expressions resolving to a Boolean or a bitvector, which in turn let me add support for using debug symbols in properties for the RISC-V system.

The final generality improvement is support for more abstraction domains. While both the implemented dual-interval and equality domains can be used, they did not impress me too much in the machine-code cases I tested, so I left them as optional features for now. It is, however, clear that only using three-valued bitvector abstraction will not cut it in many cases, so this needs to be looked at more. The good news is that the underlying code is now nicely prepared for new domains and experiments.
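As a minimal sketch of what a three-valued bitvector domain looks like, here is one common (value, unknown-mask) encoding with a join and one transfer function; machine-check's actual internals differ, and the names here are illustrative.

```rust
/// A three-valued bitvector: each bit is a known 0, a known 1, or unknown (X).
/// Encoding assumption: `value` holds the known bits (forced to 0 where
/// unknown), `unknown` marks the X bits.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct ThreeValued {
    value: u8,
    unknown: u8,
}

impl ThreeValued {
    /// A fully known (concrete) value.
    fn exact(value: u8) -> Self {
        Self { value, unknown: 0 }
    }

    /// Least upper bound: a bit stays known only if it is known
    /// and equal in both operands; otherwise it becomes X.
    fn join(self, other: Self) -> Self {
        let unknown = self.unknown | other.unknown | (self.value ^ other.value);
        Self { value: self.value & !unknown, unknown }
    }

    /// Bitwise AND transfer function: a result bit is a known 0 if either
    /// operand has a known 0 there (even if the other bit is X), and a
    /// known 1 only if both operands have a known 1.
    fn and(self, other: Self) -> Self {
        let known_zero = (!self.value & !self.unknown) | (!other.value & !other.unknown);
        let known_one = (self.value & !self.unknown) & (other.value & !other.unknown);
        Self { value: known_one, unknown: !(known_zero | known_one) }
    }
}
```

The payoff of the domain is visible in `and`: an unknown bit ANDed with a known 0 yields a known 0, so precision is not always lost when inputs are partially unknown.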

Verification power

Unfortunately, it seems that adding generality also tends to produce verification slowdowns. For 2025, I have not considered verification power that much: it sufficed that the properties I already had were verified reasonably quickly. It seems to me that there is a lot of performance left on the table, but I don't want to prematurely optimise before trying out better algorithms for the crucial parts. This is something I want to explore, and hopefully things will become clearer with my forays into SAT/SMT solving in my capacity at the University of Freiburg.

The good news is that the compile times have been drastically slashed in version 0.7.0, so the annoyance factor has gone down dramatically.

In conclusion

Overall, I am very happy with the progress made during this year. As far as I know, machine-check is the only formal verification tool that can demonstrably verify general μ-calculus properties on machine-code systems, and things are getting better with every release. I would like to thank the NLnet Foundation: without their support, the progress would be much slower. Finally, I am happy to announce that I will be presenting the paper on the core technique used by machine-check, Input-based Three-Valued Abstraction Refinement, in January at the VMCAI 2026 conference.

Let's see what 2026 brings. Until next time!