
For years, at best, I goofed off trying to solve the Voynich Manuscript.
At one point, I was writing PHP code that drew heatmaps one pixel at a time, analyzing prefixes and suffixes. It was laborious, tedious, and mind-numbing. And, like everyone else on this planet who has seriously touched the Voynich, I reached the same conclusion:
“I don’t know what the Voynich is, but now I know it doesn’t do this or that.”
That’s the usual outcome.
Then Came LLMs
When large language models appeared, I bided my time. I hoped, like many, that they could finally help crack the problem.
So once again, I set out to solve the Voynich.
Once again, I failed.
But this time, I learned something important.
- LLMs will lie.
- LLMs will cheat.
- LLMs will cheat and then lie about cheating.
After an LLM “solved” the Voynich for me on at least four separate occasions, I got into a heated debate with it about its usefulness. Eventually, it admitted it was only reliable in domains where truth was irrelevant.
I asked:
“Exactly when is the truth not relevant?”
It answered:
- Satire
- Fiction
- Propaganda
At one point, I even asked it to make a satire meme. It refused, citing ethical concerns. Uhhh… hang on. A machine that excels at generating propaganda suddenly has ethics? Okay.
So, for fact-finding research, which requires a good degree of truth, and with something as linguistically hostile as the Voynich Manuscript, LLMs fail badly.
But They’re Not Useless
There is one thing LLMs do exceptionally well:
They write code.
My Voynich experiments shifted into having the model write Python, while I verified and ran it locally. Usually, it took days to untangle bugs and logic errors before I had something usable.
And then I had an epiphany.
I realized that almost all Voynich research code works the same way:
- Run a script
- Get a giant dump of numbers and text
- Paste it into a spreadsheet
- Manually build charts
- Hope you didn’t misunderstand the output
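The loop above looks something like this in practice. This is a minimal sketch, not anyone's actual research code, and the EVA-style words in it are a made-up toy sample rather than real transcription data:

```python
from collections import Counter

# Toy sample of EVA-style transliterated words (not a real transcription)
words = ["daiin", "chedy", "qokeedy", "daiin", "shedy",
         "qokaiin", "chedy", "daiin"]

# Tally word frequencies -- the typical "giant dump of numbers"
counts = Counter(words)
for word, n in counts.most_common():
    print(f"{word}\t{n}")
# ...and from here, the dump gets pasted into a spreadsheet by hand.
```

Every step after the `print` loop is manual: copying, charting, and interpreting. That hand-off is exactly where the misunderstandings creep in.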
No wonder the Voynich is treated like a book dropped by an alien child.
Most people can’t write Python.
Many can’t run it.
Even fewer can interpret the output correctly.
I can’t really write Python either. I loathe its indentation requirements, so I have no desire to learn its syntax. But I can read it, edit it, and reason about it. I’m a Pascal and PHP developer, with some JavaScript and C# mixed in.
So I asked a simple question:
Why not take everything we compute about the Voynich and put it into something almost anyone can use?
No code.
No math.
Click a button.
See a chart.
And, use AI to help write the code.
The Plan
The Voynich Workbench exists to do exactly this:
- Create interactive pages that run computational linguistics tests on the Voynich Manuscript
- Present results in a way that a non-technical researcher can eventually understand
- Provide tables and charts that can be downloaded
- Make everything interactive: buttons, toggles, sliders, choices
- Let the user decide how they want to view the data
Most importantly:
Make NO claims about what the Voynich is or isn’t.
This is not a solution engine.
This is not a decoding claim.
This is not “look, I cracked it.”
This is about putting actual, testable science into the public’s hands.
And If It Fails?
There’s still a fallback.
I end up with a really cool Voynich calculator to play with. And honestly, that alone makes it worth building.