Old school crypto

Recently I have begun writing a simple library that can encrypt/decrypt files, based on the old Enigma machines (and Typex, etc.).

But what fun is it to just digitize a piece of hardware that is known to be breakable? Technology has come far enough for us to hold more computing power in the palm of our hands than a soldier could carry around on his back a few decades ago.

The first big improvement is to swap the alphabet for bytes, which gives us a few distinct advantages (a small sketch follows the list):

  • Ability to process any file we want
  • Still easily represented in a human-readable form [0-255]
  • Bigger “rotors”, which means more possibilities
  • Easier comparisons programmatically
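
To give a rough idea of what such a byte rotor could look like, here is a minimal C# sketch; the names and the simplified offset handling are mine, not necessarily how the library implements it:

    using System;
    using System.Security.Cryptography;

    // A rotor over bytes: a random permutation of 0..255 plus a position that steps.
    public class ByteRotor
    {
        private readonly byte[] forward = new byte[256];
        private readonly byte[] backward = new byte[256];
        private int position;

        public ByteRotor()
        {
            // Fisher-Yates shuffle driven by a cryptographic RNG instead of System.Random.
            for (int i = 0; i < 256; i++) forward[i] = (byte)i;
            using (var rng = RandomNumberGenerator.Create())
            {
                var buf = new byte[4];
                for (int i = 255; i > 0; i--)
                {
                    rng.GetBytes(buf);
                    int j = (int)(BitConverter.ToUInt32(buf, 0) % (uint)(i + 1));
                    byte tmp = forward[i]; forward[i] = forward[j]; forward[j] = tmp;
                }
            }
            for (int i = 0; i < 256; i++) backward[forward[i]] = (byte)i;
        }

        // Simplified: the rotor offset is only applied on the way in.
        public byte Encode(byte b) { return forward[(b + position) & 0xFF]; }
        public byte Decode(byte b) { return (byte)((backward[b] - position) & 0xFF); }

        // Advance one notch; returns true on a full revolution (the carry for the next rotor).
        public bool Step() { position = (position + 1) & 0xFF; return position == 0; }
    }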

The following steps have also been implemented to improve the strength, based on the Typex and other ideas:

  • A byte can become itself!
  • A second switching board after the rotors
  • Configurable rotors (including stronger random generator)
  • Configurable number of rotors
  • Filler bytes which can only be removed when successfully decrypted
  • Random filler data before and after payload
  • Configurable hashing algorithm combined with length indicator
  • Ability to rotate more than one rotor after each character (see the stepping sketch below)
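
Here is one way the per-byte pass and the stepping could be wired together, reusing the ByteRotor sketch above; the odometer-style carry plus a set of always-stepping rotors is just one interpretation of the last point, not the library's actual stepping rule:

    // One possible per-byte pipeline: entry board -> N rotors -> second board.
    public class ByteMachine
    {
        private readonly ByteRotor[] rotors;   // configurable number of rotors
        private readonly byte[] entryBoard;    // 256-byte substitution before the rotors
        private readonly byte[] exitBoard;     // second switching board after the rotors
        private readonly int[] extraSteppers;  // rotor indexes that step on every byte

        public ByteMachine(ByteRotor[] rotors, byte[] entryBoard, byte[] exitBoard, int[] extraSteppers)
        {
            this.rotors = rotors;
            this.entryBoard = entryBoard;
            this.exitBoard = exitBoard;
            this.extraSteppers = extraSteppers;
        }

        public byte EncryptByte(byte b)
        {
            b = entryBoard[b];
            for (int i = 0; i < rotors.Length; i++) b = rotors[i].Encode(b);
            b = exitBoard[b];                  // no reflector, so a byte can become itself
            StepRotors();
            return b;
        }

        private void StepRotors()
        {
            // Odometer-style carry from the first rotor, plus a configurable set of
            // rotors that always step, so more than one rotor moves per byte.
            bool carry = rotors[0].Step();
            for (int i = 1; i < rotors.Length && carry; i++) carry = rotors[i].Step();
            foreach (int index in extraSteppers)
                if (index != 0) rotors[index].Step();
        }
    }

Decryption runs the same stepping, but pushes each byte through the inverse boards and the rotors' Decode in reverse order.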

As well as a few basic functions:

  • Import and export keys (XML and binary; see the sketch after this list)
  • Proper file handling
  • Basic DLL calls
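
The key import/export is plain serialization; below is a rough sketch of the XML side, with made-up fields since the real key layout isn't published here:

    using System.IO;
    using System.Xml.Serialization;

    // Illustrative key container; the actual key format may differ.
    public class MachineKey
    {
        public byte[][] RotorWirings { get; set; }   // one 256-byte permutation per rotor
        public byte[] EntryBoard { get; set; }
        public byte[] ExitBoard { get; set; }
        public string HashAlgorithm { get; set; }    // used for the length/integrity check
    }

    public static class KeyStore
    {
        public static void ExportXml(MachineKey key, string path)
        {
            using (var stream = File.Create(path))
                new XmlSerializer(typeof(MachineKey)).Serialize(stream, key);
        }

        public static MachineKey ImportXml(string path)
        {
            using (var stream = File.OpenRead(path))
                return (MachineKey)new XmlSerializer(typeof(MachineKey)).Deserialize(stream);
        }
    }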

This should be more than enough to stop the attacks that worked on the old Enigma and should prevent most modern-day cryptanalysis.

The implementation is written in C# but should be easy to port to other languages as well. Keep in mind that, given the large number of bytes even a simple document consists of and the amount of array operations required, this will not be a great solution for large files (an AMD X4 965 scores about 5 MB/min). That figure will obviously drop as more steps are implemented, though I think there are still some optimizations that can be done to simplify some operations.

The following points are on the bucket list:

  • Plugboards
  • Fully functional UI
  • Implement a configurable stepping maze (hard work, but should increase the entropy big time)
  • Better file deletion (first overwrite with random data, then delete the inode; a rough sketch follows this list)
  • Release the source if there is some interest
  • Ability to encrypt/decrypt a continuous stream
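
For the file deletion point, the rough idea would be something like the following (best-effort only on journaling file systems and SSDs):

    using System;
    using System.IO;
    using System.Security.Cryptography;

    public static class SecureDelete
    {
        // Overwrite the file with random bytes before deleting it,
        // so a simple undelete only recovers noise.
        public static void Shred(string path)
        {
            long length = new FileInfo(path).Length;
            var buffer = new byte[4096];

            using (var rng = RandomNumberGenerator.Create())
            using (var stream = new FileStream(path, FileMode.Open, FileAccess.Write))
            {
                long remaining = length;
                while (remaining > 0)
                {
                    rng.GetBytes(buffer);
                    int chunk = (int)Math.Min(buffer.Length, remaining);
                    stream.Write(buffer, 0, chunk);
                    remaining -= chunk;
                }
                stream.Flush(true);   // force the overwrite out to disk
            }
            File.Delete(path);
        }
    }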

Virtualization woes

This storm has (in hindsight) been brewing for the last couple of months, but the shit only hit the fan today when Pure-FTPd was fired up.

It’s quite a simple setup: Windows 8.1 Pro, VMware, a few virtual machines used for debugging, and a basic network and internet connection.

The transfer speeds weren’t great, but hey, it’s still way better than the internet connection we have. For legacy purposes we installed Pure-FTPd and started it without a problem; the VMware host connected just fine, so far so good.

But our luck ran out for the rest of the day: no other machine was able to connect, and a “quick” hour of happy debugging later nothing was fixed.

The first clue came from a ping: all machines reported “(DUP!)”. A closer inspection with Wireshark revealed that all virtual machines had this issue (even the Windows server and the VMware host). All packets were being quadrupled; most protocols seemed to handle this just fine, except for FTP.

Some quick googling pointed towards the “Routing and Remote Access” service, but since that was already disabled and no other interesting suggestions popped up, it was nearly back to square one. It did, however, lead to the simple test of disabling the “Base Filtering Engine”, which did the trick for some unknown reason.

As we don’t care about the machine’s firewall and don’t need IPsec either, the case is closed for now.

Lag impact of SSL certificates

When purchasing an SSL certificate (or getting/generating a free one), the end-user performance impact is often overlooked (in most cases it isn’t even known). I am not talking about selecting a bigger key size; always get the best available that your clients support.

There is a second, more sinister impact of securing data connections with a certificate: the chain length.

With a cheaper certificate the chain is usually longer, so there are more steps for the browser to perform; sure, OCSP stapling mitigates this for most browsers.

But consider other connections which do not support this technology (SMTP, IMAP, POP, MySQL (including MariaDB), in fact most non-HTTPS connections); this is where our problems start. For each connection, most of these protocols have to re-verify the complete chain!

And after that the pain is far from over; please also consider the party that has to respond to all these requests. After recently switching from StartSSL (paid, DV) to Comodo (DV), the average connection initialization time has seen a reduction of 40~50%!
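
Those numbers come from our own setup, but measuring it yourself is simple enough, for example by timing a handshake against an implicit-TLS port (the host below is just a placeholder; 993 is IMAPS):

    using System;
    using System.Diagnostics;
    using System.Net.Security;
    using System.Net.Sockets;

    public static class HandshakeTimer
    {
        // Time a full TLS handshake; the certificate chain is built and verified
        // inside AuthenticateAsClient, so a longer chain shows up directly here.
        public static TimeSpan Measure(string host, int port)
        {
            using (var client = new TcpClient(host, port))
            using (var ssl = new SslStream(client.GetStream()))
            {
                var watch = Stopwatch.StartNew();
                ssl.AuthenticateAsClient(host);
                watch.Stop();
                return watch.Elapsed;
            }
        }
    }

    // Example: Console.WriteLine(HandshakeTimer.Measure("mail.example.com", 993));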

This also has a measurable effect on the server as a whole: fewer open connections means more resources for doing useful stuff (it even has a considerable impact on the life of the IT staff: no more lag when moving emails about).


A pretty significant improvement for a simple upgrade, and it didn’t even require digging much deeper into my wallet.