It’s been several days since I noticed that my oh so very clever notion wasn’t actually a good one. I’ve been developing my little subleq creatures, called “figures,” layer by layer. There is one more layer to go, and I had what I thought was a brilliant way to implement it. That is, until I tested the new layer against the old version and saw that the same exact run, with the same exact results, was taking up to 59 percent longer!
The last layer involves a couple of objects I call “handlers.” I’d thought to layer them together to get a modular design where I and other developers could activate, combine, or suppress whatever new ideas we happened to have. At first I thought it was the price of doing business: if you want this much flexibility, you’ll have to deal with this much overhead. Still, I wanted to see if a developer could optimize their design later in the project, perhaps by using inheritance instead of layers once testing had shown that the implementation they wanted was working correctly. It would cut down on dynamic late binding, and maybe run a bit faster. Then, during one test, I noticed that I’d forgotten to link together some of the layers, and yet it was still working, dutifully gathering and displaying all the info I wanted and needed, and running faster. That’s when I realized just how ridiculous my oh so very clever notion really was.
There is nothing that layered handlers can do that polymorphism, inheritance, and composition cannot. What’s more, other programmers with experience in Java are used to thinking and coding that way; layering handlers would be new and strange.
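To make that concrete, here is a minimal sketch of the composition alternative. All of the names here (Hook, Figure, onStep) are hypothetical and not from the project; the point is only that a figure can hold its hooks directly in a list and call each one in turn, with no chained handler dispatch:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical hook interface: something a figure notifies after each step.
interface Hook {
    void onStep(int instructionCount);
}

// Hypothetical figure: composes its hooks in a plain list rather than
// linking handler layers together at runtime.
class Figure {
    private final List<Hook> hooks = new ArrayList<>();
    private int steps = 0;

    void addHook(Hook h) {
        hooks.add(h);
    }

    void step() {
        steps++;
        // One direct loop over the hooks; no layered dispatch chain.
        for (Hook h : hooks) {
            h.onStep(steps);
        }
    }

    int steps() {
        return steps;
    }
}
```

Because Hook has a single method, a developer could also pass a lambda, or subclass Figure and override step() where inheritance fits better.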
The handlers are still there, and they still add some drag to the system, but without layering and connecting them, the overhead is currently less than 2 percent. That will go up as I put other hooks in place, but it shouldn’t add much more drag compared to what was happening before. I’ve also got another couple of tricks up my sleeve that could help speed things up, but now that this particular self-generated crisis has passed, I’ll save optimization for when the system is fully implemented.
Here’s how I calculate the overhead:
One benchmark run took 14 minutes and 20 seconds, while the slow version took 22 minutes and 47 seconds. First, and you can blame the Babylonians for this one, I need to convert both times to seconds to avoid the base-60 issue.
14 minutes times 60 seconds a minute is 840. Add the 20 seconds to that and you get 860. That is the base time.
22 minutes times 60 seconds a minute is 1320. Add 47 seconds to that and you get 1367. That’s the test result time.
Subtract the base time from the test time, 1367-860, and you get 507. That’s the difference.
Divide the difference you just got by the base time. 507/860=0.5895348837209302
Multiply by 100 and round up at the second decimal place, discarding the rest of the digits, and you have 58.96 percent overhead, or roughly 59 percent longer.
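The whole calculation fits in a few lines of Java. This is just the arithmetic above written out; the class and method names are mine, not anything from the project:

```java
// Overhead calculation: convert both benchmark times to seconds,
// then express the slowdown as a percentage of the base time.
class Overhead {
    // Convert minutes and seconds to total seconds (the base-60 conversion).
    static int toSeconds(int minutes, int seconds) {
        return minutes * 60 + seconds;
    }

    public static void main(String[] args) {
        int base = toSeconds(14, 20);   // 860 seconds
        int test = toSeconds(22, 47);   // 1367 seconds

        // Difference over base time, times 100, gives percent overhead.
        double overhead = 100.0 * (test - base) / base;

        // Round up at the second decimal place and discard the rest.
        double rounded = Math.ceil(overhead * 100) / 100;

        System.out.println(rounded + " percent overhead"); // prints "58.96 percent overhead"
    }
}
```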
It would be nice to have someone else in on this project. They might have been able to save me a couple of days of coding, several more days of fixing and testing, and a week or two of design work that turned out to be fundamentally flawed.