Hacking SSL support into smtpop.dll

In one of our applications, we use smtpop.dll to retrieve email from a POP3 mailbox. Today, we had to connect to a mailbox that accepts only SSL connections, which smtpop.dll does not support.

Our code to retrieve email is behind a façade, so I expected it would be easy to replace smtpop.dll with another mail framework. However, I found that the interface closely mimicked that of smtpop.dll instead of presenting a broader abstraction; that is, the interface has methods that map one-to-one to those of the POP3Client class in smtpop.dll. We tried an adapter implementation based on the interface to translate calls to the replacement mail classes, but we ran into difficulties during the spikes, mostly because of the state of the code.

The solution I went for was to decompile smtpop.dll, examine how it worked internally, and derive a new class from it. It turned out that the only two methods that needed changing were Open and Quit.

Unfortunately, the methods of class POP3Client are not virtual. Luckily for us, some of the class members have protected visibility and so are accessible from the derived class. I rewrote the Open and Quit methods as “new” methods, which means they are not polymorphic. This also forces us to replace every usage of POP3Client with Pop3ClientWithSsl in the code. But, at least, we have some respite before we have to implement a cleaner solution.
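The shape of the derived class can be sketched as follows. This is only an outline under stated assumptions: the protected member names and the exact signatures of Open and Quit are placeholders, not the actual smtpop.dll internals.

```csharp
using System.Net.Security;
using System.Net.Sockets;

// Hypothetical sketch: Pop3ClientWithSsl wraps the plain TCP connection
// in an SslStream before the POP3 dialogue starts. Member and parameter
// names are illustrative, not those of the real smtpop.dll.
public class Pop3ClientWithSsl : POP3Client
{
    // "new" hides the non-virtual base method; callers must reference
    // Pop3ClientWithSsl directly for this override to take effect.
    public new void Open(string host, int port, string user, string password)
    {
        var tcp = new TcpClient(host, port);
        var ssl = new SslStream(tcp.GetStream());
        ssl.AuthenticateAsClient(host); // TLS handshake first

        // ...then hand the encrypted stream to whatever protected
        // members the base class uses for reading and writing,
        // and perform the USER/PASS login over it.
    }

    public new void Quit()
    {
        // Send QUIT over the SSL stream, then dispose the stream
        // and the underlying TcpClient.
    }
}
```

Because the hiding is non-polymorphic, a variable typed as POP3Client would still call the plain-text Open, which is exactly why every usage site has to be changed to the derived type.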

Protecting *.html files in ASP.NET

A quick and simple way of protecting *.html files in an ASP.NET web application is to enable runAllManagedModulesForAllRequests in the web.config file. In addition, set the authorization settings as required on locations that need authenticated access.
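For reference, the relevant web.config entries look something like the following; the location path and the deny rule are placeholders to adapt to your application.

```xml
<configuration>
  <system.webServer>
    <!-- Route every request, including static *.html files,
         through the managed pipeline -->
    <modules runAllManagedModulesForAllRequests="true" />
  </system.webServer>

  <!-- Require authentication for a protected area (path is illustrative) -->
  <location path="protected">
    <system.web>
      <authorization>
        <deny users="?" /> <!-- deny anonymous users -->
      </authorization>
    </system.web>
  </location>
</configuration>
```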

This causes every request, including requests for static *.html files, to go through the managed pipeline, which triggers the ASP.NET authentication mechanism.

Note that this solution has caveats: every static resource now runs through all managed modules, which adds overhead to each request, so use it with caution.

Not giving up on JSF yet

I have calmed down and can now reason about what prompted my JSF rage of last night.

It started with <f:viewParam> not working as expected in GlassFish 4.0. <f:viewParam> is an addition in JSF 2.0 that simplifies bookmarkable URLs, one of the biggest improvements in the new JSF specifications. Yet, as simple as my test case was, the query string parameter was not populating the JavaBean backing the JSF page.
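The test case was essentially the canonical <f:viewParam> usage, something like the fragment below; the bean and parameter names are illustrative. Note the new JSF 2.2 namespace, which turns out to matter later.

```xml
<!-- page.xhtml: populate #{bean.id} from ?id=... in the query string -->
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:f="http://xmlns.jcp.org/jsf/core"> <!-- new JSF 2.2 namespace -->
    <f:metadata>
        <f:viewParam name="id" value="#{bean.id}" />
    </f:metadata>
    <!-- ... rest of the page ... -->
</html>
```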

An extensive web search showed that there was nothing wrong with my code, so after going from denial through to acceptance, I resigned myself to upgrading to the latest Mojarra.

The GlassFish 4.0 documentation stated that it was possible to include an updated Mojarra in the WAR file while keeping the original version bundled with GlassFish. I liked this solution because it meant that I did not have to change the default GlassFish installation. So, I proceeded by adding the following lines to glassfish-web.xml.
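The documented way to make GlassFish prefer a JAR bundled in the WAR over the server's own copy is the class-loader delegation setting, so the glassfish-web.xml looked along these lines:

```xml
<!-- glassfish-web.xml: prefer classes bundled in the WAR over the
     server's, so the updated Mojarra JAR in WEB-INF/lib is used -->
<glassfish-web-app>
    <class-loader delegate="false" />
</glassfish-web-app>
```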

That still did not fix the issue and, worse, I now had the notorious “Target Unreachable, identifier <xyz> resolved to null” error.

I knew it had to do with the Mojarra upgrade, yet my full confidence in the setting above (recommended by other sources) kept me from trying a different approach. So I spent the evening fighting a losing battle with Mojarra.

Today, with renewed resolve, I decided to tackle the problem again. I started looking for bug reports and found this JIRA ticket. It matched the scenario that I had: <f:viewParam> was not working with the new JSF namespace. At least, this finding proved that the Mojarra upgrade was the correct fix.

Now, all I had to do was try the other approach, which is to copy the updated Mojarra JAR file into the GlassFish installation directory. I did that, and the problem promptly went away. <f:viewParam> is now working.

I did not give up on JSF because I just could not accept that such a cornerstone feature as <f:viewParam> was not working. In hindsight, I should have tried the second upgrade method much sooner, but I trusted the documentation too much. And, I was also reminded of the importance of bug reports in investigating unexpected behaviour of software. For now, I am back in full JSF swing.


Java Server Faces rage!

From http://thoughtworks.fileburst.com/assets/technology-radar-jan-2014-en.pdf:

We continue to see teams run into trouble using JSF (JavaServer Faces) and are recommending you avoid this technology.

Teams seem to choose JSF because it is a J2EE standard without really evaluating whether the programming model suits them. We think JSF is flawed because it tries to abstract away HTML, CSS and HTTP, exactly the reverse of what modern web frameworks do. JSF, like ASP.NET webforms, attempts to create statefulness on top of the stateless protocol HTTP and ends up causing a whole host of problems involving shared server-side state. We are aware of the improvements in JSF 2.0, but think the model is fundamentally broken.
We recommend teams use simple frameworks and embrace and understand web technologies including HTTP, HTML and CSS.

This is exactly how I am feeling about Java Server Faces today.

If you go back through the old posts on this blog, you will see that I have been an avid supporter of Java Server Faces from the beginning. Even when it was lacking in many respects, I believed it would get better and worked around the shortcomings.

The promises were supposedly fulfilled with JSF 2.0, what with bookmarkable URLs, view parameters, and so on. But JSF still does not work properly in many cases, and when it does, there are so many hoops to jump through and so many behavioural differences from one implementation to another that JSF as part of the Java EE specifications is but a farce.

About contributing to open-source projects

About a month ago, I decided to participate in the NancyFx open-source project. I downloaded the code, read the documentation, and set up a few demos to experience the framework for myself. It was very interesting, but when I knew enough and was ready to start contributing code, I was no longer motivated. I felt no connection with the items on the issue and to-do lists.

Reflecting on why I lost interest, I recall someone’s remark that an essential criterion in choosing an open-source project to contribute to is the value it represents for you. Perhaps I failed to launch with NancyFx because I did not have to use it on a daily basis and would not enjoy any return on my effort.

To begin with, I wanted to join an open-source project because I felt that I was consuming too much and not giving back enough. But, in hindsight, I can see several occasions when I have contributed to open source.

I implemented support for MSTest 2005 in MbUnit when it only supported MSTest 2008; I submitted custom controls to wxWindows; I wrote articles for, submitted bug reports to, and participated in discussions for JPOX; I was very active on the RIFE mailing list; and, I initiated an open-source project to implement an artificial neural network in PHP.

All these contributions started from something that I needed but that would also be useful to the community. Giving back these small bits felt natural. On the other hand, what happened in my attempt to join NancyFx was that I was forcing it onto myself.

I guess what I am trying to say is that nobody should feel compelled to give back to open-source. Just using the software produced by the community, I suppose, is an acknowledgment and appreciation of their effort. When the time comes for you to contribute something, it will just happen naturally.

Current read, Growing Object-Oriented Software, Guided by Tests

I am reading Growing Object-Oriented Software, Guided by Tests again. The first time I read the book was on my Kindle, where I was put off by the poor formatting of the sample code. Now, using the “view code listing as image” feature, I can enjoy the book. It is also nice to see similarities with the architecture I favour for applications, namely components with ports and adapters.

Learning from a failed deployment

This morning, at work, we released an update to one of our internal applications. It involved database schema changes, data migration, and ASP.NET MVC application changes. The release went mostly well, except for the deployment of the database changes, which failed when the script generated by SSDT dropped user objects. We are now revising our deployment process to prevent this failure in the future.

Despite having a good data-centre Operations team, we developers are responsible for application releases. For code deployment, we are given an FTP account to copy files to the production server. For database changes, we are limited to a db_owner account to run scripts on the relevant databases. Code deployment is usually straightforward, and in the rare cases when we have to roll back files, it is just a matter of replacing the new files with the old ones. But, when a database deployment fails, we have to engage the Operations DBA in a rather bureaucratic way in order to get backups restored. To minimise the chance of failure, our process already included a mock deployment on a copy of the database in a staging environment before the actual production deployment.

However, when we ran the script against the production database this morning, we were horrified when it dropped the db_owner account that we were using. The rest of the script failed, as expected, and we were forced to request the Operations DBA to restore the database. Whilst that was in progress, we investigated the failure and found that SSDT had been too clever in generating the deployment script. Because the logins differed between the staging server and the production server, SSDT had added statements to drop users before recreating them with the correct logins.
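The offending fragment of the generated script looked something like the statements below; the user and login names are illustrative, not the real ones.

```sql
-- Added by SSDT because the login behind the user differed
-- between the staging server and the production server
DROP USER [app_user];
GO
CREATE USER [app_user] FOR LOGIN [PROD\app_login];
GO
```

Since the account running the script was itself one of the users being dropped, the script destroyed its own permissions mid-deployment.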

We re-generated the script, removed the lines that dropped the users, and attempted the deployment again. This time, it completed successfully.

The failed deployment was frustrating because we had rehearsed it many times without noticing that users were dropped. We were fairly confident that we could publish database changes directly from SSDT. However, having experienced this scenario, we have changed our process to include a review of the deployment script by at least two people before executing it on the production database. We are also considering submitting the deployment script to the DBA to review and run on our behalf, but that is unlikely to happen because their responsibilities do not include a DevOps function.

Model-Based Testing

Robert Binder’s massive Testing Object-Oriented Systems is always on my desk. At over 1,500 pages, the book is a challenge to read cover to cover, but I like to peruse a few choice chapters from time to time.

Binder wrote an article about the compliance testing of Microsoft’s court-ordered publication of its Windows client-server protocols in 2012. Interestingly, instead of testing software against documentation, they were required to do the reverse, because the code was already published and had to be used as the gold standard. Under scrutiny and tight deadlines, they managed to check that 60,000 pages of documentation matched the protocols exactly by using model-based testing (MBT).

I had not paid attention to MBT in the book, but after reading this article, I have become very curious and will certainly learn more about it.

Test fixtures

I know many people swear by the Arrange-Act-Assert (AAA) pattern for writing unit tests, but I have never liked it. I think it creates unnecessary repetition when the same arrangement is required in several test methods.

I prefer to have the unit under test initialised to the required state before each test method is run. In automated testing, this baseline state has a name: test fixture. Like its real-world counterpart, the fixture holds the unit in place so that it can be tested in different ways without its state and context changing between method executions.

So, instead of using the AAA pattern where the arrangement is done in each method, I have one test class in which the fixture is set up in one place and several methods act on the fixture and assert the outcomes in different ways. I name the class according to the context (for example, NewCustomerTests and ExistingCustomerTests) and the methods according to what they test (for example, NewCustomerTests.PropertiesAreSet and ExistingCustomerTests.CanBeDisabled).

Most test frameworks have a feature to mark a method for automatic execution before each test method call. For example, in JUnit such a method is annotated with @Before; in MSTest, the [TestInitialize] attribute identifies such a method.
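As a sketch in MSTest, using the NewCustomerTests naming from above; the Customer class and its members are hypothetical stand-ins for the unit under test.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class NewCustomerTests
{
    private Customer _customer; // the unit under test (hypothetical class)

    // Runs before each test method: the fixture is set up in one place.
    [TestInitialize]
    public void CreateNewCustomer()
    {
        _customer = new Customer("Jane Doe");
    }

    [TestMethod]
    public void PropertiesAreSet()
    {
        Assert.AreEqual("Jane Doe", _customer.Name);
    }

    [TestMethod]
    public void CanBeDisabled()
    {
        _customer.Disable();
        Assert.IsFalse(_customer.Enabled);
    }
}
```

Each method then only acts on the fixture and asserts one outcome, with no per-method arrangement to duplicate.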

Organising test code like this helps reduce mindless mechanical tests and repetition, which are two of the main obstacles to TDD adoption.

TDD is not dead after all

Today, on Facebook, Kent Beck posted a rebuke to David Heinemeier Hansson’s “TDD is dead. Long live testing”. At the same time, Hansson also published another blog entry that looks like an attempt to appease the people who took offence to his earlier claim. His latest post bears some similarities to what I wrote about the consequences of abusing mocks in “The unit in unit-testing”. I can only ascribe that to coincidence. And, for the record, I see the value in TDD.