Programming Arduino in Atmel Studio

As a fun way to improve my C, I started programming the Arduino using Atmel Studio instead of the friendlier Arduino IDE.

Below is my very first program. It blinks a red LED and a white LED according to a preset pattern.
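The following is a sketch of that kind of program; the blink pattern, the use of pin 13 (PB7) for the white LED, and the 16 MHz clock are assumptions (pin 12 maps to PB6 on the Arduino Mega's ATmega2560):

```c
/* A sketch of the program described; the pattern and the white LED's
   pin (13, i.e. PB7) are assumptions. Build with avr-gcc. */
#define F_CPU 16000000UL
#include <avr/io.h>
#include <util/delay.h>

int main(void)
{
    DDRB |= 1 << PB6;            /* pin 12 (red LED) as output   */
    DDRB |= 1 << PB7;            /* pin 13 (white LED) as output */

    for (;;) {
        PORTB |= 1 << PB6;       /* red on             */
        _delay_ms(200);
        PORTB &= ~(1 << PB6);    /* red off            */
        PORTB |= 1 << PB7;       /* white on           */
        _delay_ms(600);
        PORTB &= ~(1 << PB7);    /* white off; repeat  */
    }
}
```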

With C in Atmel Studio and the AVR-LIBC library, IO ports are manipulated by changing bit patterns in the registers associated with the relevant ports. This requires a good understanding of bitwise operations in C despite the availability of macros to simplify the task.

For example, to set pin 12 of the Arduino to output mode, bit 6 of register DDRB must be set to 1. To do so requires an | (OR) operation with a bit pattern operand where bit 6 is set to 1 and the rest set to 0, so that the states of the other bits in the register are not disturbed.

Using the macros DDRB and PB6 defined in AVR-LIBC, this is done like so: DDRB |= 1 << PB6.

If you are new to C and are unfamiliar with macros, you might wonder about that statement. After all, DDRB and PB6 are not declared anywhere in my program, so how does this line of code work?

DDRB is a macro that expands into C code to dereference the address of the register associated with setting the mode for pin 12, and PB6 is just a symbolic constant for the value 6. In the statement above, shifting the value 1 left by 6 positions creates a new value, which is then ORed into the bit pattern stored at the dereferenced address to turn bit 6 of the register to 1. In this case, this sets pin 12 to output mode.

In a nutshell, the sequence of operations is as follows.

Step 1: 1 << PB6 shifts the value 1 left by 6 positions, producing the bit mask 01000000.

Step 2: the mask is ORed into DDRB, which sets bit 6 and leaves every other bit undisturbed.

Assuming the register DDRB is initially 00000001, the result is 00000001 | 01000000 = 01000001.

In my C program, the result of step 1 is assigned to a struct field and is used as the second operand for the operation in step 2.

It took about an hour to refresh my knowledge of bitwise operations, but the real challenge was interpreting the Arduino schematic and the information in the datasheets, especially to find the right registers to manipulate.



Undefined behaviours in C

In C programming, undefined behaviours (UB) present perhaps one of the biggest challenges to beginners. As an occasional practitioner, I am still baffled by them, as happened recently with code that I wrote in an Arduino sketch.

The function get_input() is generic. Besides the parameter String prompt, it takes two generic function pointer parameters, (*parse_func)(void* const) and (*validate_func)(const void* const), to read user input and validate user input, respectively. They are generic because their parameters are pointers to void (albeit with different const qualifiers, to which we will come back).

Below is the function parse_int() passed in place of the parameter (*parse_func)(void* const).

There are two things to note here. First, the function signature matches exactly the function pointer declaration: it takes a parameter of type void* const–or, const pointer to void–and does not return any result. Second, it casts the parameter to the desired type before dereferencing the memory location where the input value is written.

And below, the function validate_positive_int() passed in place of the parameter (*validate_func)(const void* const).

Again, this matches exactly the function pointer declaration. It takes a parameter of type const void* const–or, const pointer to const void–and returns a result of type int. Also, the parameter is cast to the desired type–in this case, const pointer to const int–before the logical operation.

In the course of debugging the sketch, I inadvertently modified this last function as follows.

At some point, I realised my mistake and began to wonder why it had not triggered a compilation error and why my code was running normally with the value 1234 as user input. After all, my intention was to have a const pointer to const int; that is, I did not want the argument being passed to be modifiable.

I joined ##c (IRC) to seek answers.

The answer was that undefined behaviour arises when a const object is modified through a non-const pointer.

It took me some time to understand–or rather, accept–this. As a programmer of “safer” languages, I found it difficult to take this explanation at face value and so embarked on further exploration with the goal of getting the compiler to throw the error that my mistake deserved.

Eventually, I found that if I cast the argument to exactly what was declared in the function signature, like below, the error would be thrown.

Getting the compiler to display the error helped me understand this undefined behaviour: the qualifiers in the function declaration, like much of C, are merely promises that the function will interpret its arguments in a certain way; they offer no guarantee that the promises will be kept.

Undefined behaviours are still stumbling blocks to those who are not expert C programmers, but understanding, and accepting, that they are part of the language makes coding in C much easier. I, for one, am going to embrace undefined behaviours from now on.

Hacking SSL support into smtpop.dll

In one of our applications, we use smtpop.dll to retrieve email from a POP3 mailbox. Today, we had to connect to a mailbox that supports only SSL, while smtpop.dll does not work with SSL.

Our code to retrieve email is behind a façade, so I expected that it would be easy to replace smtpop.dll with another mail framework. However, I found out that the interface mimicked that of smtpop.dll closely instead of presenting a broader abstraction; that is, the interface has methods that map one-to-one to those of the POP3Client class of smtpop.dll. We tried an adapter implementation based on the interface to translate calls to the replacement mail classes, but we encountered difficulties during the spikes, mostly because of the state of the code.

The solution I went for was to decompile smtpop.dll, examine how it worked internally, and derive a new class from it. It turned out that the only two methods that needed changing were Open and Quit.

Unfortunately, the methods of class POP3Client are not virtual. But, luckily for us, some of the class members are of protected visibility and so accessible in the derived class. I rewrote the Open and Quit methods as “new” methods, which means that they are not polymorphic. This also forces us to replace usage of POP3Client with Pop3ClientWithSsl everywhere in the code. But, at least, we have some respite before we have to implement a cleaner solution.

Protecting *.html files in ASP.NET

A quick and simple way of protecting *.html files in an ASP.NET web application is to enable runAllManagedModulesForAllRequests in the web.config file. In addition, set the authorization settings as required on locations that need authenticated access.
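For reference, the relevant web.config fragments might look like the following; the location path "secure" is a placeholder for whatever paths need protecting.

```xml
<configuration>
  <system.webServer>
    <!-- route *.html requests through the managed pipeline -->
    <modules runAllManagedModulesForAllRequests="true" />
  </system.webServer>

  <!-- deny anonymous users on the protected location -->
  <location path="secure">
    <system.web>
      <authorization>
        <deny users="?" />
      </authorization>
    </system.web>
  </location>
</configuration>
```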

This will cause all requests to go through the authentication module, which will trigger the ASP.NET authentication mechanism.

Note that this solution has caveats, so use it with caution.

Not giving up on JSF yet

I have calmed down and can now reason about what prompted my JSF rage of last night.

It started with <f:viewParam> not working as expected in GlassFish 4.0. <f:viewParam> is an addition in JSF 2.0 that simplifies bookmarkable URLs, one of the biggest improvements in the new JSF specifications. Yet, as simple as my test case was, a query string parameter was not populating the JavaBean on the JSF page.

An extensive web search showed that there was nothing wrong with my code, so after going from denial through to acceptance, I resigned myself to upgrading to the latest Mojarra.

The GlassFish 4.0 documentation stated that it was possible to include an updated Mojarra in the WAR file and to keep the original version that is bundled with GlassFish. I liked this solution because it meant that I did not have to change the default GlassFish installation. So, I proceeded by adding the following lines to glassfish-web.xml.
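The settings the GlassFish documentation gives for bundling Mojarra in the WAR are these, which is presumably what was added:

```xml
<glassfish-web-app>
  <!-- prefer JARs in WEB-INF/lib over the server's copies -->
  <class-loader delegate="false" />
  <!-- use the Mojarra bundled in the WAR instead of the server's -->
  <property name="useBundledJsf" value="true" />
</glassfish-web-app>
```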

That still did not fix the issue, and worse, I now had the notorious Target Unreachable, identifier <xyz> resolved to null error.

I knew it had to do with the Mojarra upgrade, yet my full confidence in the setting above (recommended by other sources) kept me from trying a different approach. So I spent the evening fighting a losing battle with Mojarra.

Today, with renewed resolve, I decided to tackle the problem again. I started looking for bug reports and found this JIRA ticket. It matched the scenario that I had: <f:viewParam> was not working with the new JSF namespace. At least, this finding proved that the Mojarra upgrade was the correct fix.

Now, all I had to do was try the other approach, which is to copy the updated Mojarra JAR file into the GlassFish installation directory. I did that, and the problem promptly went away. <f:viewParam> is now working.

I did not give up on JSF because I just could not accept that such a cornerstone feature as <f:viewParam> was not working. In hindsight, I should have tried the second upgrade method much sooner, but I trusted the documentation too much. And, I was also reminded of the importance of bug reports in investigating unexpected behaviour of software. For now, I am back in full JSF swing.


Java Server Faces rage!


"We continue to see teams run into trouble using JSF (JavaServer Faces) and are recommending you avoid this technology.

"Teams seem to choose JSF because it is a J2EE standard without really evaluating whether the programming model suits them. We think JSF is flawed because it tries to abstract away HTML, CSS and HTTP, exactly the reverse of what modern web frameworks do. JSF, like ASP.NET WebForms, attempts to create statefulness on top of the stateless protocol HTTP and ends up causing a whole host of problems involving shared server-side state. We are aware of the improvements in JSF 2.0, but think the model is fundamentally broken.

"We recommend teams use simple frameworks and embrace and understand web technologies including HTTP, HTML and CSS."

This is exactly how I am feeling about Java Server Faces today.

If you go back through the old posts on this blog, you will see that I have been an avid supporter of Java Server Faces from the beginning. Even when it was lacking in many respects, I believed that it would get better, and I worked around the shortcomings.

The promises were supposedly fulfilled with JSF 2.0, what with bookmarkable URLs, view parameters, and so on. But JSF still does not work properly in many cases, and when it does, there are so many hoops to jump through and so many behavioural differences between implementations that JSF as part of the Java EE specifications is but a farce.

About contributing to open-source projects

About a month ago, I decided to participate in the NancyFx open-source project. I downloaded the code, read the documentation, and set up a few demos to experience the framework for myself. It was very interesting, but by the time I knew enough and was ready to start contributing code, I was no longer motivated. I felt no connection with the items on the issue and to-do lists.

Reflecting on why I lost interest, I recalled someone’s remark that an essential criterion in choosing an open-source project to contribute to is the value it represents for you. Perhaps I failed to launch with NancyFx because I did not have to use it on a daily basis and would not enjoy any return for my effort.

I wanted to join an open-source project, to begin with, because I felt that I was consuming too much and was not giving back enough. But, in hindsight, I see several occasions when I have contributed to open-source.

I implemented support for MSTest 2005 in MbUnit when it only supported MSTest 2008; I submitted custom controls to wxWindows; I wrote articles for, submitted bug reports to, and participated in discussions for JPOX; I was very active on the RIFE mailing list; and, I initiated an open-source project to implement an artificial neural network in PHP.

All these contributions started from something that I needed but that would also be useful to the community. Giving back these small bits felt natural. On the other hand, what happened in my attempt to join NancyFx was that I was forcing it onto myself.

I guess what I am trying to say is that nobody should feel compelled to give back to open-source. Just using the software produced by the community, I suppose, is an acknowledgment and appreciation of their effort. When the time comes for you to contribute something, it will just happen naturally.

Current read, Growing Object-Oriented Software, Guided by Tests

I am reading Growing Object-Oriented Software, Guided by Tests again. The first time I read the book was on my Kindle, and I was put off by the poor formatting of the sample code. Now, using the “view code listing as image” feature, I can enjoy the book. It is also nice to see similarities with the architecture I favour for applications, namely components with ports and adapters.

Learning from a failed deployment

This morning, at work, we released an update to one of our internal applications. It involved database schema changes, data migration, and ASP.NET MVC application changes. The release went mostly well, except for the deployment of the database changes which failed when the script generated by SSDT dropped user objects. We are now revising our deployment process to prevent this failure in the future.

Despite having a good data-centre Operations team, we developers are responsible for application releases. For code deployment, we are given an FTP account to copy files to the production server. For database changes, we are limited to a db_owner account to run scripts on the relevant databases. Code deployment is usually straightforward, and in the rare cases when we have to roll back files, it is just a matter of replacing the new files with the old ones. But, when a database deployment fails, we have to engage the Operations DBA in a rather bureaucratic way in order to get backups restored. To minimise the chance of failure, our process already included a mock deployment on a copy of the database in a staging environment before the actual production deployment.

However, when we ran the script against the production database this morning, we were horrified when it dropped the db_owner account that we were using. The rest of the script failed, as expected, and we were forced to request the Operations DBA to restore the database. Whilst that was in progress, we investigated the failure and found that SSDT had been too clever in generating the deployment script. Because the logins differed between the staging server and the production server, SSDT had added statements to drop users before recreating them with the correct logins.

We re-generated the script, removed the lines that dropped the users, and attempted the deployment again. This time, it completed successfully.

The failed deployment was frustrating because we had rehearsed it many times without noticing that users were dropped. We were fairly confident that we could publish database changes directly from SSDT. However, having experienced this scenario, we have changed our process to include a review of the deployment script by at least two people before executing it on the production database. We are also considering submitting the deployment script to the DBA to review and run on our behalf, but that is unlikely to happen because their responsibilities do not include a DevOps function.

Model-Based Testing

Robert Binder’s massive Testing Object-Oriented Systems is always on my desk. At over 1500 pages long, the book is a challenge to read cover-to-cover, but I like to peruse a few choice chapters from time to time.

Binder wrote an article about the compliance testing of Microsoft’s court-ordered publication of its Windows client-server protocols in 2012. Interestingly, instead of testing software against documentation, they were required to do the reverse, because the code was already published and had to be used as the gold standard. Under scrutiny and tight deadlines, they managed to check that 60,000 pages of documentation matched the protocols exactly by using model-based testing (MBT).

I had not paid attention to MBT in the book, but after reading this article, I have become very curious and will certainly learn more about it.