Getting Dirty in Modern Web Development
The IAPP just launched a complete rebuild of its virtual face to the world – privacyassociation.org. In my current position at the IAPP, the website, and therefore this project, fall under my business unit. It’s been a number of years since I tackled a website project of this magnitude, and I was shocked to realize just how far web development has advanced in what seems, to me, like such a short period of time.
It’s not like I don’t read the news or keep abreast of the latest technical trends, so when the project kicked off I knew about the modern components and best practices that go into building a new website; I just hadn’t considered their overall scope when taken as a whole. This post might be helpful for those embarking on the same journey (with similar past experiences), but largely this one is for me. I think it’ll be fun to look back on it 4-5 years in the future and recall my wonderment at the state of the web in 2014.
A Little History
I don’t mean to self-aggrandize, but a quick history of my professional experience might be helpful for perspective. I spent the years between 1995 and 2001 as a professional web developer, then moved on to managing teams of developers, and so on. As a result I’ve either built myself, or helped build as part of a team, hundreds of websites (somewhere north of 300). Many of those sites were “run of the mill” brochureware, but I had the pleasure of working on some really cool things.
- 1996: I hand coded, in Perl, a fully functional shopping cart for a retailer who had the foresight to hop online.
- 1998: I built an online auction, kind of like eBay but at a smaller scale, for an auction house.
- 2000: I used a 3DES Java encryption library (hot stuff at the time) to secure transactions for a bank, helped build an online grocery delivery site and hooked content produced for a major newspaper to the web.
- 2004: I built a true custom print-on-demand publishing solution using XSL-FO and Apache FOP. It was a technically cool project, but unfortunately a complete market failure.
- 2008: I built a fully dynamic online survey and analytics platform distributed to tens of thousands.
As you can see, in Internet years those projects, even the more recent ones, are downright ancient. Since then I’ve been running an IT department, proselytizing privacy and handling all sorts of other non-development tasks. Needless to say, after years of absence my web development skills were a bit outdated.
Note: The vendor we hired to help architect, design and build privacyassociation.org (Upstatement) did a phenomenal job, and my team here at the IAPP was awesome as well, so if this post comes across like I was the savior of the project, I can assure you that wasn’t the case – the project just required all available manpower to make the launch date, so I pitched in.
Providing strategy and leadership is not a problem, but writing code isn’t something I do that often any longer. Still, the website launch date was closing in, and it was clear that “all hands on deck” would be required if we were going to launch on time. That meant I needed to write code, something I hadn’t done in a serious way in three years, and hadn’t done for a website of any significance in over five. I figured a quick ramp-up and I’d be productive, and in a way I was, but the ramp-up, while short, was much steeper than I had expected. Building a website in 2014 was an amazing experience, and I’m glad I was forced to participate at the level I did because I learned a ton. I also have a new appreciation for the depth of skill required to perform front-end development, UI/UX and design.
The following list comprises the main requirements, features and tools that went into building privacyassociation.org:
- The site is fully responsive. I actually think it performs spectacularly on a phone.
- Sass/Compass is used to manage/create the CSS.
- WordPress provides the backend CMS, but we use Timber and Twig to properly separate the code from the design.
- MySQL is serving our database needs.
- The code is versioned in Git on GitHub. A nice step up from Subversion!
- We are using Vagrant and Bower for dev environments.
- Bye, bye Apache: Nginx is our webserver.
- And finally an oldie but new to me: we are using Solr for search.
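To give a sense of what the Timber/Twig separation looks like in practice, here is a minimal sketch. The file names and context fields are illustrative, not from our actual codebase: the PHP side gathers data, and the Twig template owns the markup.

```php
<?php
// single.php – a hypothetical WordPress template using Timber.
// Instead of interleaving PHP and HTML, we build a context array
// and hand it off to a Twig template for rendering.
$context = Timber::get_context();
$context['post'] = new TimberPost();
Timber::render('single.twig', $context);
```

```twig
{# single.twig – the markup lives here, free of PHP logic #}
<article>
  <h1>{{ post.title }}</h1>
  <div class="content">{{ post.content }}</div>
</article>
```

The payoff is that a designer can edit single.twig without ever touching PHP, which is exactly the separation we were after.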
Setting Up the Development Environment
For the record, I humbly admit that while all of the aforementioned technologies are widely used, I only had previous experience with WordPress (and PHP), MySQL, Nginx and a bit of Git. That left a ton of new stuff to learn, which in part made setting up my machine for development a difficult task. Vagrant is supposed to remove much of the burden of setting up an environment, and it was great once it was up and running, but getting it working was quite a heavy lift.
I spent the better part of a day struggling through errors and tweaking the provisioning script just to get the basic Nginx/PHP/WordPress/MySQL environment up and running, then another day struggling with Solr and Bower. With only a handful of developers on the project and not much changing in the basic environment, I’m not sure I would use Vagrant again. I think it would have been easier just to install everything I needed individually.
I can see the benefits of Vagrant for large teams and long-term projects, but for my purposes I could have just as well used the default Apache and PHP install that came with my Mac, cloned the Git repositories, added MySQL and Sass/Compass, and called it done – although Solr would still be trouble. It would likely run faster as well. However, the confidence that comes from having a VM containing everything I need, with little, if any, external dependencies, is pretty cool.
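For anyone who hasn’t seen one, the Vagrant side of this is driven by a single Vagrantfile. A minimal sketch follows; the box name and script path are illustrative, not our actual configuration:

```ruby
# Vagrantfile – defines a reproducible VM for the project.
Vagrant.configure("2") do |config|
  # Base box; "hashicorp/precise64" is a commonly used Ubuntu 12.04 image.
  config.vm.box = "hashicorp/precise64"

  # Forward the guest webserver port so the site is reachable at localhost:8080.
  config.vm.network "forwarded_port", guest: 80, host: 8080

  # The provisioning script is where the heavy lifting (and in my case,
  # most of the struggling) happens: installing Nginx, PHP, MySQL and so on.
  config.vm.provision "shell", path: "provision.sh"
end
```

From there, `vagrant up` builds the machine and `vagrant ssh` drops you into it – which is delightful once the provisioning script actually works.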
Along those lines I’m a big fan of Bower. Even though we didn’t make much use of it past the initial installation of everything, it was a breeze to set up and seems easy enough to maintain.
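For the uninitiated, Bower’s entire configuration is one bower.json file in the project root. A sketch, with illustrative package names and versions rather than our real manifest:

```json
{
  "name": "example-site",
  "dependencies": {
    "jquery": "~1.11.0",
    "normalize-css": "~3.0.0"
  }
}
```

A single `bower install` then pulls every front-end dependency into bower_components/, which beats downloading and unzipping libraries by hand the way I used to.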
So after a couple of days banging my head against my desk I was all set up. In the grand scheme of things that is not a long time, but given the commonality of my computer (a MacBook Pro running OS X 10.9) and the software I was required to install, it shouldn’t have taken more than half a day.
Once I finally got my local machine up and running with a full copy of the site, I was ready to jump in and start coding. However, getting a handle on the site architecture and how functions are called and routed through the code was far more complicated than I could have predicted.
Note: Please let me know if I am wrong about this!
The first tasks I tackled were small, isolated items, such as minor interactive components on specific pages or forms, but within a week I was handling more complicated stuff, and within three weeks I had mastered the code, including all of the weird corners that crop up on a site this large. I feel pretty good about this, and although most of the preliminary work (always the hardest) was already done, I learned a ton. Did I mention there are over 15,000 pages on the site and it pulls from and pushes to three different backend systems? And not only is the site highly customizable by the user, we also have a staff of writers and marketing folks publishing new content daily. It is a big site.
And it is worth mentioning that Sass is awesome. CSS was always something I hated to work with, but Sass made that chore far more tolerable. I could never go back to hand coding a CSS file.
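A quick illustration of why, using hypothetical styles rather than anything from our actual stylesheet: variables, mixins and nesting eliminate most of the repetition that makes hand-written CSS such a grind.

```scss
// _example.scss – an illustrative snippet, not from the production stylesheet.
$brand-color: #1a6b9a;      // change the brand color once, it updates everywhere

@mixin rounded($radius: 4px) {
  border-radius: $radius;   // Compass also provides vendor-prefixing mixins
}

.card {
  border: 1px solid $brand-color;
  @include rounded(6px);

  // Nesting keeps related rules together instead of repeating ".card"
  .title {
    color: $brand-color;
    font-weight: bold;
  }
}
```

Compass compiles this down to plain CSS, so the browser never knows the difference – only the developer’s sanity does.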
Gotchas and Warnings
Coding is coding, and building a website, either through dynamic pages or one page at a time, is roughly the same as it was back in the mid-1990s, but the requirement to design and build for multiple resolutions, from phones to desktops, forced me to change some old habits.
A fully responsive site is something magical to behold but, of course, takes some phenomenal designers and front-end engineers to pull off well. Unfortunately that means the days of “I’ll just throw a table in here for this form for now” are gone. I’ve long known tables aren’t a best practice for websites, but sometimes you just need to get something in there to move forward. It was never a good idea, but it is a really bad idea now.
Back in the day, WYSIWYG editors for non-technical users always held great promise but never delivered. They still don’t, and it is almost worse now – the complexity of modern CSS means a WYSIWYG editor can do little more than paragraph breaks, bold, italics and bullets with any measure of confidence. I wish someone would solve this problem once and for all.
Along those same lines, Internet Explorer was always the most difficult browser to get to work dependably, and it still is. It blows my mind that while almost a quarter of the traffic to privacyassociation.org comes from phones/tablets, we still get ~20% of our visits from outdated versions of Internet Explorer (that is, IE 8 and below). If you count IE 9 and 10 in that category, since the current version is 11, that accounts for most of our Internet Explorer traffic. Like WYSIWYG editors, this is a problem that needs to be solved. What IT departments, in 2014, amid a rash of known vulnerabilities, still force users to run outdated browsers? It is pathetic.
The funny thing about this whole experience, and why I want to document it, is that now that I am comfortable with all of the constituent components and the architecture of the site, I can move around, fix things and build things fairly easily. It seems natural and intuitive. (Just today I added date range filters to the search with only a couple of hours of work.) But the truth is that this website, like other modern websites, is a complex beast. It rivals the complexity of some of the Windows applications I built back in the day on the then-new .NET platform. Web development, for even simple sites, when done right, is the real deal.