Author: Matthew

  • CORS woes on Heroku

    After spending the past 4 hours attempting to solve what boiled down to a rather simple problem, I figured I’d better blog about it to save someone else the time and effort.

    If you’ve been leveraging Passenger’s new --nginx-config-template command line option to add CORS headers to static assets served from a Rails app hosted on Heroku, and the CORS headers recently disappeared under mysterious circumstances… read on.

    I’ve been using the method described here to add CORS headers to custom fonts served from a Heroku-hosted Rails app that’s proxied by Nginx which handles serving static files. I recently updated to Rails 4.2.2 and suddenly, my custom fonts (.woff and .woff2 files) no longer had CORS headers on them.

    After the aforementioned hours spent scratching my head, I discovered that the latest version of the sprockets gem is generating asset digests that are 64 chars in length, where previously they had been 32. Nginx’s default regexp for identifying requests for static assets assumes the digest will be 32 chars long, like so:

    # Rails asset pipeline support.
    location ~ "^/assets/.+-[0-9a-f]{32}\..+" {
      error_page 490 = @static_asset;
      error_page 491 = @dynamic_request;
      recursive_error_pages on;
    
      if (-f $request_filename) {
        return 490;
      }
      if (!-f $request_filename) {
        return 491;
      }
    }
    

    Changing the regexp to recognize digests that are 64 chars in length immediately solved the problem:

    location ~ "^/assets/.+-[0-9a-f]{64}\..+" {
       ...
    }
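    A quick way to verify the change without redeploying is to test the pattern in plain Ruby (the asset paths below are made-up examples):

    ```ruby
    # The updated Nginx pattern, expressed as a Ruby regexp for a sanity check.
    pattern = %r{^/assets/.+-[0-9a-f]{64}\..+}

    new_style = "/assets/myfont-#{'b' * 64}.woff2" # new 64-char digest
    old_style = "/assets/myfont-#{'a' * 32}.woff2" # old 32-char digest

    puts pattern.match?(new_style) # true
    puts pattern.match?(old_style) # false
    ```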
    

    I had to laugh that something so silly cost me a good chunk of my Saturday to debug. But at least it’s working now. My statically served custom fonts have the correct CORS headers, and Chrome and Firefox are happy again.

  • Faster PDFs with wicked_pdf and delayed_job (part 3)

    In part 2 we coded our PDF generator as a background job. But the PDF is still being stored on the local file system. Let’s store it in S3 instead and give our users a URL so they can download it.

    First let’s add the AWS SDK gem to our Gemfile:

    gem "aws-sdk", "~> 1.0" # the code below uses the v1 (AWS::) API
    

    Let’s define environment variables for our AWS credentials:

    AWS_ACCESS_KEY_ID=abc
    AWS_SECRET_ACCESS_KEY=secret
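    The v1 aws-sdk gem should pick these variables up from the environment automatically, but if you prefer to be explicit, you can pass them in via an initializer (the file name here is my choice):

    ```ruby
    # config/initializers/aws.rb
    # Hand the credentials from the environment to aws-sdk v1 explicitly.
    AWS.config(
      access_key_id:     ENV["AWS_ACCESS_KEY_ID"],
      secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"]
    )
    ```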
    

    Next we’ll modify our background job to connect to S3 and upload our PDF file instead of saving it to the local file system:

    class PdfJob < ActiveJob::Base
      def perform(html)
        pdf = WickedPdf.new.pdf_from_string(html)
        s3 = AWS::S3.new
        bucket = s3.buckets['my-bucket'] # replace with your bucket name
        bucket.objects['output.pdf'].write(pdf)
      end
    end
    

    Nice! But how do we enable our users to download the file? S3 has several options for this. One option would be to make the bucket publicly accessible. The downside to this approach is that it would allow anyone to download any PDFs stored in the bucket, regardless of who originally uploaded them. Depending on what kind of data is being included in the PDFs, this could be a bad idea.

    A better option is to generate a temporary URL. This URL can be given to a user so they can download the file, but the URL is only usable for the period of time we specify. This reduces the likelihood that the PDF will be exposed publicly. Here’s how it’s done:

    class PdfJob < ActiveJob::Base
      def perform(html)
        # ...
        obj = bucket.objects['output.pdf'].write(pdf)
        url = obj.url_for(:get, expires: 3.minutes.from_now).to_s
      end
    end
    

    Looks good. But how do we get this URL back to the user? The background job is asynchronous so it’s not like we can generate the PDF and return the string to the user all in the same HTTP request.

    A simple approach is to write the URL back into the database. Let’s pass the user into the job and update that user record with the URL (this assumes a pdf_url column exists on the users table):

    class PdfJob < ActiveJob::Base
      def perform(html, user)
        # ...
        url = obj.url_for(:get, expires: 3.minutes.from_now).to_s
        user.update_attribute(:pdf_url, url)
      end
    end
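    If the pdf_url column doesn’t exist yet, a minimal migration adds it (the migration name is my choice):

    ```ruby
    class AddPdfUrlToUsers < ActiveRecord::Migration
      def change
        add_column :users, :pdf_url, :string
      end
    end
    ```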
    

    Now that the URL is available in the database, we can display it on the user’s profile page.

    If we want to get even fancier we can write some JavaScript that’s executed immediately after the user requests a PDF. This script would periodically poll an Ajax endpoint in our app to determine if the URL has been written to the users table yet. When it detects the URL, it would redirect the user to the URL. This would make the PDF generation process seamless from the user’s perspective.

    An example in jQuery might look something like this:

    function poll() {
      $.get("http://www.our-app.com/users/123/pdf_url", function(data) {
        if (data.length > 0) {
          window.location = data;
        } else {
          setTimeout(poll, 2000);
        }
      });
    }
    

    Our controller action might look like this:

    class UsersController < ApplicationController
      def pdf_url
        user = User.find(params[:id])
        render text: user.pdf_url
      end
    end
    

    And there you have it. I hope this gave you a good idea of just how easy it can be to generate PDFs in a background job. If your site isn’t getting much traffic, it’s probably not worth going this route. But if it’s a popular site (or you expect it to be one day) it would be well worth investing the time to background this process. It’ll go a long way towards keeping your HTTP response times short, and your app will feel much snappier as a result.

  • Faster PDFs with wicked_pdf and delayed_job (part 2)

    In part 1 we learned why backgrounding is important. Now let’s dive into some code.

    First things first. Add wicked_pdf and delayed_job to your Gemfile:

    gem "wicked_pdf"
    gem "delayed_job"
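    One wrinkle worth noting: the jobs in this series are written against ActiveJob, and delayed_job needs a backend to store queued jobs in. Assuming the usual ActiveRecord setup (gem and setting names as of Rails 4.2), that means:

    ```ruby
    # Gemfile -- delayed_job stores jobs in the database via this backend:
    gem "delayed_job_active_record"

    # config/application.rb -- route ActiveJob through delayed_job:
    config.active_job.queue_adapter = :delayed_job
    ```

    After bundling, running `rails generate delayed_job:active_record` followed by `rake db:migrate` creates the delayed_jobs table.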
    

    Now we can generate a PDF from inside our Rails app with this simple command:

    html = "<strong>Hello world!</strong>"
    pdf = WickedPdf.new.pdf_from_string(html)
    IO.write("output.pdf", pdf)
    

    You’ll notice that the more complex the HTML, the longer it takes wicked_pdf to run. That’s exactly why it’s important to run this process as a background job instead of in a web server process. A complex PDF with embedded images can take several seconds to render. That translates into several seconds of unavailability for the web process handling that particular request.

    Let’s move this code into a background job:

    class PdfJob < ActiveJob::Base
      def perform
        html = "<strong>Hello world!</strong>"
        pdf = WickedPdf.new.pdf_from_string(html)
        IO.write("output.pdf", pdf)
      end
    end
    

    Now we can queue the background job from a Rails controller like this:

    class PdfController < ApplicationController
      def generate_pdf
        PdfJob.perform_later
      end
    end
    

    The only problem is, our job isn’t doing anything particularly interesting yet. The HTML is statically defined and we’re writing out to the same file each time the job runs. Let’s make this more dynamic.

    First, let’s consider the HTML we want to generate. In a Rails app, the controller is generally responsible for rendering HTML from a given ERB template using a specific layout. There are ways to render ERB templates outside controllers, but they tend to be messy and unwieldy. In this situation, it’s perfectly reasonable to render the HTML in the controller and pass it along when we queue a job:

    class PdfController < ApplicationController
      def generate_pdf
        html = render_to_string template: "my_pdf"
        PdfJob.perform_later(html)
      end
    end
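    As an aside, if you ever do need HTML outside a controller, Ruby’s standard-library ERB can render a simple template directly; it’s the lack of Rails helpers, partials, and layouts that makes this messy for real views. A minimal sketch:

    ```ruby
    require "erb"

    # Render a template string with a local variable. Real Rails views need
    # helpers, partials, and layouts, which is why render_to_string in the
    # controller stays the cleaner option.
    name = "world"
    template = ERB.new("<strong>Hello <%= name %>!</strong>")
    html = template.result(binding)
    puts html # => <strong>Hello world!</strong>
    ```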
    

    This assumes an ERB template named “my_pdf.erb” exists and contains the HTML we want to convert into a PDF. Our method definition within our background job then becomes:

    class PdfJob < ActiveJob::Base
      def perform(html)
        pdf = WickedPdf.new.pdf_from_string(html)
        IO.write("output.pdf", pdf)
      end
    end
    

    delayed_job actually persists the HTML passed to the job in a database table so the job can retrieve the HTML when it gets executed. Since the job is executed asynchronously, the HTML has to be stored somewhere temporarily.

    So far, so good. The job will generate a PDF based on the HTML rendered in the controller. But how do we return this PDF back to the user when it’s ready? It turns out there are a variety of ways to do this. Saving the PDF to the file system in a publicly accessible folder is always an option. But why consume precious storage space on our own server when we can just upload to Amazon S3 instead for a few fractions of a cent?

    What’s nice about S3 is that it can be configured to automatically delete PDFs within a bucket after 24 hours. Furthermore, we can generate a temporary URL to allow a user to download a PDF directly from the S3 bucket. This temporary URL expires after a given period of time, greatly reducing the chance that a third party might access sensitive information.
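    As a sketch, that automatic deletion is configured with an S3 lifecycle rule. In the XML form S3 accepts, it looks roughly like this (the rule ID and pdfs/ prefix are placeholders of my choosing; note that expiration is specified in whole days, so one day is the shortest window):

    ```xml
    <LifecycleConfiguration>
      <Rule>
        <ID>expire-generated-pdfs</ID>
        <Prefix>pdfs/</Prefix>
        <Status>Enabled</Status>
        <Expiration>
          <Days>1</Days>
        </Expiration>
      </Rule>
    </LifecycleConfiguration>
    ```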

    Next week I’ll demonstrate how to integrate S3 into our background job using the AWS SDK.

  • Faster PDFs with wicked_pdf and delayed_job (part 1)

    What do you get when you combine the slick PDF generation capabilities of wicked_pdf with the elegance and efficiency of delayed_job? A high performance way to convert HTML pages into beautiful PDF documents.

    I’ve been leveraging wicked_pdf to generate high school transcripts from my SaaS app, Teascript, since 2009. Prior to that I had been using Prawn which ultimately proved to lack the flexibility I needed to produce beautiful PDFs.

    wicked_pdf converts HTML pages into PDF documents using wkhtmltopdf, a command line tool built on WebKit, the engine behind Apple’s Safari browser (among others). For the past few years, Teascript produced PDFs without any kind of backgrounding in place. This meant that if someone’s PDF took an unusually long time to generate, they were tying up a web server process for that entire duration.

    If multiple users generated PDFs simultaneously, it might prevent other visitors from accessing the site. Not good. Furthermore, if the PDF generation process exceeded the web server’s default timeout, the user might not ever get the PDF, just an error page.

    Any time your web app integrates with a third party API or shells out to a system process, it’s a viable candidate for backgrounding. delayed_job to the rescue. By offloading long-running work onto background workers, we free our web processes to do what they’re best at: responding to requests quickly.

    Backgrounding isn’t a silver bullet, though. It introduces added complexity into the app, making it more vulnerable to failures. This requires writing additional code to handle these failure scenarios gracefully. But at the cost of this added complexity, we can ensure our web server stays fast and lean while our users still get the pretty PDF they want.

    Next week we’ll dive into some actual code. I’ll demonstrate how to integrate wicked_pdf with delayed_job and hook the entire thing up to your Rails app. Don’t touch that remote.

  • Unicorn vs. Passenger on Heroku

    I’ve been hosting my flagship SaaS app on Heroku since 2008. Overall it’s been a stable, if a bit overpriced, platform. Over the past year, however, I’ve been experiencing mysterious performance problems. The app runs fine for several weeks. Then suddenly I begin receiving exception reports about certain methods not being found on certain objects. Restarting my dynos would fix the problem for a few days or a few weeks, but eventually I would start getting errors again. It definitely felt like some sort of memory issue.

    After profiling the app and discovering nothing, I installed the Librato dashboard, which offers a basic line graph of memory usage across dynos. I began noticing a correlation between this line climbing above 200 MB and my app throwing errors.

    Each dyno on Heroku theoretically has 512 MB of memory. I was running my app on Unicorn with 2 processes per dyno. I wouldn’t expect problems unless each process exceeded 256 MB. I was confused why I was seeing problems at just 200 MB of usage. True, the line would continue creeping up if I didn’t restart my dynos, and would eventually exceed 256 MB which would trigger an auto-restart of the dynos. But this took a long time to happen, and in the meantime my visitors were experiencing a slower app and/or outright errors.

    I spent several days attempting to identify where the app was leaking memory. Why did the memory usage line continue climbing? I tried various techniques to identify the problem but was unable to reproduce the leak on my local system. Eventually I decided a different tactic was necessary. Heroku has been recommending Puma as an alternative to Unicorn for a while now, so my first thought was to switch to Puma, which uses threads for concurrency instead of processes. However, my app runs under MRI, not JRuby, so I wouldn’t necessarily be able to take advantage of those performance gains. Instead I opted for Passenger, which now runs on Heroku.

    The results have been beyond what I expected. My memory usage line is now perfectly straight. No increase over time. No eventual errors and dyno restarts due to overconsumption. What Passenger is doing under the covers is spinning up new processes during high traffic periods and killing them during low traffic periods. My app has been running for 3 months now and I haven’t had to restart any of my dynos, nor have I encountered any performance issues with the app. Success!

    I can think of two explanations as to why Passenger fixed these problems: first, perhaps Unicorn itself was causing my app to leak memory in a strange way. Second, and more likely, Passenger’s built-in ability to spin up processes on demand is keeping memory leakage to a minimum due to processes regularly being refreshed. Regardless of which explanation is correct, I’m happy the app is no longer throwing errors at inconvenient times. Most importantly, my users are having a far more consistent experience. If they’re happy, I’m happy.

  • Moving from Gmail to FastMail

    It was January, 2005. Google had launched its invitation-only beta release of Gmail just a few months earlier. The initial storage capacity of 1 GB was dramatic, with its closest competitors offering an anemic 15 or 20 MB. My beta invite had finally arrived and I was in the process of signing up for an account. The excitement was palpable: “It’s email… but by Google! 1 GB of space… who could possibly use that much? And the web interface is so fast!”

    Jump ahead ten years to January, 2015 and you’ll find me ditching my Gmail account in favor of FastMail, a move that has been long overdue. “But why ditch Gmail?” you may ask. I have my reasons.

    First and most importantly, I’ve come to the conclusion that I want my email to be reasonably private. I just don’t like the idea of Google scanning my email and pulling out little bits of information about my personal life and buying habits. The speed, storage space, and features that Gmail offered used to offset the privacy disadvantage in my mind, but they don’t any longer.

    Gmail used to be fast. Really fast. It’s not anymore. Don’t get me wrong, it’s still fairly zippy, but it’s definitely slowed down over the years as the interface has grown increasingly complex and bloated. I’ve found FastMail to be true to its name: it’s just as fast as Gmail. And the web interface is simple and non-bloated. I like simple.

    Another change in Gmail that really bothered me was the introduction of the social tabs. I know these tabs can be disabled now, but I don’t like the way they were initially forced on us. And I definitely don’t like the impact they had on legitimate email marketing. The average user isn’t going to notice their email is suddenly getting filtered into separate tabs, much less figure out how to turn them off.

    When it comes down to it, I’d rather pay FastMail for storage space, reliability, and speed equal to Gmail’s. I no longer have to deal with the ads, the privacy violations, or the sudden feature changes. Don’t fool yourself: you’re already paying for a free service like Gmail, just not with cash. You yourself are the payment: a consumer to be analyzed and sold to. And Google is very good at doing just that.

    And hey, Marco Arment recommends FastMail so it’s got to be good, right? Here’s what he says about the benefit of having an email address ending in a domain name that you control:

    For something as important as email, I’ve never trusted everything to a proprietary provider. My email address has never ended in someone else’s domain name, and has never been hosted in any way that would preclude me from easily switching to another provider.

    The transition to FastMail was very smooth. It was just a matter of modifying a couple of DNS records and using FastMail’s excellent IMAP import tool to transfer a decade’s worth of email from Gmail (this did take a few hours). I’m still able to check my FastMail account from my iOS devices, and I use their web interface on my desktop through a Fluid app.

    If you’re looking for a new email provider comparable to Gmail, I can recommend FastMail without hesitation.

  • Fix Bluetooth in OS X Yosemite

    I love OS X. It’s an incredibly reliable operating system and it’s usually a joy to operate. Unfortunately, since upgrading from OS X Mavericks to Yosemite I had been plagued with Bluetooth connectivity problems:

    • My Apple keyboard would randomly disconnect from the computer. Once this happened, it became impossible to reconnect it again without restarting. Turning the keyboard off and on again wouldn’t fix it.
    • My Magic Mouse’s tracking motion would randomly become jerky and stuttering. This would last for 2 or 3 minutes and then return to normal. Turning the mouse off and on again wouldn’t fix it.
    • Devices that I hadn’t added would show up in Bluetooth Preferences as permanently “remembered.” Whenever I tried to “forget” these devices and closed the Preferences window, they would reappear the next time I opened Bluetooth Preferences.
    • My mouse and keyboard also showed up in Preferences and could not be “forgotten.” Same as above: as soon as I removed them and closed Preferences, they would reappear when I opened Preferences again.

    These problems were incredibly frustrating. I did a lot of research trying to determine how best to resolve them. None of the solutions I found worked. These included:

    • Replacing the batteries in the Bluetooth device
    • Disabling and re-enabling Bluetooth
    • Clearing the PRAM
    • Resetting the SMC
    • Restarting the computer (this temporarily fixed the problems but they always came back)

    However, I believe I’ve finally fixed these strange connectivity problems for good. A couple of days ago I moved the following files to my Desktop and restarted:

    • /Library/Preferences/com.apple.Bluetooth.plist*
    • ~/Library/Preferences/com.apple.Bluetooth.plist*
    • ~/Library/Preferences/ByHost/com.apple.Bluetooth.*

    It’s important to move (not copy) the files. This forces Yosemite to re-create the files on reboot. (I could have just deleted the files but I wanted to keep them around as backups in case something went wrong.) Since doing this, my Bluetooth devices have been happily connecting and disconnecting appropriately and I have no more stuck devices in my Preferences.
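    For reference, the moves look like this in Terminal (the first file lives in the system-wide /Library, so it may require sudo):

    ```shell
    # Move (don't copy) the Bluetooth preference files to the Desktop so
    # Yosemite re-creates them on the next reboot.
    sudo mv /Library/Preferences/com.apple.Bluetooth.plist* ~/Desktop/
    mv ~/Library/Preferences/com.apple.Bluetooth.plist* ~/Desktop/
    mv ~/Library/Preferences/ByHost/com.apple.Bluetooth.* ~/Desktop/
    ```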

  • Why I’ll never buy from Virgin Mobile again

    Today’s post is a bit self-serving and for that I apologize, but I’m hoping that telling my story publicly will accomplish two things. First, it will warn my readers that they do business with Virgin Mobile at their own risk. Second, and it’s a long shot, but it might provoke a response from VM and they might return the money they’ve owed me for years. A very long shot, I realize.

    “Trust, but verify.” I learned this lesson in early 2011. I have nothing against Virgin as a brand or a company. I admire Richard Branson and all he’s accomplished. He’s a remarkable example of a self-made entrepreneur. However, I can say without hyperbole that he has some real dolts working for him at Virgin Mobile.

    In December of 2010 I was looking for a mi-fi provider. I didn’t have an iPhone to tether with yet and needed an option to connect to the Internet while on the road. Virgin Mobile seemed to have the best deal at $130 plus S&H for a MiFi 2200. They also touted a 30 day money back guarantee which gave me confidence in making the purchase.

    When the device showed up, I quickly discovered that the coverage was not satisfactory for my area. I would frequently get dropped connections from home, and when out and about coverage was even spottier. So I called VM on January 5 to request service cancellation and get instructions on how to return the device for a refund. The rep I spoke with put me on hold for 20 minutes then said she would call me back later that night. She never did. Thus began 6 months of pure and utter frustration.

    • January 7: called a second time to find out what happened. Rep said they were sending me a mailing label to return the device. I waited over a week for the label to arrive but it never did.
    • January 16: called a third time to ask where the label was. The rep wanted to transfer me to the “mi-fi group” (first I had heard of this) but actually just dumped me back out to the automated call menu.
    • January 17: called a fourth time and the rep finally gave me an address and RMA for the device. I shipped the device back the same day via UPS and included a note in the package explaining in detail about what had happened.
    • January 19: UPS reports the package was delivered.

    Between January and June 2011 I called Virgin Mobile a total of 6 times to ask why my refund had not been processed. Each time I was told that it would be processed within a week. Each time the refund failed to appear.

    I switched to a different tactic and opened a service ticket through their web site. Here’s their response:

    We do understand how frustrating could be not having the answers when you need them. Unfortunately, there are procedures we have to follow and your issue is under review at this time. All we are asking you is for a little time in order for us to resolve the issue at your satisfaction. Again, we are deeply sorry for the delays, but we need to wait for the investigation that we have opened regarding the refund of your device.

    We have already confirmed your device has been returned and it might take up to 5 business days for us to have a resolution.

    So they admit they received the package. But for some reason, issuing a refund is a challenge for these people. Subsequent service tickets were equally useless. My refund was always “in process” or “under review.”

    Eventually, they claimed they had mailed me a refund check. The check never arrived. Subsequent customer service requests yielded no help whatsoever. Refusing to explain why I hadn’t received a check yet, they instead began telling me they couldn’t help me and that I had to contact the “Broadband department” for a refund. Whatever. I give up.

    By the time August 2011 rolled around I decided it was not worth the time and effort to continue pursuing this. So Virgin Mobile kept my money and I’ve heard nothing from them since. Overall, it’s a frustrating and disappointing experience when a company steals your money. Had I anticipated what was going to happen I would have kept the device and sold it myself. I wouldn’t have gotten all my money back, but I would have gotten something. As it happened, Virgin Mobile ended up with both the device and my money.

    Be warned. When it comes to Virgin Mobile, advice from The Princess Bride is appropriate: “Get used to disappointment.”

  • Pricing a SaaS app is hard

    Pricing a SaaS app is hard. Really hard. My flagship product Teascript launched with a subscription-based pricing model in 2007. This was primarily due to a limitation in the payment system I was integrating with. I did a bit of “market research” before settling on $19 per year for unlimited use of the app. (And by “market research” I mean that I Googled some keywords related to my app to find competitors and learn what they were charging.)

    This pricing stuck for several years but I eventually realized the amount of value I was providing through the app did not match the price tag. As I continued building new features, the value was increasing and I needed to change my pricing accordingly. I also decided to move from an annual charge to a monthly charge, mostly because I wanted a shorter feedback loop to measure churn.

    I switched all subscriptions to $5 per month and also put a cap on app usage (which in hindsight, I should have been doing from the start but that’s a topic for another post). Surprisingly, this actually increased my sales even though the effective annual rate had more than tripled to $60. Why was this?

    I was scratching my head initially until I realized many of my users were signing up for one or two months and then canceling their subscriptions. So I had actually increased churn by moving from an annual to a monthly charge. But that told me something about how my customers were using the app. Teascript helps homeschoolers and private schools build high school transcripts for their students. This is something that’s typically only done once in a student’s lifetime. Therefore, even in a family with 3 or 4 kids, a parent is only going to be using the app for a few months at a time per student, then they won’t have any further need for it.

    This leads me to believe that moving to a fixed pricing model may be the right approach. Recently, I’ve been experimenting with various metrics to try to measure how much money I make off a typical subscriber. If most of my customers only remain subscribed for 3 or 4 months, that’s $15 to $20 of revenue. If I had instead been charging a fixed price of $39 (a price point comparable to most offline high school transcript kits) then I would have nearly doubled my revenue.

    I still haven’t found a reliable way to determine the lifetime value of a customer, though. I’ve been experimenting with various Stripe metrics providers but haven’t found anything that calculates metrics based on the past 7 years of payment data I have in Stripe (everything I’ve found only calculates metrics going forward). When I do figure this out, I’ll be sharing the results here. Stay tuned.

    In conclusion, did I mention pricing is hard? There are so many different ways to price an app. It’s hard to know ahead of time what will work for any given app. This is where A/B testing and customer feedback can be helpful. Even with that additional information, though, I feel like it’s something that could take a lifetime to master. I’m well on my way, but I still have a lot to learn.

    Have you run into challenges pricing a SaaS app? Share your story in the comments.

  • Nomadic programming (part 2)

    It’s time to revisit nomadic programming. Read part 1 to get caught up.

    nomad [noh-mad]: (1) a member of a people or tribe that has no permanent abode but moves about from place to place, usually seasonally and often following a traditional route or circuit according to the state of the pasturage or food supply. (2) any wanderer; itinerant.

    As defined in part 1, a nomad is a freelancer who spends the day roaming between various wi-fi hotspots instead of working from home. This isn’t just about hanging out at a coffee shop like a hipster. This is about getting out of the house and into a more stimulating environment, creating opportunity for networking, and yes, enjoying some delicious food and drink in the process.

    So now that you’re on board with the concept, what’s the actual procedure for being a nomad? I’ve been nomading for 8 years and have picked up a few tips and tricks that I’ve found maximize enjoyment and productivity. Follow these guidelines for nomading success.

    What to do

    • Bring a power splitter. Finding outlets is the perennial problem of the nomadic programmer. Most cafés and coffee shops have only a handful of outlets available. Instead of having to arrive early to snag one, bring a power splitter with you and politely ask to throw it on an outlet that’s already in use. If you get a big enough splitter, you can even offer power to fellow nomads who weren’t as forward-thinking as you were. This highly portable splitter is one of my favorites.
    • Bring headphones. Some people enjoy the noise at coffee shops, true. Even if you’re one of those people, it can be helpful to have a pair of headphones on you if the noise becomes too much, or if you need to watch a video or listen to a podcast. If your headphones have a boom mic, so much the better. It’s practically impossible to participate in a conference call in the midst of heavy background noise without a headset mic. I’ve used this model from Logitech for years. It’s light, inexpensive, and works well.
    • Tethering means freedom. Wi-fi hotspots are ubiquitous these days, but with that ubiquity comes increased unreliability. Slow wi-fi is the bane of the productive freelancer. That’s why you should always have a backup. Tethering to your iPhone, iPad, or Android device is the equivalent of “wi-fi insurance.” It’s a relatively inexpensive way to ensure you’ll always be able to get online, even when the hotspot at Starbucks is being rebooted. It also opens up a world of new nomading locations. I once ran a conference call with a client from beside a beautiful golf course. That wouldn’t have been possible if I hadn’t brought my own wi-fi.
    • Carry business cards. One huge benefit of nomading is the opportunity to meet and network with people. It’s amazing how frequently this happens. Don’t get caught without a stack of business cards. You need something to hand out to people you meet so they can follow up with you later. I actually landed a freelance job from someone I met at Bruegger’s once.
    • Bring a water bottle. Most cafes and coffee shops offer water, but the cups are usually tiny. Purchasing bottled water is always an option, but staff are usually happy to refill your bottle for you. I like these stainless steel bottles for their size, durability, and tactile feel.
    • A wireless mouse can’t hurt. It’s nice having an alternative to the trackpad, especially if you’ll be nomading for more than a couple of hours.
    • Use a quality bag. It’s important to have something to carry your stuff in. Don’t cheap out here. A good bag will serve you for years. I like 5.11 packs. They don’t have a fancy padded pocket for your laptop, but they’re practically indestructible.

    What not to do

    • Don’t dress like a slob. It’s easy for us programmers to let our clothing choices slide into the gutter. When we’re nomading, though, we’re out in public. We’ll be meeting new people. Some of those people might be potential clients. So it’s important that our dress reflect our professionalism. I’m not saying you need to wear a tux to Starbucks, but you should probably reserve the ratty jeans and stained T-shirt for home.
    • No freeloading. It’s incredibly inconsiderate to park yourself at an establishment, use their wi-fi, and not buy anything. Don’t do it.
    • Don’t ignore the owner and staff. Along those same lines, building good relationships with the business owner and staff can be very rewarding. When you become a regular customer, leave good tips, and clean up after yourself, the staff will remember and you’ll get better service as a result (and even some freebies at times).
    • Make healthy choices. Modern America is sedentary. As programmers, we’re likely more sedentary than the average American. That’s why it’s critical to make healthy choices while we’re out and about. Pass on the morning bagel or doughnut and enjoy some bacon and eggs instead. You don’t need that soda; unsweetened tea has far fewer calories and won’t trigger an afternoon crash. And try to get out for a brisk 20 minute walk at some point.
    • Security matters. Whenever you’re using public wi-fi you’re taking a risk. That risk can be mitigated by using a VPN or, better yet, by always tethering to your own wi-fi connection. Portable wi-fi hotspots are inexpensive and provide an extra layer of protection.
    • Avoid peak times. Nothing is worse than trying to perform an emergency deploy to a production web server during the lunch rush at Moe’s. A technique I’ve found helpful is to hit the popular lunch spots during mid-morning, hop over to a coffee shop during the lunch rush, and head back to the café during the afternoon lull. I despise overcrowded places, and this technique keeps my surroundings relatively calm throughout the day.

    Conclusion

    Pretty straightforward, right? Take what you find useful from these lists. Discard what doesn’t work for you. Come up with some best practices of your own. Half the fun of nomading is the adventure. Where will you end up? Who will you meet? You never know what each new day might bring. So get out there and start identifying your favorite places to nomad.

    If you’re not sure how to get started, consider joining a local programming Meetup like this one. Even user groups will occasionally host a social gathering at a restaurant or coffee shop. Just keep in mind that while nomading as a group can be fun, the real adventure resides in striking out on your own.

    Have you tried nomadic programming? Did you enjoy it or despise it? Do you have any tips or tricks that worked for you? Share your experience in the comments below.