You have all these engineers, now what?

It’s widely said that good people are hard to find, and in the tech industry, where a gap still exists between the skills needed and qualified candidates, this is even more pronounced. Companies compete with each other by offering higher salaries, more interesting work, upward mobility, and benefits that range from standard health insurance and a 401(k) to in-office beer and foosball (apparently all software engineers decided college should extend indefinitely). With demand this high, it can be challenging to find and successfully recruit the right talent to your company. But what happens when you finally do?

Managing 20, 30, 50+ engineers

At a certain point, the number of engineers you have can’t fit onto one team. Administrative burden, communication overhead, and collisions between people’s work become too difficult and lead to frustration and inefficient development. This is easy to spot: as you add more people, even after the ramp-up period ends, you’re slower than you were before. That feeling of moving fast with a small team is replaced with the heavy burden of making sure everyone is moving in the same direction, and it simply feels like you should be getting more done than you are.

It’s at this point that you begin breaking the group into teams. Some companies choose to split by role (putting all of the web developers on one team), others by business domain (for example, an Order-To-Cash business process), and still others choose a product or module to focus on. I’ve had a lot of conversations around which is best and I have my opinions, but one thing is certain: you must avoid isolationism.

Getting it wrong

Let’s talk about some of the problems that can happen when you’ve constructed these teams:

  • Fiefdoms: The team becomes the sole owners of whatever they are in charge of. They become territorial.
  • Tribal Knowledge: Because the team works so closely with this one focus, they stop documenting and knowledge simply exists within the group.
  • Echo Chamber: Decisions and approaches to tackling problems don’t expand outside of the group, and yet the group continues to believe they are always coming up with the best way to do it.
  • Slowed Growth: A lack of new ideas slows growth of the product as well as the career development of those working in it.
  • Terrorists: This is a bit of dramatic wording, but one strong individual on the team may seize control of the direction of the entire team, and therefore the direction of the product or business. Worse still, the team and the business believe that losing this individual would be so detrimental and costly that there is nothing they can do (a fear cycle).
  • Production Support: The team can’t move quickly on project work and misses dates because they’re being pulled in too many directions, specifically production support and hotfixes.

Getting it right

In my opinion, there are a few simple fixes (at least conceptually) which can resolve most, if not all, of these problems.

Direction: All of us want to be successful, and most engineers I know want to know what the end goal is and then be allowed to determine the best way to get there. Keeping a pulse on how the teams are doing, as well as checking in with individuals about what they are actually working on (even reviewing code or discussing architecture), helps guide or refine direction as the team learns new information.

Empowerment: Software development has become more complex than ever. In most cases it takes teams to build great software, and for those teams to be successful, they need to be empowered to make decisions. Engineering teams should not only make technical decisions, they should also help shape the product or business. Don’t let them be mere order takers; allow your engineers to contribute great ideas and have this say.

Silver Bullet: Rotate. Yes, if there is one silver bullet to building highly functional teams, it’s to introduce a methodical rotation of fresh ideas, talent, skills, and personalities.

Team Assembly and Rotation

Here’s my go-to recipe for assembling teams, which requires a minimum of six engineers (though I usually prefer eight) and can scale as large as you like. For this example, I’ll assume the following: we’re a company which offers products (internal or external) that are consumable via web and mobile, and we prefer to build centralized APIs for our own applications as well as third-party integrations.

My technical role requirements are that we need a frontend engineer (web), a mobile engineer, and a backend engineer; I also believe strongly in at least one quality engineer per team.

Feature Team 1: 1 web, 1 mobile, 1 API, 1 QA.

However, I do not want this feature team to be distracted by production support or ad-hoc small feature requests while they are trying to get the big project work done. So I immediately add another team to focus specifically on support (questions, hotfixes, small features), and I model it with the exact same roles as the feature team.

Support: 1 web, 1 mobile, 1 API, 1 QA.

We’re now at eight individuals, all of whom perform a specific role on the team, keeping them specialized and fast. However, if they run out of work to do (which almost never happens), they can always jump over and help in an area they are not expert in, giving them a chance to learn new technologies.

Imagine that you have a well-defined roadmap of all the projects you plan to do. Within each project, you create epics (small mini-projects with defined conclusions), and you assign the feature team to work on an epic. At the completion of each epic, you simply swap one member with the support team member who has a matching role (the frontend engineer on the feature team swaps with the frontend engineer on the support team). After the completion of the next epic, a different role swaps (the backend engineer, for example).
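
This rotation rule is mechanical enough to sketch in a few lines. The following is a hypothetical illustration (the role list and engineer names are mine, not a real roster): each epic, one role swaps between the two teams, and the swapping role cycles so everyone rotates over time.

```javascript
// Hypothetical sketch of the rotation rule: after each epic, the member
// in one role swaps between the Feature Team and the Support Team, and
// the role that swaps cycles so every role rotates over time.
const roles = ['web', 'mobile', 'api', 'qa'];

function rotate(featureTeam, supportTeam, epicNumber) {
  // Cycle through the roles: epic 0 swaps web, epic 1 mobile, and so on.
  const role = roles[epicNumber % roles.length];
  const swapped = featureTeam[role];
  featureTeam[role] = supportTeam[role];
  supportTeam[role] = swapped;
  return role;
}

// Example with made-up engineers:
const feature = { web: 'Ann', mobile: 'Ben', api: 'Cam', qa: 'Dee' };
const support = { web: 'Eli', mobile: 'Fay', api: 'Gus', qa: 'Hal' };
rotate(feature, support, 0); // Ann moves to support; Eli joins the feature team
```

The same function scales to more teams: rotate each feature team against support in turn, and individuals naturally migrate across the whole organization.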

As a business, let’s decide that we want to capitalize on more of the opportunities in our roadmap and do more than one epic at a time. This becomes simple, and with a known cost: we recruit a new feature team, which becomes indoctrinated into the rest of the product offerings through its rotation in and out of the support and feature teams. Individuals may start on Feature Team 1, move to Support, and then rotate into Feature Team 2.

The following diagram helps to illustrate the assembly of teams I’ve used successfully in the past.

What does this do?

By rotating a member who actively built the new epic into support, you put someone who truly understands the newly built project into supporting it. Even if bugs pop up in areas outside of this individual’s direct technical skillset, they will have enough knowledge of how it’s supposed to work to help guide the right technical skillset on the support team to resolve it.

At the same time, you’ve rotated someone off of support and allowed them to focus on creating something new, which helps reduce burnout among your support staff. This new team member can bring a fresh perspective into the next epic.

The team works closely with a small number of people at a time (maintaining focus and reducing administrative and communication burden), while also breaking down walls and establishing one large community as everyone continues to work with each other.

Let’s revisit our list of problems and see if we’ve solved them:

  • Fiefdoms: Teams get continuity because the majority stay together after each epic, yet rotation causes ownership to be democratized across the entire organization.
  • Tribal Knowledge: Since new members will need to be introduced to the project, documentation and consistency of approach becomes more important.
  • Echo Chamber: Fresh ideas are constantly being introduced into the teams and approaches are constructively challenged.
  • Slowed Growth: Both the product and individual team members are introduced to new ideas and varying levels of expertise, allowing for a sharing of knowledge on each project.
  • Terrorists: No one person has the ability to remain on a project indefinitely, which limits a multitude of risks for products and business processes.
  • Production Support: Support is more skilled and involved with the building of each project, while the feature teams are no longer distracted. Estimates become closer to reality.

Running the Microsoft .NET Stack on a Fresh MacBook Pro

Well, after parting ways with Flightdocs, one of the first things I needed to do was get my own laptop and get back to work. The days of spending $2,500–$3,000 on a nice machine under the generosity of the business were over, and I settled for a slightly smaller, less powerful, but still pricey little MacBook Pro for around $1,800. I suppose there are people in this world who dread setting up a new computer, but for me, it’s one of the best feelings. A brief pause of tranquility, and then the rush of excitement that comes with a fresh start and new possibilities.

Since 2005, I’ve worked with .NET, and though at times I’ve cursed Microsoft for so many things, I always come back to C#. Yes, I proudly proclaim myself a polyglot developer who loves new languages, but there is comfort and confidence in the familiar. So, with that in mind, let’s get set up to build enterprise-grade .NET software on our sleek little budget-friendly-not-so-friendly MacBook Pro!

Installation Disclaimer

Not all of the following installations are required, but these are my recommendations for getting set up to cover a variety of common development tasks.

Node.js and NPM

Package managers are fantastic; it’s strange to think that many of us hardly used them at all even a couple of years ago. I tend to use Node.js for a variety of web development tasks, but even if you’re not going down that route, having NPM is a great way to pull down web frontend libraries now that Bower is deprecated. Head to https://nodejs.org to download a fresh copy.


Xcode

Why Xcode when installing Visual Studio? First, Visual Studio for Mac uses many of the Xamarin components, which tie into the development tools for Xcode, creating a dependency. Second, you might as well sharpen those mobile skills, or at least gain sandbox knowledge of Swift/Objective-C, if you are a developer and own a Mac.

Microsoft Visual Studio for Mac

Look at that, I finally got to the part where we install Visual Studio. Head over and get the bits; there should be a Community Edition you can start with before shelling out any money. Also, many developers may not realize it, but Microsoft now offers monthly MSDN subscriptions which include licensing for the Professional/Enterprise versions of Visual Studio. Of course, I’m in favor of not spending any money at all if possible.

If you’re not familiar with Visual Studio, it’s a fairly rich (bulky) IDE, though the Mac version is quite a bit lighter than its Windows counterpart. In my opinion, it’s not as powerful or stable on Mac, but it has quite a bit of charm. Since we’re in the Visual Studio for Mac section of this post, let’s briefly talk about Visual Studio Code as well. I recommend also installing Visual Studio Code from https://code.visualstudio.com even if you don’t write in C# or another Microsoft-dominated language. It’s a solid code editor that rivals Sublime, Atom, or Brackets: lightweight, very stable, and extremely extensible. I frequently write my complex API code in the full-blown Visual Studio IDE while writing my web application code in Visual Studio Code. Personal preference, I’m sure, but I don’t think I’m alone in this combination.

At this point, you’ve got most of what you need to develop .NET applications on a Mac. However, if you actually want to persist any data in your application, you’ll likely need to set up a database.

It is pretty amazing how database offerings have changed in the past five or so years. You’ve got so many options on Linux/Unix-based operating systems, such as Postgres, MySQL, Mongo, and tons more. But if you’ve been working in MSSQL for the past decade-plus, you may be blown away to learn you can run MSSQL on your Mac as well! Now would be a good time to check outside your window and see if you actually spot pigs flying.

SQLite and MS SQL

SQLite is available to run locally on your machine, and it’s straightforward: if you’re using Entity Framework, simply point your connection string to a local file and initialize the database. If you want an IDE for accessing the data, I tried DB Browser for SQLite and it worked well for me.

Now, onto the really fun stuff: MSSQL on your Mac. You’ll need to install Docker and a specific image for SQL Server, which you can pull using the following command:

docker pull microsoft/mssql-server-linux

Once you’ve got Docker up and running and have pulled down this image, you’ll need to step through a bit of configuration, which Microsoft has documented nicely. I’ll wait while you spend the next 15 minutes working through it.

… Intermission …

Done? Fantastic. At this point, you should be able to debug a nice WebAPI through Visual Studio for Mac pointing to a full MSSQL database running on Docker, and call everything from the web application you’re editing in the lightweight Visual Studio Code. Did a few of you cringe at how many Microsoft products you used? Don’t stress! In a few years, the young new developers will start telling all of their Ruby and Python friends about this hot new open source language called C#.

Happy coding, my fellow evil-empire-turned-friendly-open-source-contributor friends.

Tech Talk: NBAA

Recently, I began preparing a session for the NBAA conference in Orlando, targeted at the aviation industry. I struggled to identify the needs of the audience, since they were a bit different from the audiences I usually have the opportunity to speak to. The following article is not the presentation I gave, but an early draft meant to introduce several technology concepts to the group and help them understand how these topics could improve their business processes and general operations. The primary topics included:

  • Electronic Data
  • Specialized Software (and SaaS)
  • The Cloud
  • Mobile
  • Security Tips

Though I ultimately took a slightly different direction with this material, I think it still has some value presented here.

Migrating from paper to electronic systems is a challenge. Technically, the data needs to be structured in a way that computer systems understand, but the real challenge comes from user adoption. Paper lets you write anything you want, lets you change workflow however you need in the moment, and is comfortable for workforces that are not yet at ease with computers.

However, the move to electronic data allows real-time validation that significantly reduces mistakes, gives us visibility into trends that may be occurring, and allows several people to access the information at the same time.

Take reporting aviation discrepancies, for example: at Flightdocs, we have seen a number of operators switch from paper-based write-ups to electronic ones. Images and video can be captured and attached to the discrepancy for later evaluation, and over time, we can begin to track trends in part failure or unexpected use.

As you begin to move your data to electronic systems, you may be tempted to move to Excel or similar general-use software. This is a great start, but it has many drawbacks. Data is typically not validated and doesn’t reflect the real constraints on the data, such as proper tolerances or required fields.

Collaboration becomes a real issue: emailing files back and forth is fraught with errors, and if the file lives on a network share, it can typically only be in use by one person at a time.

Look for specific software that solves an important problem for you. Whether it is maintenance, flight scheduling, inventory, or accounting, find an expert company that can help you tackle these problems in a purpose-driven way.

In most cases, I advise against buying on-premise software if possible. This is software that you have to install and maintain at your company, and it comes with all kinds of hidden costs and complexity. At Flightdocs, we are both customers of software as a service and, as our business, providers of it. This is software that is hosted by someone else, often in a cloud, and accessed over the internet for a monthly or yearly fee.

The cloud, in its simplest form, is a way of renting servers from another company, with quite a bit of magic thrown in to handle massive scaling. However, this oversimplification shouldn’t belittle how important this shift in technology is and all of the tremendous opportunities it now gives us.

In the past, it was incredibly difficult to scale quickly and across continents. It meant purchasing servers, setting up data centers, staffing the appropriate IT resources to manage the hardware and software, and keeping everything up to date and running smoothly.

The cloud allows all of that scale and complexity to disappear so you can simply use or develop applications, and it has led to more innovation in mobile device software and internet-enabled embedded devices.

Now that you’ve moved from paper to electronic, selected the right targeted software for your operations, and have access to that data anywhere through the internet, look to mobile for access away from your desktop.

Imagine each leg of your flight updating compliance metrics and aircraft times in real time to help keep your due list in check or notify home base of necessary maintenance or inventory orders.

You could even dispatch work to individuals, who can follow up on their mobile devices and keep getting the latest information throughout the day.

Now that we’ve built up this discussion with all of the good things you can do with technology, let’s share a couple of important drawbacks.

When you move to software as a service or to the cloud, you intentionally give up a lot of responsibility. This can also work against you, in that you may not have as much control if there is an issue. In computing, we all know that things aren’t perfect: outages happen, hardware fails, and mistakes are made. If you are already outfitted with the best experts in supporting a production-quality network and application, it may not make sense for you to give over this control.

Security is a double-edged topic. If the data you are storing is highly sensitive, such as weapons systems or medical patient information, then you may want to reconsider a cloud provider. This is not to say that a cloud provider is necessarily less secure, but you have less direct oversight and are therefore unable to answer some specific security requirements for certain certifications. Breaches happen even to the largest companies; consider a few high-profile examples:

  • Home Depot — 56m credit cards potentially stolen through installed malware on cash register machines.
  • JP Morgan — Month long attack stealing 76m names, email addresses, addresses, and phone numbers of account holders.
  • eBay — 145m user accounts potentially compromised by hackers stealing employee accounts.
  • Adobe — 152m credentials accessed and sensitive information erased.
  • Target — 70m records stolen from compromised magnetic strips on card readers.

When you opt to start moving more and more data to the cloud, you’re making your information more accessible. This is a good thing, but it needs to be controlled for the right people. There are several steps that you can take to further protect yourself and your data.

Here are a few helpful tips:

  • Always use a strong password. These are passwords that can be harder to remember but provide much better security.
  • Never use the same password on more than one system. By enforcing this, you limit your exposure if by some chance a password is compromised.
  • Always ensure you are connecting over secure traffic; look for sites that show a lock in the address bar.
  • Ask for and set up multi-factor authentication to help protect you even if your password is stolen. Multi-factor authentication (also called two-factor authentication) pairs your username and password with a second factor, such as your phone, to confirm login attempts.
  • When using a software as a service company, ensure your password is hashed when stored. Flightdocs uses one way hashing to prevent decryption attacks.
  • Ask about encrypted data practices when moving data to the cloud. Not all data needs to be encrypted, but data that you consider sensitive should be.
  • Keep computers and browsers up to date with the latest patches.
  • Install virus and malware scanning software on your computer to help prevent attacks.
  • Always set a PIN or login for your mobile phone. Phones are easy to steal and provide lots of information as we adopt mobile access strategies.
  • Be careful with roaming settings on your phone due to wireless hijacking.
  • Backup your data, but also be careful to encrypt and protect backups as they can become vulnerable sources of data.

Conway’s Game of Life in Angular.js

I know there are a lot of posts for Angular, so I will spare everyone a rehash of setup and Hello World. Instead, I thought it would be fun to show a simple example of Angular recreating Conway’s Game of Life.

This example will use the following technologies:

  • HTML
  • CSS
  • Angular.js
  • Bootstrap

If you would like to see a sample of the working application, click here:

What is Conway’s Game of Life?

Check out Wikipedia for a more in-depth definition and the origin of the game, but in short, it’s a simulation that lets you observe evolutions of an initial starting configuration while applying the following four basic rules at each round:

  1. Any live cell with fewer than two live neighbors dies, as if caused by under-population.
  2. Any live cell with two or three live neighbors lives on to the next generation.
  3. Any live cell with more than three live neighbors dies, as if by overcrowding.
  4. Any dead cell with exactly three live neighbors becomes a live cell, as if by reproduction.

The game can continue indefinitely, resulting in either a repeating pattern or a “still life” where no more moves can occur.
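
The four rules reduce to a tiny decision function. Here’s a sketch (not the repository’s actual code) of determining a single cell’s next state:

```javascript
// Returns whether a cell is alive in the next generation, given its
// current state and its count of live neighbors (rules 1-4 above).
function nextState(isAlive, liveNeighbors) {
  if (isAlive) {
    // Rules 1-3: a live cell survives only with two or three live neighbors.
    return liveNeighbors === 2 || liveNeighbors === 3;
  }
  // Rule 4: a dead cell with exactly three live neighbors becomes alive.
  return liveNeighbors === 3;
}
```

Everything else in the game is bookkeeping: counting each cell’s neighbors and applying this function to the whole board at once.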

Since this article is more about Angular, I’ve simplified the game a bit to randomly select the starting positions, and I’ve limited the board to 30 by 30 cells; feel free to improve the code to allow the player to specify starting positions or infinite space. All source code can be found here:

Let’s set up the UI

To start, let’s set up a form and a board to play on. The form is pretty straightforward: it allows you to specify the number of starting life forms and how many generations to simulate, and it has a button to start the game.

            Enter the number of spontaneous lifeforms:
            Enter the number of generations to simulate:

Notice that we’re using a few Angular attributes to collect the data and fire off the game.

First, the data to interact with is wrapped in a div that specifies an ng-controller attribute. This attribute specifies which controller will be used to execute logic against the HTML DOM elements. It is common to place this controller logic in a separate Javascript file.

Next, ng-submit is used to specify which function on the controller will be called when the form is submitted. When we wire up the controller, this is the method that will start iterating over generations in the game.

Finally, ng-model is used to bind data values from the input form fields to variables that can be accessed in the controller. When the values on the form are changed, the variables backing them are automatically updated.
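
Putting those three directives together, the form markup looks roughly like this (the controller, function, and model names here are assumptions for illustration):

```html
<div ng-controller="GameController">
    <form ng-submit="startGame()">
        Enter the number of spontaneous lifeforms:
        <input type="number" ng-model="lifeforms" />
        Enter the number of generations to simulate:
        <input type="number" ng-model="generations" />
        <button type="submit" class="btn btn-primary">Start</button>
    </form>
</div>
```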

Now that we have a form created to gather some basic information about starting the game, let’s create the board that the game will actually play on.

<strong ng-show="rows.length > 0">Generation {{generation}}</strong>
<table id="board" class="table table-bordered">
    <tr ng-repeat="row in rows">
        <td ng-repeat="cell in row">
            <i class="glyphicon glyphicon-fire" ng-show="cell.isAlive == true"></i>
        </td>
    </tr>
</table>

In this code snippet, we see a few new Angular components used for controlling presentation of data.

First, ng-show allows us to toggle the visibility of DOM elements by evaluating a true/false expression. Essentially, when the expression is true, we’re setting a CSS style “display: block”, and when false, setting “display: none”.

Next, we get our first look at the mustache-inspired template rendering used by Angular. Notice the double curly braces surrounding the variable name in {{generation}}. This renders the variable and automatically updates the display whenever the value of the variable changes.

The Angular directive we have not yet covered is ng-repeat, which is used when building out the table as we create rows and cells based on the items in the “rows” variable. It simply iterates over the collection, generating the element the attribute is specified on, along with everything that is a child within it, once per item.

Finally, we revisit the ng-show attribute to show a small icon in the cell based on whether it is alive or dead. The “== true” is a bit redundant (and admittedly should be “===” if used at all, to strictly check the value).
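
The “rows” variable that the template repeats over is just a nested array of cell objects. Seeding it might look like this sketch (again, not the repository’s actual code):

```javascript
// Build a size-by-size grid of dead cells, then randomly bring cells to
// life. Random picks can collide on the same cell, so the final live
// count may come out slightly below liveCount.
function buildBoard(size, liveCount) {
  var rows = [];
  for (var r = 0; r < size; r++) {
    var row = [];
    for (var c = 0; c < size; c++) {
      row.push({ isAlive: false });
    }
    rows.push(row);
  }
  for (var i = 0; i < liveCount; i++) {
    var rr = Math.floor(Math.random() * size);
    var cc = Math.floor(Math.random() * size);
    rows[rr][cc].isAlive = true;
  }
  return rows;
}
```

Assigning the result to a $scope variable is all ng-repeat needs to render the table.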

Wire up the Controller to Play

The controller is just a function that sets up all of the code to interact with the UI and exposes the necessary variables through a special parameter called $scope. You can read quite a bit more on $scope in the Angular documentation; for simplicity, think of it as a way to expose variables for binding in the UI.

If the UI is going to use a variable or call a function, it must be attached to $scope through the following syntax:

$scope.myVariable = 'Some value';
$scope.myFunction = function(param1, param2) { return param1 + param2; };

For brevity’s sake, I will just link to the file hosted on GitHub, since its code is not truly Angular-specific and mostly controls running the game. I’ve attempted to comment the rules fairly well, so it is evident what is happening in each “generation” of the game.

Takeaway

At my company, we’ve adopted Angular for everyday production development and haven’t looked back. The benefit of creating a single-page application (SPA), which limits full round trips to the server, has allowed us to provide a more native experience over the web while reducing our server load by pushing some of the processing onto the client.

The example shown in this article is by no means production code and is structured all in one file, which is usually not appropriate for production use. Enterprise-level applications need to fully utilize separation into various modules comprised of controllers, views, partials, directives, services, and so on.

I’ve learned to stop promising future blog posts since I tend to write in short waves and then neglect my blog for months at a time; however, I think it would be great to write several posts on architecting large Angular web applications and some of the challenges we have faced. Stay tuned (but don’t hold your breath)!

Resetting your defaults

My blog pretty much died. It had a decent run from 2008–2013, but then a sudden death: I stopped writing, forgot about controlling all of the spam comments, and even forgot to update my credit card, which subsequently caused my custom CSS theme to disappear.

I’m not quite ready for a blog eulogy, so it’s time for a reboot; let’s see if we can salvage the remains.

Looking back at why I started my blog, I remember how I wanted to share what knowledge I had and strengthen the topics I was learning. This was much easier to do when I was focused purely on technical topics. With programming languages, frameworks, libraries, and databases, it is so much easier to identify learning milestones and gain that feeling of accomplishment. They are black and white: either you know it and the application you’re writing works, or you don’t and you continue learning (and look it up on Stack Overflow).

In 2013, I spent the vast majority of my working time in meetings. Some of my time went to architectural design and creating technical solutions, but most of it went to project management, scheduling, explaining issues, and rehashing the same thing over and over. Though I complained at times, it wasn’t bad. In fact, I’m pretty sure I learned as much, if not more, during that year than in any before it in my career. However, the accomplishments of that kind of learning aren’t black and white, and they can be sneaky at teaching you more abstract lessons.

One of those lessons was about how much impact you can have on people in ways you usually don’t even know. There have been many people I worked with directly whom I really focused on trying to help, and others to whom I feel I was nothing more than a casual acquaintance. To my surprise, months or years later, it has been the casual acquaintances I hear from out of the blue, telling me that I made some difference, small (or on rare occasion, big), in their life. It doesn’t happen often, but when it does, it’s quite an experience. First I feel flattered, then a bit confused, because what may have been an important conversation or action at the right time for them might have been casual and fleeting for me; in some cases, I may not even remember it. Sometimes that leads to guilt over not intentionally building a relationship with them as I may have with others. However, I realize that is how things work, and the impact others have had on me happens in much the same way. Some of it is direct and built over hundreds or thousands of interactions; some just happens to strike the right chord at the right time.

These interactions are an occasional reminder of how important it is to set your default to being the kind of person you want to be remembered as, because the time your guard is down may be the time you’re making an important impression.

That’s one MEAN stack

On Thursday (6/26/2014), we had a nice meetup of the Southwest Florida [.net] Developers Group, where I was happy to see some old friends and get the opportunity to present on the MEAN stack. This is a little out of my comfort zone, since I am just learning this stack and am by no means an expert on it, but it was fun nonetheless.

This blog post is a bit of a recap on what we covered with some follow up links for more information.

What is the MEAN stack?

The MEAN stack is Mongo as the database, Express as a web server framework, Node as the underlying server, and Angular as the client-side framework. Let’s take a minute and briefly discuss each of these technologies.


Mongo DB is a NoSQL document database that uses Javascript syntax and stores data as BSON (binary JSON). It’s not a Mickey Mouse database; it’s actually quite powerful, and it’s free.

Some of the highlights of Mongo are:

  • Document database (NoSQL)
  • Javascript syntax
  • Stored as BSON (binary JSON)
  • Collections instead of tables
  • Single instance or sharded cluster
  • Replicated servers with automatic master failover
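
To make the Javascript syntax concrete, here’s what a quick mongo shell session might look like (the collection and field names are made up for illustration):

```javascript
// Documents go straight into collections; no schema or CREATE TABLE first.
db.aircraft.insert({ tail: "N123AB", model: "Citation X", hours: 1450 });
db.aircraft.insert({ tail: "N987CD", model: "King Air 350", hours: 820 });

// Queries use the same Javascript-flavored document syntax.
db.aircraft.find({ hours: { $gt: 1000 } });   // aircraft with over 1,000 hours
```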

You can learn more about Mongo through 10gen’s introduction.

Also, take a look at comparing SQL to Mongo, a great article if you’re already experienced in relational databases.


Express is a web server framework that sits on top of Node. It’s very lightweight and just makes Node a little easier to use for web-based activities.

It’s not the only web framework for node, but it certainly is the most popular. Learn more about express.
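
For a feel of how little code it takes, the canonical express “hello world” looks something like this (the port choice is arbitrary):

```javascript
var express = require('express');
var app = express();

// Respond to GET / with a plain text body.
app.get('/', function (req, res) {
  res.send('Hello World!');
});

// Start listening; visit http://localhost:3000/ to see the response.
app.listen(3000);
```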


Node is server-side Javascript focused on non-blocking IO using an event-driven model. At first, the notion of writing Javascript to run server-side code seemed a bit odd to me, but once I got over my old preconceptions of its limitations, I really embraced it.

The “hello world” of node looks a bit like this:

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');

You can learn more about node by visiting the official website.


Angular is a front-end framework for JavaScript web applications which is supported by Google. Angular has the following benefits (among others):

  • Creation of new directives which allow you to augment HTML controls.
  • Clean separation of view, controllers, and services.
  • A simple to use binding mechanism for updating the view based on changes in the controller.
  • Testable using the IoC pattern.

More information can be found on the official website.
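
The binding mechanism is easiest to appreciate in Angular’s own introductory example: typing in the input updates the heading instantly, with no event-handling code on your part. This is a minimal sketch; the CDN path and version number are assumptions.

```html
<!doctype html>
<html ng-app>
<head>
  <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.16/angular.min.js"></script>
</head>
<body>
  <!-- ng-model binds the input's value to a scope property;
       the {{name}} expression re-renders whenever it changes. -->
  <input type="text" ng-model="name" placeholder="Your name">
  <h1>Hello {{name}}</h1>
</body>
</html>
```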

Try it out

You can try out the MEAN stack in several ways. First, as we did during the meeting, install each component manually by first installing Node, and then using npm and bower to install the other packages. You can follow the public Trello board for the simple steps we followed during the presentation.

Two additional ways, which are quite a bit quicker and include additional libraries not covered during the presentation, are to use mean.io and MEAN.JS. Both of these are scaffolding tools that get you up and running quickly and provide a solid foundation to work from.


Paralysis Analysis and the Paradox of Choice

I am at the point where I visibly cringe at the mere mention of corporate buzzwords such as “paralysis analysis”, “single source of truth”, or “low hanging fruit”. Still, all of these phrases are rooted in tried-and-true explanations of important, complex situations. It may be simpler for communication to summarize such a situation with a succinct two- or three-word phrase, but it is important to first understand what you are shorthanding when you employ a phrase like this.

I can think of no better example than “paralysis analysis” (or “paralysis by analysis” in its longer form). Even now, I struggled to pick up my laptop and write a draft of this blog post because I was weighing the following choices:

• It’s 3:30AM and I need to get rest to watch the kids tomorrow morning (Saturday).
• I really should be working on my project at work to meet this crazy deadline and the project accounts for significant impact to the company and potentially my career there.
• I could be working on my side projects to start a company.
• I could be attempting to drum up side work or consulting time.
• I really should watch Pluralsight or some other educational material to continue learning and stay sharp.
• I should put together a presentation for work.
• I should put together a presentation for the developer group.
• Etc.

You may look at this and say it’s just a task list, but each item represents a conscious decision I have to make before I start writing this post, and keep making for the entirety of writing it, about whether this is the best use of my time right now (or whether it’s what I most want to do). I find this harder the longer the task is. Reading a book, for example, has become excruciating lately: the pace is so much slower that I wonder if I should be doing something more “productive”, which in many cases leaves me not finishing the book and then doing a poor job at the next task because I’m thinking about the unfinished book.

A really strong light was recently shone on this for me when I stumbled across a TED talk by Barry Schwartz titled The Paradox of Choice. It’s fairly short at only 19 minutes, and it’s been around for a couple of years now, but I really applaud the content. In the talk, Mr. Schwartz argues that having so many choices yields less pleasure and focus from any one decision and produces a sense of buyer’s remorse over how we allocate our time. Wow, talk about your first-world problems, but this hits home for me like few other things I’ve read or seen in the past year!

When I think back to the times when I generally write (decent) blog posts, learn new topics, write my best code, or actually solve difficult problems, it’s often at times like this: late at night, everyone else asleep, sitting in the dark, relatively single-focused. It’s not that I am anti-social or a recluse, or that I can’t prioritize and push through distractions. It’s that my options are limited at 3:30AM, the pressure of so many activities and demands is slightly farther away, and a higher percentage of my consciousness is focused on fewer problems.

It also seems obvious that a lack of focus corresponds directly to a lack of quality in executing any one task. As my responsibilities in life and work become more diverse, and I can only allocate small tidbits of time across a great many activities, I sometimes look back and feel unsatisfied with the job I’ve done. What’s interesting to me is that usually I have done a good job: I met the goal, the customer is happy, there was some positive outcome. But I know I could have done better.

Now, everything has tradeoffs, and my focus on individual tasks is currently being supplanted by the opportunity to discover a much vaster array of different experiences, which one would hope has a synergy of its own. Perhaps in a couple of years I will be able to write a similar post on whether, at the time of this writing, I actually understood “synergy” or was merely using another buzzword.