You have all these engineers, now what?

It’s widely said that good people are hard to find, and in the tech industry, where a gap still exists between the skills needed and qualified candidates, this is even more pronounced. Companies compete with each other by offering higher salaries, more interesting work, upward mobility, and benefits that range from standard health insurance and a 401(k) to in-office beer and foosball (apparently all software engineers decided college should extend indefinitely). In such a demand-heavy market, it can be challenging to find and successfully recruit the right talent to your company. But what happens when you finally do?

Managing 20, 30, 50+ engineers

At a certain point, the number of engineers you have can’t fit onto one team. Administrative burden, communication overhead, and collisions between people’s work become unmanageable and lead to frustration and inefficient development. This is easy to spot: as you add more people, even after the ramp-up period ends, you’re slower than you were before. That feeling of moving fast with a small team is replaced by the heavy burden of making sure everyone is moving in the same direction, and it simply feels like you should be getting more done than you are.

It’s at this point that you begin breaking the group into teams. Some companies choose to split by role (putting all of the web developers on one team), others by business domain (for example, an Order-To-Cash business process), and still others choose a product or module to focus on. I’ve had a lot of conversations around which is best and I have my opinions, but one thing is for certain: you must avoid isolationism.

Getting it wrong

Let’s talk about some of the problems that can happen when you’ve constructed these teams:

  • Fiefdoms: The team becomes the sole owner of whatever it is in charge of and grows territorial.
  • Tribal Knowledge: Because the team works so closely with this one focus, they stop documenting and knowledge simply exists within the group.
  • Echo Chamber: Decisions and approaches to tackling problems don’t expand outside of the group, and yet the group continues to believe they are always coming up with the best way to do it.
  • Slowed Growth: A lack of new ideas slows growth of the product as well as the career development of those working in it.
  • Terrorists: This is dramatic wording, but one strong individual on the team may hijack the direction of the entire team, and therefore the direction of the product or business. Worse still, the team and the business believe losing this individual would be so detrimental and costly that there is nothing they can do (a fear cycle).
  • Production Support: The team can’t move quickly on project work and misses dates because it’s being pulled in too many directions, specifically production support and hotfixes.

Getting it right

In my opinion, there are a few simple fixes (at least conceptually) which can resolve most, if not all, of these problems.

Direction: All of us want to be successful, and most engineers I know want to know what the end goal is and then be allowed to determine the best way to get there. Keeping a pulse on how the teams are doing, as well as checking in with individuals about what they are actually working on (even reviewing code or discussing architecture), helps to guide or refine direction as the team learns new information.

Empowerment: Software development has become more complex than it has ever been. In most cases it takes teams to build great software, and for those teams to be successful, they need to be empowered to make decisions. Engineering teams should not only make technical decisions, they should also help shape the product or business. Don’t let them be mere order takers; treat them as contributors of great ideas and give your engineers that say.

Silver Bullet: Rotate. Yes, if there is one silver bullet to building highly functional teams, it’s to introduce a methodical rotation of fresh ideas, talent, skills, and personalities.

Team Assembly and Rotation

Here’s my go-to recipe for assembling teams, which requires a minimum of six engineers (though I usually prefer eight) and can scale as large as you like. For this example, I’ll assume the following: we’re a company that offers products (either internally or externally) that are consumed via web and mobile, and we prefer to build centralized APIs for our own applications as well as third-party integrations.

My technical role requirements are a frontend engineer (web), a mobile engineer, and a backend engineer, and I believe strongly in having at least one quality engineer per team.

Feature Team 1: 1 web, 1 mobile, 1 API, 1 QA.

However, I do not want this feature team to be distracted by production support or ad-hoc small feature requests while they are trying to get the big project work done. So I immediately add another team to focus specifically on support (questions, hotfixes, small features), and I model it with the exact same roles as the Feature Team.

Support: 1 web, 1 mobile, 1 API, 1 QA.

We’re now at eight individuals, all of whom perform a specific role on their team, which keeps them specialized and fast. However, if they run out of work to do (which almost never happens), they can always jump over and help in an area they are not expert in (giving them a chance to learn new technologies).

Imagine that you have a well-defined roadmap of all the projects you plan on doing. Within each of these projects, you create epics (small mini-projects with defined conclusions), and you assign the Feature Team to work on an epic. At the completion of each epic, you simply swap a member with your Support Team who has a matching role (the frontend engineer on the Feature Team swaps with the frontend engineer on the Support Team). After the completion of the next epic, a different role swaps (the backend engineer, for example).

As a business, let’s decide that we want to capitalize on more of the opportunities in our roadmap. We’d like to be able to do more than one epic at a time. This becomes simple, and with a known cost. We recruit a new Feature Team which will later become indoctrinated into the rest of the product offerings through their rotation in and out of support and feature teams. Individuals may start in Feature Team 1, move to Support, and then rotate into Feature Team 2.

The following diagram helps to illustrate the assembly of teams I’ve used successfully in the past.

What does this do?

By rotating a member who actively built the new epic into support, you put someone who truly understands the newly built project into supporting it. Even if bugs pop up in areas outside of this individual’s direct technical skillset, they will have enough knowledge of how it’s supposed to work to help guide the right technical skillset on the support team to resolve it.

At the same time, you’ve rotated someone off of support and allowed them to focus on creating something new, which helps reduce burnout for your support staff. This new member can bring a fresh perspective into the next epic.

The team is working closely with a small number of people at a time (maintaining focus and reducing administrative and communication burden), while also breaking down walls and establishing one large community as everyone continues to work with each other.

Let’s revisit our list of problems and see if we’ve solved them:

  • Fiefdoms: Teams get continuity because the majority stay together after each epic, yet rotation causes ownership to be democratized across the entire organization.
  • Tribal Knowledge: Since new members will need to be introduced to the project, documentation and consistency of approach becomes more important.
  • Echo Chamber: Fresh ideas are constantly being introduced into the teams and approaches are constructively challenged.
  • Slowed Growth: Both the product and individual team members are introduced to new ideas and varying levels of expertise, allowing for a sharing of knowledge on each project.
  • Terrorists: No one person has the ability to remain on a project indefinitely, which limits a multitude of risks for products and business processes.
  • Production Support: Support is more skilled and involved with the building of each project, while the feature teams are no longer distracted. Estimates become closer to reality.

Running the Microsoft .NET Stack on a Fresh MacBook Pro

Well, after parting ways with Flightdocs, one of the first things I needed to do was get my own laptop and get back to work. The days of spending $2,500–3,000 on a nice machine under the generosity of the business were over, and I settled for a slightly smaller, less powerful, but still pricey little MacBook Pro for around $1,800. I suppose there are people in this world who dread setting up a new computer, but for me, it’s one of the best feelings. A brief pause of tranquility and then the rush of excitement that comes with a fresh start and new possibilities.

Since 2005, I’ve worked with .NET, and though at times I’ve cursed Microsoft for so many things, I always come back to C#. Yes, I proudly proclaim myself a polyglot developer who loves new languages, but there is comfort and confidence in the familiar. So, with that in mind, let’s get set up to build enterprise-grade .NET software on our sleek little budget-friendly-not-so-friendly MacBook Pro!

Installation Disclaimer

Not all of the following installations are required, but these are my recommendations for getting set up to cover a variety of common development tasks.

Node.js and NPM

Package managers are fantastic; it’s weird to think that many of us hardly used them at all even a couple of years ago. I tend to use Node.js for a variety of web development tasks, but even if you’re not going down that route, having NPM is a great way to pull down web-frontend libraries now that Bower is deprecated. Head to https://nodejs.org to download a fresh copy.

XCode

Why XCode when installing Visual Studio? First, Visual Studio for Mac uses many of the Xamarin components, which tie into the development tools for XCode, creating a dependency. Second, you might as well sharpen those mobile skills, or at least have a sandbox knowledge of Swift/Objective-C, if you are a developer and own a Mac.

Microsoft Visual Studio for Mac

Look at that, I finally got to the part where we install Visual Studio. Head over to https://www.visualstudio.com/vs/visual-studio-mac/ and get the bits. There is a Community Edition that you can start with before shelling out any money. Also, many developers may not realize it, but Microsoft now offers monthly MSDN subscriptions which include licensing for the Professional/Enterprise versions of Visual Studio. Of course, I’m in favor of not spending any money at all if possible.

If you’re not familiar with Visual Studio, it’s a fairly rich (bulky) IDE for development, but the Mac version is quite a bit lighter than its Windows counterpart. In my opinion, it’s not as powerful or stable on Mac, but it has quite a bit of charm. Since we’re in the Visual Studio for Mac section of this post, let’s briefly talk about Visual Studio Code as well. I recommend also installing Visual Studio Code from https://code.visualstudio.com even if you don’t write in C# or another Microsoft-dominated language. It’s a solid code editor that rivals Sublime, Atom, or Brackets. Visual Studio Code is lightweight, very stable, and extremely extensible. I frequently write my complex API code in the full-blown Visual Studio IDE while writing my web application code in Visual Studio Code. Personal preference, I’m sure, but I don’t think I’m alone in this combination.

At this point, you’ve got most of what you need to develop .NET applications on a Mac. However, if you actually want to persist any data in your application, you’ll likely need to set up a database.

It is pretty amazing how database offerings have changed in the past five or so years. You’ve got so many options on Linux/Unix-based operating systems, such as Postgres, MySQL, Mongo, and tons more. But if you’ve been working in MSSQL for the past decade-plus, you may be blown away to learn you can run MSSQL on your Mac as well! Now would be a good time to check outside your window and see if you actually spot pigs flying.

SQLite and MS SQL

SQLite is available to run locally on your machine and it’s straightforward. If you’re using Entity Framework, simply point your connection string to a local file and initialize the database. If you want an IDE for accessing the data, I tried DB Browser for SQLite and it worked well for me.

Now, onto the really fun stuff: MSSQL on your Mac. You’ll need to install Docker and a specific image for SQL Server, which you can pull using the following command:

docker pull microsoft/mssql-server-linux

Once you’ve got Docker up and running and have pulled down this image, you will need to step through a bit of configuration. Microsoft did a nice job of documenting this at the following link: https://docs.microsoft.com/en-us/sql/linux/quickstart-install-connect-docker. I’ll wait while you spend the next 15 minutes working through it.

… Intermission …

Done? Fantastic, at this point you should be able to debug a nice WebAPI through Visual Studio for Mac pointing to a full MSSQL database running on Docker, and call everything from your web application that you’re editing through the lightweight Visual Studio Code. Did a few of you cringe at how many Microsoft products you used? Don’t stress! In a few years, the young new developers will start telling all of their Ruby and Python friends about this hot new open source language called C#.

Happy coding, my fellow evil-empire-turned-friendly-open-source-contributor friends.

Tech Talk: NBAA

Recently, I began preparing a session for the NBAA conference in Orlando targeted at the aviation industry. I struggled to identify the needs of the audience since they were a bit different from the audiences I usually have the opportunity to speak to. The following article was not the presentation I gave, but an early direction meant to introduce several technology concepts to the group and help them understand how these topics could improve their business processes and general operations. The primary topics included:

  • Electronic Data
  • Specialized Software (and SaaS)
  • The Cloud
  • Mobile
  • Security Tips

Though I ultimately presented a slightly different direction for this material, I think it still has some value to be presented here.

Migrating from paper to electronic systems is a challenge. Technically, the data needs to be structured in a way that computer systems understand, but the real challenge comes from user adoption. Paper lets you write anything you want, lets you change the workflow however you need at that moment, and is comfortable for workforces that are still not yet at ease with computers.

However, the move to electronic data allows for real-time validation that significantly reduces mistakes, gives us visibility into trends that may be occurring, and allows several people to access the information at the same time.

Take reporting aviation discrepancies, for example: at Flightdocs, we have seen a number of operators switch from a paper-based write-up to electronic. Images and video can be captured and attached to the discrepancy for later evaluation, and over time we can begin to track trends in part failure or unexpected use.

As you begin to move your data to electronic systems, you may be tempted to move to Excel or other similar general-use software. This is a great start, but it has many drawbacks. Data is typically not validated and doesn’t reflect how the data is really used, such as enforcing proper tolerances or required fields.

Collaboration becomes a real issue as emailing files back and forth is fraught with errors and typically if the file exists on a network it can only be in use by one person at a time.

Look for specific software that solves an important problem for you. Whether it is maintenance, flight scheduling, inventory, or accounting — find an expert company that can help you tackle these problems in a purpose-driven way.

In most cases, I advise against buying on-premise software if possible. This is software that you have to install and maintain at your company, and it comes with all kinds of hidden costs and complexity. At Flightdocs, we both use software as a service as customers and provide it as our business. This is software that is hosted by someone else, often in a cloud, and is accessed over the internet for a monthly or yearly fee.

The cloud, in its simplest form, is a way of renting servers from another company, with quite a bit of magic thrown in to handle massive scaling. However, this oversimplification shouldn’t belittle how important this shift in technology is and all of the tremendous opportunities it now gives us.

In the past, it was incredibly difficult to scale quickly and across continents. It meant purchasing servers, setting up data centers, staffing the appropriate IT resources to manage the hardware and software, and keeping everything up to date and running smoothly.

The cloud allows all of that scale and complexity to disappear so you can simply use or develop applications, and it has led to more innovation in mobile device software and internet-enabled embedded devices.

Now that you’ve moved from paper to electronic, selected the right targeted software for your operations, and have access to that data anywhere through the internet, look to mobile for access away from your desktop.

Imagine each leg of your flight updating compliance metrics and aircraft times in real time to help keep your due list in check or notify home base of necessary maintenance or inventory orders.

You could even dispatch work to individuals who can follow up with their mobile devices and keep getting the latest information throughout the day.

Now that we’ve built up this discussion with all of the good things you can do with technology, let’s share a couple of important drawbacks.

When you go to software as a service or to the cloud, you intentionally give up a lot of responsibility. This can also work against you in that you may not have as much control if there is an issue. In computing, we all know that things aren’t perfect. Outages do happen, hardware does fail, and mistakes are made. If you are already outfitted with the best experts in supporting a production-quality network and application, then it may not make sense for you to give over this control.

Security is a double-edged topic. If the data that you are storing is highly sensitive, such as weapons systems or medical patient information, then you may want to reconsider a cloud provider. This is not to say that a cloud provider is necessarily less secure, but you have less direct oversight and are therefore unable to answer some specific security requirements for certain certifications. Consider a few well-known breaches:

  • Home Depot — 56m credit cards potentially stolen through installed malware on cash register machines.
  • JP Morgan — Month long attack stealing 76m names, email addresses, addresses, and phone numbers of account holders.
  • eBay — 145m user accounts potentially compromised by hackers stealing employee accounts.
  • Adobe — 152m credentials accessed and sensitive information erased.
  • Target — 70m records stolen from compromised magnetic strips on card readers.

When you opt to start moving more and more data to the cloud, you’re making your information more accessible. This is a good thing, but it needs to be controlled for the right people. There are several steps that you can take to further protect yourself and your data.

Here are a few helpful tips:

  • Always use a strong password. These are passwords that can be harder to remember but provide much better security.
  • Never use the same password on more than one system. By enforcing this, you limit your exposure if by some chance a password is compromised.
  • Always ensure you are connecting over secure traffic; look for sites that show a lock in the address bar.
  • Ask for and set up multi-factor authentication to help protect you, even if your password is stolen. Multi-factor authentication, also called two-factor authentication, pairs your username and password with a second factor, such as your phone, to confirm login attempts.
  • When using a software-as-a-service company, ensure your password is hashed when stored. Flightdocs uses one-way hashing to prevent decryption attacks.
  • Ask about encrypted data practices when moving data to the cloud. Not all data needs to be encrypted, but data that you consider sensitive should be.
  • Keep computers and browsers up to date with the latest patches.
  • Install virus and malware scanning software on your computer to help prevent attacks.
  • Always set a pin or login for your mobile phone. Phones are easy to steal and provide lots of information as we adopt mobile access strategies.
  • Be careful with roaming settings on your phone due to wireless hijacking.
  • Backup your data, but also be careful to encrypt and protect backups as they can become vulnerable sources of data.

Conway’s Game of Life in Angular.js

I know there are a lot of posts for Angular, so I will spare everyone a rehash of setup and Hello World. Instead, I thought it would be fun to show a simple example of Angular recreating Conway’s Game of Life.

This example will use the following technologies:

  • HTML
  • CSS
  • Angular.js
  • Bootstrap

If you would like to see a sample of the working application, click here: http://www.nicholasbarger.com/demos/conways-game-of-life.

What is Conway’s Game of Life

Check out Wikipedia (http://en.wikipedia.org/wiki/Conway%27s_Game_of_Life) for a more in-depth definition and origin of the game, but in short, it’s a simulation that allows you to observe evolutions from an initial starting configuration while applying the following four basic rules at each round:

  1. Any live cell with fewer than two live neighbors dies, as if caused by under-population.
  2. Any live cell with two or three live neighbors lives on to the next generation.
  3. Any live cell with more than three live neighbors dies, as if by overcrowding.
  4. Any dead cell with exactly three live neighbors becomes a live cell, as if by reproduction.

The game can continue indefinitely resulting in either a repeating pattern or a “still life” where no more moves can occur.

Since this article is more about Angular, I’ve simplified the game a bit to randomly select the starting positions and limited the board to 30 by 30 cells, but feel free to improve the code to allow the player to specify starting positions or infinite space. All source code can be found here: https://github.com/nicholasbarger/conways-game-of-life.
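To make the rules concrete before we dive into the markup, here is a minimal, framework-free sketch of how one generation could be computed. It’s illustrative only (the function name and board representation are my own); the real implementation lives in the repository linked above.

// board is a square array of arrays of booleans (true = alive)
function nextGeneration(board) {
    var size = board.length;
    var next = [];
    for (var row = 0; row < size; row++) {
        next.push([]);
        for (var col = 0; col < size; col++) {
            // count the live neighbors surrounding this cell
            var neighbors = 0;
            for (var dr = -1; dr <= 1; dr++) {
                for (var dc = -1; dc <= 1; dc++) {
                    if (dr === 0 && dc === 0) continue;
                    var r = row + dr;
                    var c = col + dc;
                    if (r >= 0 && r < size && c >= 0 && c < size && board[r][c]) {
                        neighbors++;
                    }
                }
            }
            // rules 1-3: a live cell survives only with two or three neighbors
            // rule 4: a dead cell with exactly three neighbors becomes alive
            next[row].push(board[row][col]
                ? (neighbors === 2 || neighbors === 3)
                : neighbors === 3);
        }
    }
    return next;
}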

Let’s set up the UI

To start, let’s set up a form and a board to play on. The form is pretty straightforward: it allows you to specify the number of starting life forms and how many generations to simulate, and it includes a button to start the game.


<div ng-controller="GameController">
    <form ng-submit="start()">
        <label>Enter the number of spontaneous lifeforms:</label>
        <input type="text" ng-model="lifeforms" />
        <label>Enter the number of generations to simulate:</label>
        <input type="text" ng-model="generations" />
        <button type="submit">Start</button>
    </form>
</div>
<!-- note: the controller, function, and model names above are illustrative; see the linked repository for the exact markup -->

Notice that we’re using a few Angular tags to collect the data and fire off the game.

First, the data to interact with is wrapped in a div that specifies an ng-controller attribute. This attribute specifies which controller will be used to execute logic against the HTML DOM elements. It is common to place this controller logic in another Javascript file.

Next, ng-submit is used to specify which function will be called on the controller when the form is submitted. When we wire up the controller, this is the method that will start iterating over generations in the game.

Finally, ng-model is used to bind the values from the input form fields to variables that can be accessed in the controller. When the values on the form are changed, the variables backing them are automatically notified of the change and updated.

Now that we have a form created to gather some basic information about starting the game, let’s create the board that the game will actually play on.

<strong ng-show="rows.length > 0">Generation </strong>
        <table id="board" class="table table-bordered">
            <tr ng-repeat="row in rows">
                <td ng-repeat="cell in row">
                    <i class="glyphicon glyphicon-fire" ng-show="cell.isAlive == true"></i>
                    
                </td>
            </tr>
        </table>

In this code snippet, we see a few new Angular components used for controlling presentation of data.

First, ng-show allows us to toggle the visibility of DOM elements by evaluating a true/false expression. Essentially, when the expression is true, we’re setting the CSS style “display: block”, and when false, setting “display: none”.

Next, we get our first look at the mustache-inspired template rendering (http://mustache.github.io) used by Angular. Wrapping a variable name in double curly braces (the generation counter next to “Generation” in the snippet above) renders that variable and automatically updates the output whenever its value changes.

The Angular directive we have not yet covered is ng-repeat, which is used when building out the table as we create rows and cells based on the number of items in the “rows” variable. It simply iterates over the collection and generates the content of the element where the attribute is specified, along with everything nested within it, once per item.

Finally, we revisit the ng-show attribute to show a small icon in the cell based on whether it is alive or dead. The “== true” is a bit redundant (and admittedly, should be “===” if used anyway to strictly check the value).

Wire up the Controller to Play

The controller is just a function that sets up all of the code to interact with the UI and exposes the necessary variables through a special parameter called $scope. You can read quite a bit more on $scope in the Angular documentation (https://docs.angularjs.org/guide/scope), but for simplicity, it’s a way to expose variables for binding in the UI.

If the UI is going to use a variable or call a function, it must be attached to $scope through the following syntax:

$scope.myVariable = 'Some value';
$scope.myFunction = function(param1, param2) { return param1 + param2; };

For brevity’s sake, I will just link to the file hosted on GitHub, since its code is not truly Angular-specific and mostly controls running the game. I’ve attempted to comment the rules fairly well so it is evident what is happening in each “generation” of the game: https://github.com/nicholasbarger/conways-game-of-life/blob/master/game.html

Take Away

At my company, we’ve adopted Angular for everyday use in production development and haven’t looked back. The benefit of creating a single-page application (SPA), which limits full round trips to the server, has allowed us to provide a more native experience over the web while reducing our server load by pushing some of the processing onto the client.

The example shown in this article is by no means production code and is structured in a single file, which is usually not appropriate for production use. Enterprise-level applications need to fully utilize separation into various modules comprising controllers, views, partials, directives, services, and so on.

I’ve learned to stop promising future blog posts since I tend to write in short waves and then neglect my blog for months at a time; however, I think it would be great to write several posts on architecting large Angular web applications and some of the challenges we have faced. Stay tuned (but don’t hold your breath)!

Resetting your defaults

My blog at nicholasbarger.com pretty much died. It had a decent run from 2008–2013, but then a sudden death. I stopped writing, forgot about controlling all of the spam comments, and even forgot to update my credit card, which subsequently caused my custom CSS theme to disappear.

I’m not quite ready for a blog eulogy, so it’s time for a reboot; let’s see if we can salvage the remains.

Looking back at why I started my blog, I remember how I wanted to share what knowledge I had and strengthen the topics I was learning. This was much easier to do when I was focused purely on technical topics. With programming languages, frameworks, libraries, and databases, it’s so much easier to identify learning milestones and gain that feeling of accomplishment. They are black and white: either you know it and the application you’re writing works, or you don’t and you continue learning (and look it up on Stack Overflow).

In 2013, I spent the vast majority of my occupation in meetings. Some of my time went to architectural design and creating technical solutions, but most of my time went to project management, scheduling, explaining issues, and rehashing the same thing over and over. Though I complained at times, it wasn’t bad. In fact, I’m pretty sure I learned just as much, if not more during that year than ever before in my career. However, the accomplishments of that kind of learning aren’t black and white, and they can be sneaky at teaching you more abstract lessons.

One of those lessons was about how much impact you can have on people in ways you usually don’t even know. There have been many people I worked with directly whom I really focused on trying to help, and others to whom I was nothing more than a casual acquaintance. To my surprise, months or years later, it has been the casual acquaintances who reach out out of the blue to tell me that I made some small (and on rare occasion, big) difference in their life. It doesn’t happen often, but when it does, it’s quite an experience. First I feel flattered, then a bit confused, because what may have been an important conversation or action at the right time for them might have been casual and fleeting for me, and in some cases I may not even remember it. Sometimes that leads to guilt over not intentionally building a relationship with them as I had with others. However, I realize that is how things work, and the impact others have had on me happens in much the same way. Some impressions are direct and built over hundreds or thousands of interactions; others just happen to strike the right chord at the right time.

These interactions are an occasional reminder of how important it is to set your default to being the kind of person you want to be remembered as, because the moment your guard is down may be exactly when you’re making an important impression.

That’s one MEAN stack

On Thursday (6/26/2014), we had a nice meetup for the Southwest Florida [.net] Developers Group, where I was happy to see some old friends and get the opportunity to present on the MEAN stack. This is a little out of my comfort zone since I am just learning this stack and am by no means an expert on it, but it was fun nonetheless.

This blog post is a bit of a recap on what we covered with some follow up links for more information.

What is the MEAN stack?

The MEAN stack is Mongo as the database, Express as a web server framework, Node as the underlying server, and Angular as the client-side framework. Let’s take a minute and briefly discuss each of these technologies.

Mongo

Mongo DB is a NoSQL document database that uses Javascript syntax and stores data as BSON (binary JSON). It’s not a Mickey Mouse database; it’s actually quite powerful, and it’s free.

Some of the highlights of Mongo are:

  • Document database (NoSQL)
  • Javascript syntax
  • Stored as BSON (binary JSON)
  • Collections instead of tables
  • Single instance or sharded cluster
  • Replicated servers with automatic master failover

You can learn more about Mongo through 10gen’s introduction.

Also, take a look at comparing SQL to Mongo, which is a great article if you’re already experienced in relational databases.
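As a small taste of the syntax, these commands can be run directly in the mongo shell, which speaks JavaScript (the collection and field names here are made up for illustration):

// insert a document; no table or schema definition is required up front
db.people.insert({ name: "Nick", role: "developer", tags: ["mongo", "node"] });

// roughly equivalent to: SELECT * FROM people WHERE role = 'developer'
db.people.find({ role: "developer" });

// roughly equivalent to: UPDATE people SET role = 'architect' WHERE name = 'Nick'
db.people.update({ name: "Nick" }, { $set: { role: "architect" } });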

Express

Express is a web-server framework that sits on top of node. It’s very lightweight and just makes node a little easier to use for web-based activities.

It’s not the only web framework for node, but it certainly is the most popular. Learn more about express.
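For a sense of how little code it takes, here’s a minimal sketch of an Express “hello world” (assuming express has already been installed with npm):

var express = require('express');
var app = express();

// respond to GET / with a plain-text greeting
app.get('/', function (req, res) {
  res.send('Hello World');
});

app.listen(1337, function () {
  console.log('Express server running at http://127.0.0.1:1337/');
});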

Node

Node is server-side Javascript which focuses on non-blocking IO and uses an event-driven model. At first, the notion of writing Javascript to run server-side code seemed a bit odd to me, but once I got over my old preconceptions about its limitations, I really embraced it.

The “hello world” of node looks a bit like this:

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');

You can learn more about node by visiting the official website.

Angular

Angular is a front-end framework for Javascript web applications which is supported by Google. Angular has the following benefits (among others):

  • Creation of new directives which allow you to augment HTML controls.
  • Clean separation of view, controllers, and services.
  • A simple to use binding mechanism for updating the view based on changes in the controller.
  • Testable using the IoC pattern.

More information can be found on the official website.
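As a minimal sketch of the module/controller separation and two-way binding (the module and controller names below are just placeholders, not from the presentation):

// markup, for reference:
//   <div ng-app="demoApp" ng-controller="GreetingController">
//     <input type="text" ng-model="name" />
//     <p>Hello {{name}}!</p>
//   </div>

// define a module and register a controller; $scope exposes data to the view
var app = angular.module('demoApp', []);

app.controller('GreetingController', function ($scope) {
    // typing in the input updates $scope.name, and the greeting re-renders automatically
    $scope.name = 'World';
});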

Try it out

You can try out the MEAN stack in several ways. First, as we did during the meeting, install each component manually by first installing node, and then using npm and bower to install the other packages. You can follow the public Trello board for the simple steps we followed during the presentation.

Two additional ways, which are quite a bit quicker and include additional libraries not covered during the presentation, are to use mean.io and yeoman.io. Both of these are scaffolding tools that get you up and running quickly and provide a solid foundation to work from.


Paralysis Analysis and the Paradox of Choice

I am at the point where I visibly cringe at the mere mention of corporate buzzwords such as “paralysis analysis”, “single source of truth”, or “low hanging fruit”. However, all of these phrases are rooted in tried and true explanations of important and complex situations. It may be simpler for communication to summarize these situations with a succinct two- or three-word phrase, but I also believe it is important to first understand what you may be shorthanding when you employ a phrase like this.

I can think of no better example than “paralysis analysis” (or “paralysis by analysis” in its longer form). Even now, I struggled with picking up my laptop and writing a draft of this blog post because I was weighing the following choices:

• It’s 3:30AM and I need to get rest to watch the kids tomorrow morning (Saturday).
• I really should be working on my project at work to meet this crazy deadline and the project accounts for significant impact to the company and potentially my career there.
• I could be working on my side projects to start a company.
• I could be attempting to drum up side work or consulting time.
• I really should watch Pluralsight or some other educational material to continue learning and stay sharp.
• I should put together a presentation for work.
• I should put together a presentation for the developer group.
• Etc.

You may look at this and say it’s just a task list, but each item is a conscious decision I have to make before I start writing this post, and for the entire time I’m writing it, about whether this is the best use of my time right now (or whether it’s what I most want to do). I find this increasingly difficult the longer the task is. Reading a book, for example, has become excruciating lately because the pace is so much slower; I wonder if I should be doing something more “productive”, which in many cases leaves me not finishing the book and then not doing a good job at the next task because I’m thinking about the book I didn’t finish.

A really strong light was recently shined on this for me when I stumbled across a TED talk by Barry Schwartz titled The Paradox of Choice. It’s fairly short at only 19 minutes and it’s been around for a few years now, but I really applaud the content. In the talk, Mr. Schwartz discusses how having so many choices gives us less pleasure in, and focus on, any one decision we make, and produces a sense of buyer’s remorse about the time we allocate. Wow, talk about your first-world problems, but this hits home for me like few other things I’ve read or seen in the past year!

When I think back to the times I generally write (decent) blog posts, learn new topics, write my best code, or actually solve difficult problems, it’s often at times like this: late at night, everyone else asleep, sitting in the dark, relatively single-focused. It’s not that I am anti-social or a recluse. It’s not that I can’t prioritize and push through distractions. It’s because my options are limited at 3:30AM, the pressure of so many activities and demands is slightly farther away, and a higher percentage of my consciousness is focused on fewer problems.

It seems obvious as well that a lack of focus directly corresponds to a lack of quality in executing any one task. I’ve noticed that as my responsibilities in life and work become more diverse, and I can only allocate small tidbits of time across a great many activities, I sometimes look back and feel unsatisfied with the job that I’ve done. What’s interesting to me is that usually I’ve done a good job; I met the goal, the customer is happy, there was some positive outcome, but I know I could have done better.

Now, with everything there are tradeoffs, and my focus on individual tasks currently is being supplanted with an opportunity to discover a much vaster array of different experiences that one would hope would have a synergy in and of itself. Perhaps in a couple of years I will be able to write a similar post on whether at the time of this writing I actually understand “synergy” or if I am merely using another buzzword.

Learning Knockout JS – Crazy Mom Baby Tracker Demo

I’m thrilled to be able to report my wife and I had our second daughter on May 8th, 2012. Vanessa was 7 pounds even and very healthy. Due to the birth, I took a few days off of work to help out (in many ways, I think I was more work for my wife being home). Most of my time has been spent getting acquainted with my new daughter, but occasionally, I’ve grabbed an hour here and there, usually in the middle of the night after a feeding, to read about knockout and even write a small demo as a learning exercise.

This little app is not meant to serve any commercial value and is very simplistic, but given the current situation I felt it was a fun and fitting topic.

To any moms out there, I mean the title to be lighthearted – no offense intended!

Ok, on with the article.

Knockout Demo Screenshot

What’s It Made Out Of?

The Crazy Mom Baby Tracker is intended to exercise the following technologies:

  • HTML5
  • jQuery
  • Bootstrap
  • Knockout
  • jqPlot

At the Southwest Florida .NET User Group, we recently had a Battle of the UI Frameworks which highlighted a general movement towards JavaScript-centric applications in the .net community. I thought we were aggressive at work, perhaps even cutting edge, but alas – it turns out we’re about where everyone else appears to be right now.

HTML5 is, of course, the latest version of HTML (at the time of this blog post) and all the rage. Though I will use HTML5 semantics, there are not any earth-shattering HTML5 snippets throughout this demo.

jQuery is found throughout, as it has become the de facto standard for working with JavaScript these days.

Bootstrap was shown to me by our non-.NET marketing development team and it has been a nice addition for standardizing the HTML structure, CSS (or LESS) classes, and general user experience.

Knockout is very recent for me and the main purpose of this educational demo. It is a responsive MVVM JavaScript implementation that binds the UI to the underlying view model.

jqPlot is a jQuery charting library that I added into the project to visualize the data more interestingly.

A Little Prep Work Before We Get Building

This application is pure HTML, CSS, and JavaScript. As such, there is no need for the .net framework or Visual Studio. However, Visual Studio certainly is a nice IDE to work in, and by using the Nuget Package Manager you can get up and running very quickly. Therefore, all screenshots will be provided using Visual Studio, but this is not a requirement and you can ignore these references if you like.

First, let’s create a new Web Site. In Visual Studio, click File > New Website. Then under the template selector, choose ASP.NET Empty Web Site. By selecting this, you get a pretty bare web site that does not need to contain any ASP-related tags or code. After selecting an appropriate location to store the files you should be ready to begin.

Next, right click the project and select Manage Nuget Packages. You will want the following packages: jQuery, Bootstrap from Twitter, Knockout JS, and jqPlot.

Nuget Packages

Let’s also add a few placeholder files that we’ll work with later. Please create the following:

  • /Content/my.css
  • /Scripts/my.js
  • /index.html

Let’s Start Building

At this point, everything should be ready for us to start getting to the good stuff. Let’s open the index.html (our main application page) and add the css references for the selected Nuget packages, as well as our custom css file.

<head>
    <title>Crazy Mom Demo</title>
    <link rel="stylesheet" type="text/css" href="Content/bootstrap.min.css" />
    <link rel="stylesheet" type="text/css" href="Content/bootstrap-responsive.min.css" />
    <link rel="stylesheet" type="text/css" href="Scripts/jqPlot/jquery.jqplot.min.css" />
    <link rel="stylesheet" type="text/css" href="Content/my.css" />
</head>

Next, let’s add our script tags to bring in the code for the Nuget packages, and our own placeholder js file which we will use to add all of our custom logic to drive the application.

<body>
    <script type="text/javascript" src="Scripts/jquery-1.7.2.min.js"></script>
    <script type="text/javascript" src="Scripts/bootstrap.min.js"></script>
    <script type="text/javascript" src="Scripts/knockout-2.1.0.js"></script>
    <script type="text/javascript" src="Scripts/jqPlot/jquery.jqplot.min.js"></script>
    <script type="text/javascript" src="Scripts/my.js"></script>
</body>

Now that the references are in place, we need to build the structure of the html. Since we’re using bootstrap, we’re going to use the fixed grid layout they provide (hence the css classes “row” and “spanX”). The basics are below.

<div class="container">
	<h1 data-bind="text: title">Title</h1>
	<div class="row">
		<!-- panel for data entry -->
		<div class="span8">
                
		</div>

		<!-- panel for my cute kids picture -->
		<div class="span4">
			<!-- we won't cover adding this in the blog post -->
		</div>
	</div>
</div>

We need a container wrapping the layout, and a few other layout-related divs to format the page. Notice the h1 tag, which has our first knockout data-bind attribute. This is going to look for a property on the viewmodel called “title” and bind the element’s innerText to its value.

Next, inside the data entry panel, let’s add two textboxes and a pair of buttons to control adding baby weight entries.

<form class="form-inline well" data-bind="submit: addItem">
	<h3>Enter the baby's weight below</h3>
	
	<label>Pounds</label>
	<input id="pounds" type="text" class="input-mini" data-bind="hasfocus: true" />

	<label>Remaining ounces</label>
	<input id="ounces" type="text" class="input-mini" />

	<button type="submit" class="btn btn-primary"><i class="icon-ok icon-white"></i> Add Baby Weight</button>
	<button type="reset" class="btn btn-danger" data-bind="click: clearItems"><i class="icon-remove icon-white"></i> Start Over</button>
</form>

The css classes for the form are also from bootstrap and help to stylize the form. You can view the bootstrap documentation for more details.

The form has a knockout binding for submit to call the function on the viewmodel “addItem”. There is also a binding for the click event of the reset button to clear all data in the viewmodel (not just the form fields as normal).

Directly below the form, let’s add a section for displaying notifications and data validation. We’ll again use knockout to bind the results of the messages based on what is happening in the viewmodel.

<div id="alert" class="alert" 
	data-bind="
		visible: msg().length > 0, 
		css: { 
			'alert-success': msgType() == 'success', 
			'alert-error': msgType() == 'error', 
			'alert-info': msgType() == 'info' }">
                    
	<a class="close" href="#" data-bind="click: hideAlert">×</a>
	<p data-bind="text: msg"></p>
</div>

Let’s now add the final pieces to allow for a bit of data visualization. We’re going to use a chart control from jqPlot and a table displaying the individual entries.

<div id="resultsChart" data-bind="chart: items()"></div>

<table class="table table-striped" data-bind="visible: items().length > 0">
	<thead>
		<tr>
			<td>Weight</td>
			<td>Total Pounds</td>
			<td>Total Ounces</td>
		</tr>
	</thead>
	<tbody data-bind="foreach: items">
		<tr>
			<td data-bind="text: display()"></td>
			<td data-bind="text: totalPounds()"></td>
			<td data-bind="text: totalOunces()"></td>
			<td><a href="#" data-bind="click: $parent.removeItem"><i class="icon-remove"></i></a></td>
		</tr>
	</tbody>
</table>

The chart is interesting, as it will be a custom binding we create for knockout to work with jqPlot. Data within the table are bound to an array of items and looped through using the foreach knockout binding. I’ve also added a remove button next to each entry to allow for the removal of entries added by mistake. Notice the scoping when specifying the knockout binding; while looping through the items, we’re at the individual item level – therefore, we must move up one level to access the viewmodel directly ($parent) and call the removeItem function.

jqPlot Chart Screenshot

Wiring Up the Logic with Knockout and jQuery

Now that we have a clear picture of what we want this application to look like, let’s wire up the viewmodel and make it actually perform.

Open up your my.js file and begin by creating a good old jQuery ready event:

$(function () {
});

We’ll put our code in here. Let’s also create our own namespace with the following code to avoid any collisions.

// global namespace
var my = my || {};

We now need a model to structure the baby weight entries. Let’s create it as follows:

// models
my.BabyWeight = function(pounds, ounces) {
	var self = this;

	self.pounds = pounds;
	self.remainingOunces = ounces;

	self.totalOunces = function () {
		return (self.pounds * 16) + (self.remainingOunces * 1);
	};
	self.totalPounds = function () {
		return (self.pounds * 1) + (self.remainingOunces / 16);
	};
	self.display = function () {
		return self.pounds + 'lbs - ' + self.remainingOunces + 'oz';
	};
};

This could contain knockout observables and computed values, but that’s not particularly necessary the way the demo is set up. The BabyWeight model has two properties, pounds and remainingOunces, which together make up the entire weight of the baby. I’ve also added a few calculated properties to include in the tabular data for each entry.

Let’s now create the viewmodel which will contain the bulk of our knockout observables.

// view model
my.vm = function(existingItems) {
	var self = this;

	// properties
	self.items = ko.observableArray(existingItems);
	self.msg = ko.observable("");
	self.msgType = ko.observable("info");
	self.title = ko.observable("Crazy Mom Baby Tracker v.001");

	// methods
	self.addItem = function () {
		var pounds = $('#pounds').val();
		var remainingOunces = $('#ounces').val();
		var itemToAdd = new my.BabyWeight(pounds, remainingOunces);
		
		// validate
		if (itemToAdd.pounds == "" || itemToAdd.remainingOunces == "") {
			self.msgType("error");
			self.msg("Oops, either the baby has become weightless or you didn't enter any data.");
			return;
		}
		else {
			self.msg("");
		}

		// add to items array
		self.items.push(itemToAdd);

		// update msg
		self.msgType("success");
		self.msg("You've successfully weighed the baby in at " + itemToAdd.display());
	},
	self.clearItems = function () {

		// clear items
		self.items([]);

		// update msg
		self.msgType("info");
		self.msg("All weight entries have been cleared.");
	},
	self.hideAlert = function () {
		self.msg("");  //clearing the message will auto-hide since it's bound
	},
	self.removeItem = function (item) {

		// remove item from items array
		self.items.remove(item);

		// update msg
		self.msgType("info");
		self.msg("The weight entry has been successfully removed.");
	}
};    

Note the use of this line:

var self = this;

This helps to maintain reference to the proper this when inside callbacks from anonymous functions.
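For example (a generic sketch, not taken from the demo itself):

function ExampleViewModel() {
    var self = this;
    self.count = 0;

    self.start = function () {
        setTimeout(function () {
            // inside this anonymous callback, `this` no longer points at the
            // view model, but `self` still does
            self.count++;
        }, 1000);
    };
}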

The observables ensure that changes to their values will be automatically reflected in the UI for any bindings. For example, as an item is added or removed from the items array, the UI for the chart and table will automatically be updated, well as soon as we add the custom binding to the chart that is. Let’s add that now:

// add custom binding for charting
// (register the binding before calling ko.applyBindings so it is available when bindings are processed)
ko.bindingHandlers.chart = {
	init: function (element, valueAccessor, allBindingsAccessor, viewModel) {
		// empty - left as placeholder if needed later
	},
	update: function (element, valueAccessor, allBindingsAccessor, viewModel) {
		// prepare chart values (valueAccessor() returns the bound items array)
		var items = ko.utils.unwrapObservable(valueAccessor());
		var chartValues = [[]];
		for (var i = 0; i < items.length; i++) {
			chartValues[0].push(items[i].totalOunces());
		}

		// clear previous chart
		$(element).html("");

		// plot chart
		$.jqplot(element.id, chartValues, {
			title: 'Baby Weight'
		});
	}
};

// kick off knockout bindings now that the custom "chart" binding is registered
ko.applyBindings(new my.vm([]));

The custom binding simply updates the chart on any change to the passed in valueAccessor, which we specified in the html as the items array in the viewmodel. jqPlot uses the element.id, in our case a div tag, to act as the placeholder container to drop the chart into. See the jqPlot documentation for much more detail on creating significantly more elaborate charting capabilities.

Some Odds and Ends

I didn’t cover the my.css file but I used this to add some very minor additional styling to the page. Most of the styles though do come “out of the box” from bootstrap. I hope someone finds this useful and please feel free to correct any mistakes I’ve made – this is certainly meant to reinforce my own pursuit in working with knockout and I welcome any advice.

Fun and Struggles with MVC – No Parameterless Constructor Defined

It’s taking a little while, but I’m starting to understand the magic behind model binding in MVC. It’s fairly simple to try it out while watching the videos and tutorials that are out there, but when I applied it to our enterprise application, with its fairly large collection of existing domain models, I had less than ideal results.

Here’s the struggle I was having…

The Problem

I have an entity called a SalesRep which has several properties, including a few complex object properties such as EmailAddress, PersonName (a struct), and Address (street1, street2, city, state, etc.), as well as many primitives.

I had both GET and POST Create actions defined in the SalesRepController as follows:

/// <summary>
/// Add a new sales rep.
/// </summary>
/// <returns></returns>
public ActionResult Create()
{
   return View(new SalesRep());
}

/// <summary>
/// Save a new sales rep.
/// </summary>
/// <param name="salesRep"></param>
/// <returns>If successful, the SalesRep/List/ View.  If not successful, the current SalesRep/Create/ view.</returns>
[HttpPost]
public ActionResult Create(SalesRep salesRep)
{
   var logic = new SalesLogic();
   logic.SaveSalesRep(ref salesRep);

   return RedirectToAction("List");
}

However, when calling the Create action on the SalesRepController, I would get the following parameterless constructor error and could never even hit a breakpoint.

No Parameterless Constructor Defined for this object

The Solution

After spending quite a bit of time experimenting and searching the internet, I couldn’t readily resolve the issue. Feeling as though I was never going to grasp MVC, and cursing the videos that looked so ridiculously easy, I spun up a new trivial sandbox project. I created new simplistic models, new controllers, and everything worked perfectly. So what made my old enterprise entity classes different? The answer was… the constructors.

As you can see in the above error message, it is properly reporting that it could not find a parameterless constructor for the object; but which object? I had been looking at the SalesRep object and even the View and Controllers, but what I should have been looking at was the complex properties within the SalesRep, because the MVC model binder recursively reflects over all of the properties and creates each one using its parameterless constructor. In my case, we had an EmailAddress which specifies a single constructor:

public EmailAddress(string value) {
   //our code
}

It was while MVC was auto-magically wiring up the form elements to the email address object that the action came tumbling down.

A Simple Recreation

I’ve recreated this scenario using a simpler class slightly modified from Scott Allen’s Pluralsight demos.

Below is a screenshot of my newly created Movie class, which contains three properties: Name, Year, and Studio. Name and Year are both primitives, while Studio references another entity.

Movie Class

Here is the Studio class, with only the pertinent name-parameter constructor. There is no parameterless constructor in this class, and because we have specified a constructor, the compiler no longer generates the default parameterless one.

Studio Class

When we now reference this property within the Create View to allow for model binding we will encounter the error.

Movie View

We can correct this by adding a parameterless constructor in the Studio class and all is well again.

Catching up to MVC

Contrary to my nature, I’ve been reluctant to adopt the “latest and greatest” from Microsoft for the past twelve months or so. A good deal of my tech lag time has been due to my primary position leading an Oracle Enterprise Business Suite (EBS) project, which has put me in the world of red, not blue. It’s been the unfamiliar – Oracle DB, JDeveloper, and PuTTY – that I’ve been working in more than anything made in Redmond lately. However, there is an equally valid reason – I’ve lost a bit of my Microsoft faith over the countless hours of podcasts, articles, and events that left me completely unsure what Microsoft was doing (and questioning whether Microsoft knew as well). For quite a while it felt as though everything Microsoft created was tossed into the ecosystem somewhat half-baked, and whatever received the most buzz stuck. Perhaps this is Microsoft adopting some of the open source mentality that has been so hard on Microsoft in the past; perhaps it was simply to create a competitive nature internally at Microsoft; but to some critics, and even a handful of die-hard developers like myself, it seemed scattered and without direction.

There’s a bit more to the problem that I now understand more fully than when I was younger: time is not an infinite resource. Before, I would jump wholeheartedly into learning a new technology with vigorous disregard for whether that technology would rise to the top or fall by the wayside. The cost was minimal; it only meant time away from things I probably shouldn’t have been doing anyway.

Now that I have a daughter (as well as another on the way) and other family demands, I have to choose very carefully what my training time can go to. Should the focus be MVC, WP7, Silverlight, BizTalk (or gasp, Oracle SOA Suite), Entity Framework, jQuery, HTML5/CSS3, Azure, Denali, Kinect SDK, or any of the now vast array of Microsoft offerings available to developers?

So, as I’ve found myself occupied in other technologies, I rode the fence to see what shook out. It looks like Microsoft is finding their groove again and it’s time to brush aside the fallout and introduce the winner(s).

Welcome MVC to My Toolbox

As most of you know, MVC as a pattern has been around for quite a while. It is for this reason that I originally waited to see if Microsoft’s implementation would survive, or if the community would look to past MVC frameworks and decide to go back to something from the open source community or an established Java MVC implementation ported over.

Obviously, that does not appear to be the case and Microsoft’s MVC implementation has been a huge success.

So now it’s time to get to work. I’ll be periodically posting about my learning process with MVC if any other developers are interested. There is a huge list of resources now available for MVC and as I post (or you comment), I will try and steer developers to what I consider the most useful.

Disclaimer and decisions made about the example material

Please note that there is still a plethora of examples showing simple MVC apps which I am intentionally choosing to not use in these blog posts. I am starting with a quite large project to accurately compare MVC to “the real world of enterprise applications”. I’ve also decided to employ logic and models via a service layer instead of directly in the MVC project. P.S. – way to go MVC for allowing this, it should be played up more in examples!

Also, in case you didn’t pick up on this earlier, this is a learning journey for me with MVC – I am by no means an expert… yet.

Without further ado, here are my notes from the first foray into MVC with an enterprise application.

Project Orange (the anti-apple)

It’s not important what this project is, but it is intended to have multiple tiers where the web tier will be MVC. Let’s look at the setup of the projects:

Project Orange Projects

BusinessDataLogic is a combined BLL and DAL using Entity Framework.
Models are the business entities and DTOs.
Services are WCF services which wrap the business logic and expose DTOs.
Utilities are just helper classes that are common throughout the tiers (extension methods, etc.).
Web is MVC.

MVC Presentation Tier

Let’s take a closer look at the web application and the asset structure:

Project Orange MVC

As you can see, I’ve added a service reference for my WCF Inventory service. I’ve also added an ItemController and a corresponding View directory for Item.

I’ve added one new method within the ItemController for searching:

//
// GET: /Item/Search/had

public ActionResult Search(string text)
{
   var client = new InventoryService.InventoryClient();
   var items = client.SearchItems(1, 1, text);
   return View(items);
}

I’ve created a Search view which was auto-scaffolded based on the strongly typed model. The scaffolding is a nice feature and I can see this being extensible in the future.

Two things did trip me up for a short while and required some trial and error. The first was that I needed to create a new routing rule in global.asax to handle my Search action URL format:

//Search Item
routes.MapRoute(
   "SearchItem",
   "{controller}/{action}/{text}",
   new { controller = "Item", action = "Search", text = UrlParameter.Optional }
);

I also discovered that _Layout.cshtml is essentially the masterpage (called a layout in MVC terms) and is auto-loaded by default from _ViewStart.cshtml.

That’s actually not a bad start for just opening up a project template and getting going; the wizard-style adding of files was very intuitive. Now it’s on to the resources to start learning more.

What Resources I’m Using This Week

Pluralsight: I’m finishing up the free MVC video on ASP.net and like the quality so much that I’ve requested seats for our entire team through work. The catalog is certainly Microsoft-based but has some “fringe” technology courses as well.

I’m not paid for any recommendation or traffic to Pluralsight, I just like their material.

HTML5 by Bruce Lawson and Remy Sharp: In line with using MVC, I think this is also an appropriate time to embrace HTML5 (as the rest of the world has gone crazy over it), since the HTML generated by MVC now appears to use the HTML5 doctype as well.

I did enroll in Amazon’s referral program, so there is a slight compensation for purchasing the book after clicking the image below.