Docker: VMs, Code Migration, and SOA Solved

It’s rare that a piece of software as new as Docker is readily adopted by startups and huge, well-established companies alike. dotCloud, the company that created and maintains Docker, recently nabbed $40 million in funding. So what is all the hype about?

Docker solves two of the most difficult problems in deploying software: painlessly spinning up VMs, and bundling together application code with the deployment environment.

Spinning up a new, customized instance is as easy as the click of a button. Migrating code between platforms is trivial because the application code is packaged with its environment. We at Keyhole have been seeing a lot of traction around Docker in the past few months. We currently use it in one of our applications to manage our deployment process. The reasons for this flurry of activity will become clear once I go into what separates Docker from other hypervisors and deployment tools.

VMs on Steroids

Virtual Machines (VMs) are amazing tools that have helped further abstract the runtime environment from the physical hardware. Unfortunately, VMs come with a pretty steep performance penalty on both startup and execution.

The reason for most of the problems with VMs is a duplication of work. To understand this duplication, think of the structure of the Linux operating system. There is a clear separation between the Linux kernel, which manages low-level tasks like networking and threading, and user space, which is everything outside of the kernel.

[Image: the Linux OS, with the kernel separated from user space]

Traditional hypervisors like VirtualBox and VMware run their VMs in user space. When a traditional VM starts an instance of a machine, it spins up a Linux kernel and a user space inside of an existing user space.

[Image: a traditional VM running a guest kernel and user space inside the host's user space]

This is where the duplication comes into play. Why should a second Linux kernel run inside user space when there is already a Linux kernel for it to use? It shouldn't. That is what the makers of Docker realized: as long as the Linux kernel of the VM matches that of the host machine, there is already a clear separation that the VM's user space can take advantage of.

[Image: a Docker VM attaching its user space directly to the host's kernel]
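You can verify this shared-kernel idea yourself. A minimal sketch, assuming Docker is installed: both commands print the same kernel version, because the container has no kernel of its own.

uname -r
docker run --rm ubuntu:12.04 uname -r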

When a Docker VM starts up, it attaches the VM's user space to the host's Linux kernel. This means that boot happens in a matter of milliseconds, and performance is roughly 97% of software running directly on the host machine. Docker has all of the advantages without any of the drawbacks. Plus…

Deployment Solved

A Docker VM is generated from a well-defined script called a Dockerfile. The Dockerfile specifies what flavor and version of Linux to use, what software to install, what ports to open, how to pull the source code in, etc. Everything you need is bundled together in one file. This is an example Dockerfile from a project I did a few months ago:

FROM ubuntu:12.04
MAINTAINER Zach Gardner <zgardner@keyholesoftware.com>

# Update apt-get
RUN apt-get update

# Create container
RUN mkdir /container
RUN mkdir /container/project

# Install NodeJS
RUN apt-get --yes install python g++ make checkinstall fakeroot wget
RUN src=$(mktemp -d) && cd $src && \
    wget -N http://nodejs.org/dist/node-latest.tar.gz && \
    tar xzvf node-latest.tar.gz && cd node-v* && \
    ./configure && \
    fakeroot checkinstall -y --install=no --pkgversion $(echo $(pwd) | sed -n -re"s/.+node-v(.+)$/\1/p") make -j$(($(nproc)+1)) install && \
    dpkg -i node_* && \
    rm -rf $src

# Install NPM
RUN apt-get --yes install curl
RUN curl --insecure https://www.npmjs.org/install.sh | sh

# Install Bower's dependencies
RUN apt-get install --yes git

# Install PhantomJS dependencies
RUN apt-get install --yes freetype* fontconfig

# Move source code to container
ADD / /container/project

# Install NPM dependencies
RUN cd /container/project/ && npm install

# Install Project's Bower dependencies
RUN cd /container/project && (echo -e "\n" | ./node_modules/bower/bin/bower install --allow-root)

# Compile code
RUN cd /container/project && ./node_modules/grunt-cli/bin/grunt build

# Start server
CMD /container/project/node_modules/grunt-cli/bin/grunt --gruntfile /container/project/Gruntfile.js prod

The first thing I do in this script is declare that I'm building on Ubuntu 12.04. I then install NodeJS, NPM, and git; copy my source code into the container; download the runtime dependencies; compile my code; and start my server.

When you pass a Dockerfile to Docker, it generates a Docker image. The best way to think of a Docker image is as a self-contained zip file that holds everything an application needs to run.
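Generating the image is a single command. A minimal sketch, assuming the Dockerfile sits in the current directory and reusing my image name from later in this post:

docker build -t zgardner/myapp .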

[Image: a Docker image, bundling the OS, dependencies, and application code]

Combining the source code and the execution environment is a complete paradigm shift from traditional deployment methods. Rather than moving code, having a human execute shell scripts to update the environment, and wishing for the best, you can instead promote fresh Docker images between platforms. There is virtually no human intervention required, which reduces the chance of mistakes. Best of all, once the QAs have signed off on a specific version of the application, you can be sure the application won't change as you migrate it up through your platforms.

Interestingly, the Docker paradigm of transferring images falls in line with some of the research that has been done in reactive programming. Managing state is one of the most difficult things to do in an application, and mutability makes writing thread-safe code a non-trivial job. By shifting the thinking over to processing immutable pieces of data, threads and operations can be optimized much more easily than before. Docker follows that paradigm by getting rid of the concept of in-place software updates on platforms. It is much easier to scrap the running Docker image and replace it with a new one than to worry about how to upgrade the current running image. No scripts are necessary to upgrade things like an out-of-date Java version, or other worrisome things like that. Docker takes the guesswork out of what software is running on your platforms: it's whatever the developer specified in the Dockerfile.

Migrating the image across platforms is a trivial job. Docker images can be pushed to a Docker registry (public or private), and pulled down onto the desired platform. The syntax is very similar to git:

On a development platform

docker push zgardner/myapp

On a production platform

docker pull zgardner/myapp
docker run -i -t zgardner/myapp

In the example above, I first push myapp up to a Docker registry from the development platform. I then pull it down on a higher platform and run it.

[Image: a Docker image moving from development to production]

The term for a running Docker image is a Docker container. I omitted some steps above, such as shutting down the existing container and specifying the ports through which the container should communicate. Those are details that each organization using Docker can decide for itself.
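For illustration, here is a minimal sketch of those omitted steps. The container name and port numbers are my own assumptions, not from a real deployment:

# Stop and remove the previous container (the name is hypothetical)
docker stop myapp && docker rm myapp

# Run the new image in the background, mapping host port 80 to the app's port 3000
docker run -d --name myapp -p 80:3000 zgardner/myapp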

The idea of being able to migrate code so easily is a welcome revolution compared to every in-house, proprietary, home-brewed solution I’ve ever seen. Combining the power of a Linux-kernel attached VM with a simplified migration process has some pretty powerful ramifications. We at Keyhole have been experimenting with CoreOS and Fleet to deploy new servers, set them up with Docker, and download Docker images all from the Amazon AWS console. We’re also starting to experiment with…

Service Oriented Architecture: Easy as Sliced Bread

Docker is the first true DevOps tool. It allows developers to easily specify the environment in which their code should be executed. It also removes the stress of worrying about upgrading the environment.

Because Docker images are static, applications need to offload persistent data outside of themselves. This is commonly done by mounting external storage, such as an AWS volume, when defining the Docker container. It also means that the code contained inside the application needs to be small, concise, and very focused. Because the application will run inside an isolated environment, it needs to be written as if it could run on an island by itself.
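A minimal sketch of the volume mounting mentioned above; the host and container paths are hypothetical:

# Mount a host directory (e.g., backed by an attached AWS volume) for persistent data
docker run -d -v /mnt/data:/container/data zgardner/myapp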

This isolation tends to lend itself very well to a SOA (Service Oriented Architecture). A SOA is a different way of thinking about an API and how applications in general are composed. The traditional way of thinking of an application is as a system composed of smaller technical parts. These technical parts may be things like Products, Customers, Funds, or cron jobs. Some of these technical parts may be needed in other applications that a software company has written. The traditional approach is to either try sharing the code, which often doesn't work because it was written with only the original application in mind, or to copy the code outright, which really doesn't work.

An application written with a SOA in mind has a completely different goal. In a SOA, applications are pieced together by composing business needs with application code. Some of the technical parts mentioned above are actually business needs. These tend to be the same across applications, though they may be used in different ways depending upon the UI. If each of these business needs can be siloed behind a well-defined, consistent interface, they are reusable by design.

[Image: Docker with a service-oriented architecture]

Making a SOA a primary focus of an organization allows new applications to be pieced together quickly and effectively. Amazon was one of the first companies to pioneer this approach. SOA is catching on with a lot of other small and large companies because it works so well.

Docker lends itself very nicely to a SOA. Each service can be conceived of as a separate Dockerfile. Migrating services across platforms is as easy as pushing a static Docker image and pulling it down. The services are isolated by the very nature of running in independent VMs. Using an API documentation tool like Swagger can help make each service even more well defined.
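Operationally, this might look like the following sketch; the service names, images, and ports are hypothetical:

# Each business need runs as its own container, built from its own Dockerfile
docker run -d --name customers -p 8081:3000 zgardner/customers-service
docker run -d --name products -p 8082:3000 zgardner/products-service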

At Keyhole, we're rewriting our Q&A system along with our Timesheet application to use a SOA with Docker. The results have been very promising so far. Future blog posts will detail the different things we find out and experience during this process.

To Wrap it All Up

Docker is a very powerful tool that we believe will be an industry standard within the next few years. It flips the VM and application migration process on its head. It will be exciting for us to help clients implement it in their stack, and see serious savings in time and energy. Docker will allow developers and enterprise system engineers to stop worrying about build and deployment issues, and instead focus on what matters: building beautiful applications.

Managing Delegated Asynchronous Work

The human brain was built to be an amazing machine. We take for granted just how much it allows us to do. Often, we take it for granted at our own peril. We try to use our brain to do things it simply wasn’t designed to do. We set ourselves up for failure with no corrective action taken.

One of the most common brain-fails involves delegation of tasks. The 70% rule provides a great guideline on when to keep something and when to delegate it. What it does not mention is everything that has to happen after a task has been delegated: the person it was delegated to must see the communication, prioritize and schedule it, and work on the task, and confirmation must be given that the task has been completed to satisfaction.

The micromanager in me wants to check in with that person every hour on the hour until the task has been completed. This tendency has led to some pretty bad friction, and has stunted my growth when it comes to successfully leading a team.

So, in the interest of self-accountability, I have come up with a proposed solution. Whenever I delegate something, send out an email that needs a timely response, or assign anything with a definite due date, I will create an event in my Google calendar to remind me to check up.

Part of my micromanaging comes from my inability to remember things in an asynchronous manner. It's very difficult for me to say to myself, "OK, three hours from now I will remember to text someone to see where they're at on this project." I may remember two hours from then, or five. The difficulty of remembering has pushed me to the other extreme of constantly checking in more than necessary.

I Need a Lightweight, Unopinionated, NodeJS, MongoDB CMS

Said no one ever.

Seriously, people. If I see one more NodeJS+MongoDB anything, I will find you and shake you until you understand why that is a bad idea.

You have to be a fairly intelligent person to develop software. Keeping multiple things in your mind, remembering algorithms and best practices, and being able to think at a high conceptual level are some of the biggest barriers to entry to our field. Developing software is a very difficult job, which is why it amazes me when a developer commits the cardinal sin of software.

It’s easy to do. Everyone does it. Even Hollywood:

When you hear that your coworker is building the next great NodeJS static site generator, the voice in the back of your head will say something is wrong, even if you can't pin down exactly what. If Kevin Costner is the poster boy, you know something is wrong.

Does your code solve a real problem?

Think about it for a second. Out of the seven billion people alive today, is there one person other than you who says, "Yes, I need another CMS to choose from"? If you can find that person, let me know, because I don't believe you.

When I decided that I wanted a website, I didn't go off the deep end and try to write my own CMS. Writing a good CMS is hard work, and I have something more important to do with my time: I want to share my thoughts with the world. Time is a limited resource, and I wanted to minimize the amount of time I had to spend to share my first blog post. So I did the logical thing and chose WordPress, one of the most popular pieces of software in the entire world.

Millions of other people have been testing WordPress for years. I didn't have to do a single thing to make sure that it worked. All I had to do was install it, set it up, and start writing. And it was great.

Sure, I could have written my own CMS in Node with a service-oriented API. I didn’t though because that was not my problem.

Software developers jump too quickly into "I can build this piece of software." They miss "does this piece of software solve a real problem that real people have?" Paul Graham hit the nail on the head:

Why do so many founders build things no one wants? Because they begin by trying to think of startup ideas. That m.o. is doubly dangerous: it doesn’t merely yield few good ideas; it yields bad ideas that sound plausible enough to fool you into working on them.

For example, a social network for pet owners. It doesn’t sound obviously mistaken. Millions of people have pets. Often they care a lot about their pets and spend a lot of money on them. Surely many of these people would like a site where they could talk to other pet owners.

The danger of an idea like this is that when you run it by your friends with pets, they don’t say “I would never use this.” They say “Yeah, maybe I could see using something like that.” Even when the startup launches, it will sound plausible to a lot of people. They don’t want to use it themselves, at least not right now, but they could imagine other people wanting it. Sum that reaction across the entire population, and you have zero users.

Listen to PG, people. Stop building things that no one wants, and start building things that solve actual problems.

IE AJAX POST Requests

I've seen some pretty strange things with IE in my day as a web developer, but what I saw this past week takes the cake. We all know IE does some strange stuff, but this goes way beyond what any of us would consider "normal" or "acceptable" behavior for a browser. I'm not exaggerating: IE has a huge, obvious performance issue, and no one is talking about it. Yet.

Imagine that in a function you make an AJAX request, then do something to the screen, like show a Please Wait message. That kind of behavior is pretty standard, pretty vanilla. Right? Nothing bad should happen with something so simple, right?

for (var i = 0; i < 10; i++) {
    // Do an AJAX POST
    $.ajax(document.location.href, {
        data: {
            name: 'Zach Gardner'
        },
        method: 'POST'
    });
    // Do some expensive DOM stuff
    for (var j = 0; j < 100; j++) {
        var el = document.createElement('div');
        document.body.appendChild(el);
        for (var k = 0; k < 100; k++) {
            var child = document.createElement('div');
            el.appendChild(child);
            el.removeChild(child);
        }
    }
}

This is a simplification of something pretty common in a Single Page Application. You have a bunch of modules that act independently of each other, fire off an AJAX request when an event occurs, and make some updates to the screen. The updates to the screen may take some time, but that's fine, because the AJAX request has already been sent on its merry way to the server. Right.

Right?

This is what it looks like in Chrome:

[Image: Chrome's Fiddler timeline for the AJAX POSTs]

It’s exactly what you would expect: the first request is sent to the server, the thread keeps processing, the server responds, the thread has already started the next request, etc. The client and server are happily independent of each other.

Now, you may want to sit down if you’re at a standing desk. I’m not kidding. This chart may be NSFW, so make sure you don’t say any bad words when you see it.

Last chance. Don’t say I didn’t warn you.

This is what IE looks like:

[Image: IE's Fiddler timeline for the same AJAX POSTs]

OH MY GOD, I AM ABOUT TO HAVE A HEART ATTACK LOOKING AT THIS!

Seriously, look at all that blue. That sweet nectar of Hades blue. What is that blue, you ask? Why, that is all the time that IE is holding on to your AJAX request.

I’m serious. Take a look at one of the requests:

[Image: a single request's Fiddler statistics in IE]

Does something stand out to you, other than the 4.263 seconds it took to serve up a freaking static file from JSFiddle?

[Image: the Fiddler timers for the request]

Let me blow that up even more:

[Image: the Fiddler timers, enlarged]

ClientBeginRequest is when IE created the AJAX request. This corresponds to the moment when jQuery called the send() method of the xhr object.

ClientDoneRequest is when IE actually sent the AJAX request to the server for processing.

It took IE 4.178 seconds to actually send the request to the server.

During that 4.178 seconds, no work was being done on the server. IE was just sitting there, creating AJAX requests and updating the screen, while the server did nothing.

NOTHING.

Oh, and did I mention that this only happens with a POST?

WTF??????????

Fiddler is not the only one that shows this behavior. If you use the IE dev tools on the JSFiddle link at the bottom of this post, you will see the following:

[Image: the network tab in the IE dev tools]

This corresponds to the Timeline in Fiddler. Clicking on one of the requests will also show the gap:

[Image: the timings for a single request in the IE dev tools]

The first Start is when the browser transmits the URL to the server. The second Start is when it sends the request body. Does anyone honestly think it should really take 4.32 seconds to send a POST with my name to a static server?

Think about all of this for a second. How many corporations use IE? Like, all of them. How many of them have web applications? Like, all of them. How many of those use AJAX requests? You know the answer by now.

How many seconds, minutes, hours, days, weeks, months, and even years are wasted every single day because IE will not send the AJAX request to the server until the current thread is done?

The humanity of it just astounds me. And guess what: this still happens in the IE 12 developer preview.

!!!

Don’t believe me? That’s fine, I didn’t believe me either until I spent a week looking at the results, ran it by the smartest developers I know, and we all stood there and were like:

LOL WUT?

Fine, don’t believe me. Be like that. Run the test yourself.

First, open Fiddler. Then go to this fiddle in Chrome and run it:

http://jsfiddle.net/zgardner/tke04n51/

Run it, then look at the requests that were sent. Highlight them, then click on the Timeline tab. Do the same thing in IE. Click on one of the requests, then on the Statistics tab. You will see that ClientDoneRequest for all the requests is nearly the same, and always correlates to the time that the current thread finished.
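If you have to live with this behavior in the meantime, one possible mitigation (my own sketch, not part of the original test, so verify it against your IE versions) is to yield the thread right after issuing the POST, so IE has a chance to flush the request before the expensive DOM work begins:

// Issue the POST first
$.ajax(document.location.href, {
    data: { name: 'Zach Gardner' },
    method: 'POST'
});

// Then yield with setTimeout so the current thread finishes and IE can flush the request
setTimeout(function () {
    for (var j = 0; j < 100; j++) {
        var el = document.createElement('div');
        document.body.appendChild(el);
    }
}, 0);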

Here are my Fiddler traces so you can see where I got my numbers and pictures:

IE Fiddler Trace
Chrome Fiddler Trace
(Change the extension to .saz, then open in Fiddler)

Please, prove me wrong.

Defragmentation

If I’ve said it once, I’ve said it a hundred times: principles you can apply to discrete areas of life are worth their weight in gold. Who would think that I could tie together defragmenting a hard drive and work-life balance? I do, that’s who.

I first ran into defragmentation when I noticed my Windows XP machine was running slowly. So I opened my IE 6 browser, went to Google, and tried to figure out what was wrong with my computer. One of the first results said that fragmentation can cause everything you do on your computer to slow down. So I followed the instructions, defragmented, and like magic everything was faster.

I learned in college the theory behind why fragmentation occurs and how defragmentation ameliorates the performance issues. When you write a file to disk, the operating system has to find enough free space for it somewhere. It writes the data in blocks, which means there can be some leftover space if, for instance, a file only needs two and a half blocks. If you have another file that only takes up half a block, it would fit perfectly into the remaining space.

Theory, in this case, is very different from reality. What ends up happening is that there are a lot of leftover pieces of blocks that can't be used. Fragmentation comes into play when a file has to be written to the disk in blocks that are some distance apart. This is bad and slow, and can be mitigated with defragmentation: some additional time is taken to move things around on the disk so that contiguous free space is maximized. New files can then take advantage of close spatial locality to increase performance.

Technical talk aside, the idea of defragmentation is a powerful one. I see many people give advice that uses the principle of defragmentation without explicitly mentioning it. Nearly every article I read on how to boost your performance at work suggests becoming more focused by removing distractions like Twitter, Facebook, Reddit, etc. The human brain was built to focus on a single task; true multitaskers are the exception. The majority of us, myself included, work best when we attack one problem at a time.

I've found huge performance boosts by intentionally taking away distractions while I program. I make myself conscious of when I need to be programming and when I can take a break. By never mixing the two, I've noticed my programming is better and my breaks leave me more refreshed. I perform better at my job by turning my work day into discrete fragments, focusing on the task at hand without distracting myself.

In keeping with the ubiquity of the principle, I apply defragmentation to nearly every aspect of my life. One of the most obvious is the hours I work. Through some self-analysis, I found out that my most productive hours are in the morning. I think it's because willpower is naturally high in the morning, and decreases as the day goes on and more choices need to be made. By maximizing the hours I spend working at my peak time, my work has become noticeably better.

Defragmentation comes into play because of how I spend my time before and after work. If I had it my way, I would start working 10 to 15 minutes after I wake up in the morning: enough time for a quick shower, breakfast, and anything else that comes up. The pre-work fragment is necessary, so there is no way I can cut it out. I normally start working an hour and 20 minutes after I wake up, which I'm still trying to minimize. My optimal work fragment is eight hours with a five-minute lunch break.

With around 10 hours spent in the pre-work, work, and commute-home phase, that leaves me 14 hours for everything else in my life. If I sleep for seven hours, that leaves a seven-hour "Zach" fragment. If I wake up at 5 AM, start work at 6:20 AM, leave work at 2:30 PM, and get home at 3 PM, the time between 3 PM and 10 PM is mine. My "Zach" fragment is as large as it can possibly be given the constraints of my system (i.e., my life).

I noticed a direct correlation between being cranky and starting or leaving work late. My body was telling me I needed a larger chunk of time; it just took my mind a while to realize my schedule needed defragmentation. Since I switched to this new schedule, my work and my overall mood have improved. I have more energy than I did before, even while getting slightly less sleep. By allowing myself to move from one fragment to another with as little overlap and mixing as possible, my life has become defragmented and overall better.

IE XML5633 Error when using jQuery.parseXML()

I saw something very interesting today while debugging an issue that a QA reported. Our QAs learn as part of their training to always keep the Dev Tools open, and to create a bug whenever a console error comes up. The bug I was looking at was created when a QA saw the following:

XML5633: End-tag name does not match the corresponding start-tag name. Line: 1, Column 10

I ran a Fiddler trace, and noticed an AJAX request was returning a 401 right before this error was shown in the console. I put a console.log() before the request and another in the success callback. Both logs would show, then the XML5633 message would come up.

Uh, what?

My instincts told me that something was doing a setTimeout() and processing the XML outside of a try/catch. I started adding return statements in the methods that were called, to isolate the general region it was coming from. I found the error did not show up if I put a return right before the AJAX request, and did show up when I put a return right after the request. This told me something in the handling of the request was causing the error.

After some additional digging, I found that in the case of a non-200 HTTP status code, our code goes through some rules to determine how to handle the error response. If the status is not a 404 or a 500, it tries to parse the response and look for a relevant error message. It uses jQuery.parseXML() to convert the response from a string to a document it can use XPath on.
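A minimal sketch of that handling, with names of my own invention (the real application's code differs):

function extractErrorMessage(jqXHR) {
    // 404s and 500s are handled elsewhere
    if (jqXHR.status === 404 || jqXHR.status === 500) {
        return null;
    }
    try {
        // Convert the response body from a string into a queryable document
        var doc = jQuery.parseXML(jqXHR.responseText);
        var node = doc.getElementsByTagName('message')[0]; // hypothetical element name
        return node ? node.textContent : null;
    } catch (e) {
        return null;
    }
}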

When I put a return right before the parseXML() call, no error showed up. When I put one right after, the error showed up. This told me it was happening somewhere inside the jQuery code. What's strange is that the call to this method was wrapped in a try/catch, the method was throwing an exception, and the browser displayed the exception in the console even though it was explicitly caught.

Internally, jQuery uses the DOMParser object and its parseFromString() method. jQuery's call to that method is also wrapped in a try/catch, so it seems strange that the error was still showing up in the console. Narrowing things down with return statements like this is a version of my Bottom-Up debugging technique.

I created a Fiddle to test this situation:

try {
    var dp = new DOMParser();
    dp.parseFromString('<a><hr></a>', 'text/xml');
    alert('Parse successful');
} catch (e) {
    alert(e);
}

The HTML returned in the AJAX request was actually invalid XML: the HR tag never closed itself. All browsers should be able to convert <hr> to <hr/>, just like <br> is handled like <br/>. Chrome's DOMParser was able to figure this out, but IE's threw an exception.

I'm fine with it throwing an exception. I'm not fine with it showing an error in the console when the call is inside two try/catch blocks. It's very difficult to explain to QAs why an error should be logged in one situation but not another. Fixing the HTML to be <hr/> would fix the immediate issue, but I'm sure this will pop up again.
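One defensive option, a sketch of my own rather than anything from jQuery, is to detect parse failures without relying on exceptions at all, since other browsers report failure through a <parsererror> element instead of throwing:

function safeParseXML(str) {
    var doc = null;
    try {
        doc = new DOMParser().parseFromString(str, 'text/xml');
    } catch (e) {
        return null; // IE throws on invalid XML
    }
    // Chrome and Firefox embed a <parsererror> element instead of throwing
    if (doc && doc.getElementsByTagName('parsererror').length > 0) {
        return null;
    }
    return doc;
}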

This happened in IE 9, 10, and 11. :(

KCDC 2014 Post-mortem

KCDC 2014 was my first conference. As a complete outsider to the conference world, I found KCDC absolutely fascinating. I have never seen so many developers in one place before. Being able to talk with other people in my field was such an amazing experience.

The diversity of expertise is one of the biggest selling points for me. I got to talk with people who have completely different skill sets and objectives than I do. It is a very humbling feeling to know there are gobs of people smarter than you are, and it has already helped inspire me to become a better developer.

The most rewarding part of KCDC was giving my own presentation:

Advanced JavaScript Debugging for Agile Teams

My presentation was at 3:20 on Friday, May 16th. I got to attend five presentations before mine, so I got a good feel for how other people presented their topics. Although my presentation was later in the day than I would normally like, seeing other people present really helped me get into the zone and find what worked with this particular crowd.

The best presentations I saw were the ones where the presenters got conversational with the audience. Having a conversation with ~100 people is difficult, but it can be done. Telling anecdotes and funny stories about something shining spectacularly or failing miserably all contributes to feeling like a true part of the presentation. Moving around and talking with their hands also made for memorable presentations.

When I think back on my presentation, there are some things I definitely want to change for next year. Including a bit.ly link to my slides in the printed presentation description is desperately needed; having to frantically write down a URL is a pain point I saw in multiple presentations. I'm planning on including one when I submit my paper, then making the link live right before the presentation.

The second thing I want to change is to pass out business cards. Handing everyone attending the presentation a card with the date, a link to my presentation, and a personalized thank-you message would be an amazing personal touch. Building true relationships starts with simple acts that lead to true appreciation. Doing something that simple will make my presentation even more memorable.

KCDC 2014 was one of the most enjoyable three days I’ve had in a long time. I look forward to it next year, and am glad everyone involved in it spent the time to make my time there as positive as it was.

Dale Carnegie’s How to Win Friends and Influence People: Part 3

This is the third part in my series on Dale Carnegie's How to Win Friends and Influence People. Part 1 talked about never criticizing, condemning, or complaining. Part 2 was about finding what makes a person feel important.

As I've been reading this book, I've found that there are many things Carnegie recommends that I already do subconsciously. It's good news that I'm already doing them, but bad news that I don't understand the theory behind why they work. The next point Carnegie brings up is one of those things I find myself doing all the time:

Lavishly give appreciation and encouragement

There are as many different ways to approach a problem as there are people who have ever lived. No two people are alike, so no two people will fix something in the same way. I've had the privilege of watching many different types of personalities attack extremely different problems.

I try to find common threads that help me in a broad range of situations. Somewhere in the process of being exposed to so many different problem solvers, I must have subconsciously picked up on the one thing they all had in common: a positive attitude.

No one wants to work in a negative environment. If I had to sit at my desk for eight hours a day surrounded by negativity, I would go crazy. Negativity spreads like a disease. People gravitate toward negativity because it is easier to be critical of a situation than to try to fix it. Being negative also helps break the ice, and gives people something in common to talk about.

Unfocused negativity is a destructive force in the problem solving process. I have never solved a problem just by thinking about how much I didn't want to solve it. The only way to effectively solve a problem is to believe you can solve it.

People who believe in themselves are infectious. If you had to spend 10 minutes with either a miserable, dour fellow or a charismatic chap, the person who makes you feel happiest wins. The same is true for the people you work with.

People know when someone is truly happy. A happy person radiates positive feelings to everyone around them. They help people get through the tough parts of problem solving. They let others know they appreciate them. They make sure people get recognized when they do something right. They are focused on the overall positivity of the team, not just their own feelings.

You can't fake genuine happiness. Being happy comes from within. I am a very happy person, and make sure that everyone knows about it.

Dale Carnegie’s How to Win Friends and Influence People: Part 2

In my blog post yesterday, I talked about one of the first principles Carnegie brings up in his book:

Never criticize, condemn, or complain

My understanding of this principle is to try to understand where the other person is coming from when they made a mistake. Humans try to be rational with their choices. Understanding the context behind a decision allows lasting corrections to be made to a person’s behavior.

The next principle I found is spelled out in a few different ways. The way I think of it is the following:

Identify what makes a person feel important

Dale Carnegie argues that every action a person takes can ultimately be traced back to what makes that person feel important. If a person wants to be admired, they will put themselves out in the open more often than someone who feels important by making silent contributions to society. If a person feels important by pointing out flaws, they will not get along well with someone who finds their importance in perfectionism.

Our zeitgeist's version of this is the 5 Love Languages. There are so many things going on in a relationship that it seems to be more of an art than a science. The love languages characterize the way a person expresses love, helping the other person in the relationship understand their actions.

My love language is Acts of Service. I find myself taking care of chores without being asked, and then expecting to be thanked. I get frustrated when I am not thanked or recognized after going above and beyond expectations. Because I know my love language, I realize that my frustration comes from my own beliefs rather than the actions of those around me. Knowing how you feel important is critical to keeping yourself in check.

The same is true for those around you. Every single person differs in some way when it comes to what makes them feel important. Some people feel important when they ask a question and get an immediate response. Some people's sense of importance comes from giving feedback, so I let them talk without interrupting.

Having a legitimate desire to make a person feel important is critical to being successful in the workplace. I am constantly figuring out how each person wants to feel important. By making them feel important, I end up creating a more positive workplace than if I just focused on making myself feel important.