Is it actually possible to have an empty inbox? Try this and find out!

I’ve developed a system over the years that has kept my inbox mostly empty all of the time. It has worked for me even when I was getting 100+ emails/day, so I’d say it scales fairly well. It also works well even in the absence of Gmail’s additional feature set (I use Office 365 personally, but this worked when I used Gmail, Apple Mail and my own mail servers back in the day), which is nice should you ever choose to use a desktop mail client.

This might not work for you. You might even be doing some variation of this already. If that’s the case, feel free to tell me off!

Finally, if you don’t want to worry about any of this stuff and don’t ever see yourself having to use Outlook or Mail.app ever again, try Google’s Inbox and tell me that all of this is useless in the comments!

Without further ado, this is how I email:

  • I use folders to categorize my mail. I used to abuse folder structures by having folders for particular events, purchases, conversations, etc., but I found that it didn’t provide a lot of value and was really difficult to re-assemble after email migrations, so I now keep a minimal top-level directory structure instead. The folders that I use most often are:

  • Services/{Added, Removed}: For keeping track of new and deleted accounts I make (and I make a lot)

  • Career/{Accomplishments, Failures}: For keeping track of things I’ve done right and wrong
  • Events: Self-explanatory
  • Responded: Emails I’ve responded to
  • Responded/Sent To Me Directly: Emails sent just to me, see below
  • Purchases: Self-explanatory
  • Receipts: For receipt tracking
  • Team: Important emails from or about my team
  • Personal Messages: Important, yet personal, messages
  • Tasks: I’ll explain this below
  • Timing: I’ll explain this below

This is more useful in the presence of Gmail labels, where you can mark something as being in a particular folder without having to physically move it. It still works well for me without that feature, however.

  • My inbox is my to-do list. This is why I said I keep my inbox “mostly” empty. If a message is in my inbox, it’s something I need to follow up on; if it isn’t, it doesn’t stay there at all.

  • Follow-ups are flagged (starred). Any email that requires an action from me is starred. Gmail has this neat feature where you can change the color of the star when you star an item. Outlook has this too, with its different flag types and its color-coded categorization system (which is really neat but is a mondo pain in the butt to reconfigure after migrations).

    This feature of my system is really important to me, as it helps me keep track of what my schedule is even in the absence of calendar entries (which I sometimes forget to create). That said, it’s been a personal goal of mine to schedule things in emails as soon as I get them so that I don’t have to worry about forgetting later.

  • I rank my emails using the “Eisenhower” Decision Matrix. I say “rank” because this works much better with Gmail labels than with a traditional IMAP client. I learned this system in some class about time management back in college (I think) and use it (along with scoring things from 1 to 10) for measuring the priority of things. It has also helped me with managing my email. Here’s how I do it:

    • Rank 0 (Important and Urgent): Needs to be attended to right away and is extremely time-sensitive. You shouldn’t have too many of these in your inbox! If you do, reconsider their importance and urgency.

    • Rank 1 (Important, but not Urgent): Needs to be attended to “soon” but is not time-sensitive. This gives a bit of a nudge to flagged inbox items.

    • Rank 2 (Urgent, but not Important): Doesn’t need to be attended to right away but is time-sensitive. These could be meetings or messages sent directly to you.

    • Rank 3 (Not Important or Urgent): These messages can (and should) be deleted or filed away; see below.

  • I mark messages sent to me directly using Gmail labels or automatic color assignment with rules. It is usually the case that messages sent just to me (i.e. messages where my email address is in the To: field, not messages sent to a group) are urgent and need to be responded to quickly. I usually use a bright color that stands out so that I can quickly identify these messages and do something about them.

  • I action every single email right away. Action doesn’t necessarily mean ‘immediate response’ (though if I can respond immediately, I will; “immediately” usually means 160 characters or less). This means that I either flag it for follow-up later, rank it for visibility, move/label it for archival or delete it. This is really important to me. The bigger my inbox gets with crap emails, the harder it gets to clean up, so I’m extremely strict about this.

  • I “delete” most things. I think this is the biggest thing keeping people from having clean inboxes (that and not caring enough, since most people don’t really care about this like I do lol). Everyone’s afraid of deleting something and needing it in the future, but out of the tens or hundreds of thousands of emails I’ve deleted over the years, I can count the number I’ve needed to recover on two hands, and even fewer of those were critical messages.

However, Gmail provides way more inbox space than most people will ever need in their lifetimes, so the smarter way of dealing with this is to archive into the “All Mail” bin instead of deleting. That way, messages are out of view but still there if they ever need to be recovered. This is the default action in just about every client out there, so you don’t even need to reconfigure anything!

That’s how I email! Here are some great plugins and add-ons that might help take this further:

  • Boomerang for Gmail. Delay sending emails until a certain time. Works really well for actioning on emails right away without having to wait. Get it here
  • Checker Plus for Chrome. Get rich notifications for every email. You can do just about everything I’ve typed above with this extension. It works great! Get it here
  • Multiple Inboxes for Gmail. This is a Labs feature in Gmail that allows you to see more than one folder alongside your Inbox. It’s really useful, especially if you rank emails. To enable it, go into Settings, then Labs, then check “Multiple Inboxes” and Save. After Gmail reloads, you can configure the filters that you want to see in Settings > Multiple Inboxes. Get it here

I hope this helps! Let me know what you think in the comments below!

About Me

I’m the founder of caranna.works, an IT engineering firm in Brooklyn that builds smart, cost-effective IT solutions that help new and growing companies grow fast. Sign up for your free consultation to find out how. http://caranna.works.

Technical Thursdays: DNS, or why using the Internet is kind of like going to Starbucks

This Thursday, we’ll talk about a system that has been extremely critical (and extremely taken for granted) for shaping the Internet as we know it: the domain name system, or DNS for short.

Before I explain what DNS is, I’ll talk about something I try really hard to hate but ultimately can’t: Starbucks.

I go to Starbucks at least once a day. Given that Google has more coffee machines (and baristas!) sitting idle than my handy downstairs Starbucks does on even their busiest days, this is slightly embarrassing to admit. I love their drinks, but as a recovering coffee snob, I passive-aggressively hate that I love their drinks. My relationship with that Seattle staple is kind of like how a lot of people feel about Taylor Swift: they’ll hate on her forever but will never admit to playing 1989 on repeat.

Wait, that’s just me?

Okay. I can live with that.

Anyway, what I find fascinating about Starbucks aside from their many variants of non-coffee coffee drinks (that are so good but so bad) is how baristas communicate drinks to each other. Somehow, someway, your order for a tall caramel-flavored latte with soy milk, whip cream and a double-shot of espresso is always a tall caramel whip redeye latte to every Starbucks barista on the planet, but trying that on a barista at Cafe Grumpy will usually get you banned for life.

What’s even more fascinating about this is that DNS works “exactly” the same way when you go to BuzzFeed.com on your phone or computer to endlessly browse lists of cat pictures and gifs of people doing funny things.

(Don’t pretend like you don’t.)

You probably know that underneath the lists and relationship videos, BuzzFeed is really a ton of servers doing lots of hard work to deliver this quality content, and buzzfeed.com is just one of the servers that shows them to you.

What you might not know is that the name of that server isn’t buzzfeed.com; it’s actually 54.241.35.79. That’s its IP address.

If you type those four (or eight) numbers into Chrome (or whatever your browser of choice is; I use Safari for reasons that won’t be discussed here to avoid an intense holy war), it’ll take you right to BuzzFeed.

How does your computer know that these two things go to the same place? The answer is DNS.

What Is This DNS Magic That You Speak Of?

DNS is a system that maps names like buzzfeed.com or Wikipedia.org to IP addresses. It was created in the early 1980s when the Internet was much much MUCH smaller and has been iterated and improved upon significantly since then. Here’s the original RFC that describes how it works, and surprisingly, a lot of it has held up over time!

These mappings are stored in records, and there are several kinds of them. The name-to-IP mapping that I described earlier is stored in an A record, but DNS can also have records for other mappings, like aliases that point at other names (CNAME records), the mail servers for a domain (MX records) or arbitrary text (TXT records).

When your computer attempts to find the IP address for a web site, its DNS client (also called a resolver) performs a DNS query. The response it gets back is the DNS response.

So original, I know.
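
If you want to watch a query and response happen for yourself, Windows 8 and later ship with a Resolve-DnsName cmdlet in PowerShell (on anything else, nslookup does the same job). A quick sketch, using buzzfeed.com as the example name:

Resolve-DnsName -Name buzzfeed.com -Type A     # the name-to-IP mapping (A records)
Resolve-DnsName -Name buzzfeed.com -Type MX    # the mail servers for the domain (MX records)
Resolve-DnsName -Name buzzfeed.com -Type TXT   # arbitrary text attached to the domain (TXT records)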

Dots and zones

The dots in a website URL are very important. Every word behind each dot is called a DNS domain, and every one of those words maps to something.

The last word in the URL, i.e. the .com, .org or .football, is called a top-level domain, or TLD. Every single one is maintained by the Internet Assigned Numbers Authority, or the IANA. In the early days of the simpler Internet, this used to give you an idea of what a website was for: .coms were for commercial use or companies, .orgs were for non-profits and foundations, .nets were for network providers and country-specific TLDs like .us or .it were for sites connected to those countries.

However, like most things from that time period, that’s gone completely out the window (do you think bit.ly is in Libya?).

Records within the DNS are broken up into zones, and servers within the DNS are responsible for upholding their zone. These zones are usually HUGE text files that get stored completely within that server’s memory for really fast access. When your computer sends a DNS query, the DNS server you’re configured to use will ask other servers if it doesn’t have the record it’s looking for stored anywhere. It does this by asking for a special record called the Start of Authority, or SOA, which tells it where to go next in its search.
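
You can look at a zone’s SOA record yourself with the same cmdlet from above (again assuming a Windows 8+ machine; dig or nslookup will do it elsewhere):

Resolve-DnsName -Name buzzfeed.com -Type SOA   # shows the zone's primary name server and other zone metadata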

DNS is so hot right now

Almost every single web site you’ve visited within the last 20 years or so has likely taken advantage of DNS. If you’re like me, that’s probably a lot of websites! Furthermore, many of the assets on those web sites (think: images and code for all of those fancy site effects) are referred to by name and resolved by DNS.

The Internet as we know it would not function without DNS. As of yesterday, the entire Internet comprised just over 1 BILLION unique web sites (and growing! exponentially!) used by over 3 BILLION people.

Now imagine all of that traffic being handled by a single Dell server somewhere in this vast sea of Internet.

You can’t? Good. Me neither.

DNS at WEB SCALE

So how does DNS manage to work for all of these people for all of these web sites? When it comes to matters of scale, the answer is usually: throw a metric crap ton of servers at it.

DNS is no exception.

The Root

There are a few layers of servers involved in your typical DNS query. The first and top-most layer starts at the DNS root servers. These servers are run by a handful of organizations coordinated through IANA and are used to tell you which servers own what TLDs (see below).

There are 13 root servers throughout the world, {A through M}.root-servers.net. As you can imagine, they are very, very, very powerful clusters of servers.

The TLD companies

Every TLD is managed by a company or registry. The DNS servers run by these registries contain the records for every website that uses their TLDs. In the case of bit.ly, for example, the records for bit.ly will live on a DNS server run by the registry for .ly, whereas the records for stupidsiteabout.football will be managed by Donuts, the registry behind .football.
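
You can see which name servers answer for a TLD by asking for its NS records. A small sketch with the same Resolve-DnsName cmdlet as before:

Resolve-DnsName -Name com -Type NS        # the servers responsible for all of .com
Resolve-DnsName -Name football -Type NS   # the registry's servers behind .football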

Whenever you buy a domain with GoDaddy, (a) you are doing yourself a disservice and need to get on Gandi or Hover right now, and (b) your payment gives you the ability to create records that eventually end up on these servers.

The Public Servers

The next layer of servers in the query are the public DNS servers. These are usually hosted by either your ISP, Google or DNS companies like Dyn or OpenDNS, but there are MANY DNS servers available out there. These are almost always the DNS servers that you use on a daily basis.

While they usually have cached copies of the records you’re looking for, they’ll refer to the root servers above if they’re missing anything. Also, because they are used more frequently than the root servers above, they are often more susceptible to people doing bad things, so the good DNS services implement lots of security enhancements to prevent those things from happening. Finally, the really big DNS services usually have MANY more servers available than the root servers, so your query will almost always be responded to quickly.
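
Most resolver tools also let you point a query at a specific public server, which is handy for comparing answers or working around a flaky ISP resolver. For example (the addresses below are Google Public DNS and OpenDNS):

Resolve-DnsName -Name buzzfeed.com -Server 8.8.8.8
Resolve-DnsName -Name buzzfeed.com -Server 208.67.222.222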

Your Dinky Linksys

The third layer involved in the queries most people make isn’t made up of real servers at all! Your home router most likely runs a small DNS server to help answer queries a lot faster. These don’t store a lot of records, and their DNS software is typically written pretty badly, so I often reconfigure these routers for my clients so that they use Google or OpenDNS instead.

Your job probably has DNS servers of its own to improve performance and to maintain internal and private records.

Your iPhone

The final layer of a query ends (well, starts) right at your phone or computer. Your computer’s DNS resolver will often store responses to common queries for a short period of time so that it doesn’t have to hit DNS servers any more often than necessary.

While this is often a very good thing, it can cause problems when records change. If you’ve ever tried to go to a website and were unable to, this is often one reason why. Fortunately, fixing this is as simple as clearing your DNS cache. In Windows, you can do this by clicking Start, then typing cmd /c ipconfig /flushdns into the search bar. Use these instructions to do this on your Mac or these instructions to do this on your iPhone or iPad.
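
On Windows 8 and later, PowerShell’s DnsClient module can also inspect and clear this cache for you:

Get-DnsClientCache      # see what your resolver has cached right now
Clear-DnsClientCache    # same effect as ipconfig /flushdns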

This is starting to get long and I’m in the mood for a caramel frap now, so I’m going to stop while I’m ahead here!

Did you learn something today? Did I miss something? Let me know in the comments!

Technical Thursdays: Calculate Directory Sizes Stupidly Fast With PowerShell.

Scenario

A file share that a group in your business is dependent on is running out of space. As usual, they have no idea why they’re running out of space, but they need you, the sysadmin, to fix it, and they need it done yesterday.

This has been really easy for Linux admins for a long time now: Do this

du -h / | sort -hr

and delete folders or files from folders at the top that look like they want to be deleted.

Windows admins haven’t been so lucky…at least those that wanted to do it on the command line (which is becoming increasingly important as Microsoft focuses more on promoting Windows Server Core and PowerShell).

dir sort-of works, but it only prints sizes on files, not directories. This gets tiring really fast, since many big files are system files, and you don’t want to be that guy that deletes everything in C:\windows\system32\winsxs again.

Doing it in PowerShell is a lot better in this regard (this version is adapted from Ed Wilson of The Scripting Guys):

function Get-DirectorySize ($directory) {
    # Recursively measure every file under $directory, then report the path, file count and total size in bytes.
    Get-ChildItem $directory -Recurse | Measure-Object -Sum Length | Select-Object `
        @{Name="Path"; Expression={(Get-Item $directory).FullName}},
        @{Name="Files"; Expression={$_.Count}},
        @{Name="Size"; Expression={$_.Sum}}
}
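
Calling it is as simple as passing a path (C:\Temp below is just an example; wrap the call in Measure-Command if you want to see how long it takes):

Get-DirectorySize -directory 'C:\Temp'
Measure-Command { Get-DirectorySize -directory 'C:\Temp' }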

This code works really well in getting you a folder report…until you try it on a folder like, say, C:\Windows\System32, where you have lots and lots of little files that PowerShell needs to (a) measure, (b) wait for .NET to marshal the underlying Win32 file information into a System.IO.FileInfo object, then (c) wrap into the fancy PSObject we know and love.

This is exacerbated further when you run this against a remote SMB or CIFS file share, which is the more likely scenario these days. In this case, Windows needs to make an SMB call to tell the endpoint on which the file share is hosted to measure the size of the directories you’re looking to report on. With CMD, once Windows gets this information back, CMD pretty much dumps the result onto the console and goes away. .NET, unfortunately, has to create System.IO.FileInfo objects for every single file in that remote directory, and in order to do that, it needs to retrieve extended file information.

By default, it does this for every single file. This isn’t a huge overhead when the share is on the same network or on a network with a low-latency, high-bandwidth path. It is a huge problem when that’s not the case. (I discovered this early in my career when I needed to calculate folder sizes on shares in Sydney from New York. Australia’s internet is slow and generally awful. I was not a happy man that day.)

Lee Holmes, a founding father of Powershell, wrote about this here. It looks like this is still an issue in Powershell v5 and, based on his blog post, will continue to remain an issue for some time.

This post will show you some optimizations that you can try that might improve the performance of your directory sizing scripts. All of this code will be available on my GitHub repo.

Our First Trick: Use CMD

One common way of sidestepping this issue is running dir /s in a hidden cmd window and doing some light string parsing, like this:

function Get-DirectorySizeWithCmd {
    param (
        [Parameter(Mandatory=$true)]
        [string]$folder
    )

    $lines = & cmd /c dir /s $folder /a:-d # Run dir in a hidden cmd.exe prompt and return stdout.

    $key = "" # We’ll use this to store the subdirectory we’re currently in.
    $fileCount = 0
    $dict = @{} # We’ll use this hashtable to hold our directory-to-size values.
    $lines | ?{$_} | %{
        # These lines have the directory names we’re looking for. When we see them,
        # remove the "Directory of" part and save the directory name.
        if ( $_ -match " Directory of.*" ) {
            $key = $_ -replace " Directory of ",""
            $dict[$key.Trim()] = 0
        }
        # ...until we encounter the line with the size of that folder, which always looks like "0+ File(s) 0+ bytes".
        # In this case, take that number and set it as the size of the directory we found before, then clear the key
        # so that the grand-total summary at the end of dir's output doesn't overwrite anything.
        elseif ( $key -and $_ -match "\d{1,} File\(s\).*\d{1,} bytes" ) {
            $val = $_ -replace ".* ([0-9,]{1,}) bytes.*","`$1"
            $dict[$key.Trim()] = [int64]($val -replace ",","")
            $key = ""
        }
        # Every other line is a file entry, so we’ll add it to our file count.
        else {
            $fileCount++
        }

    }
    $sum = 0
    foreach ( $val in $dict.Values ) {
        $sum += $val
    }
    New-Object -Type PSObject -Property @{
        Path = $folder;
        Files = $fileCount;
        Size = $sum
    }

}

It’s not true PowerShell, but it might save you a lot of time over high-latency connections. (It is usually slower on local or nearby storage.)

Our Second Trick: Use Robocopy

Most Windows sysadmins know about the usefulness of robocopy during file migrations. What you might not know is how good it is at sizing directories. Unlike dir, robocopy /l /nfl /ndl has two things going for it:

  1. It won’t list every file or directory it finds in its path, and
  2. It provides a little more control over the output, which makes it easier for you to parse when the output makes its way to your PowerShell session.

Here’s some sample code that demonstrates this approach:

function Get-DirectorySizeWithRobocopy {
    param (
        [Parameter(Mandatory=$true)]
        [string]$folder
    )

    $fileCount = 0
    $totalBytes = 0
    # /l lists what would be copied without copying anything, /nfl and /ndl suppress the per-file and
    # per-directory logging, /e includes subdirectories and /bytes prints sizes in bytes. The destination
    # is a throwaway path since nothing is actually copied.
    robocopy /l /nfl /ndl $folder \\localhost\C$\nul /e /bytes | ?{
        $_ -match "^[ \t]+(Files|Bytes) :[ ]+\d"
    } | %{
        $line = $_.Trim() -replace '[ ]{2,}',',' -replace ' :',':'
        $value = $line.Split(',')[1]
        if ( $line -match "Files:" ) {
            $fileCount = $value
        } else {
            $totalBytes = $value
        }
    }
    [pscustomobject]@{Path=$folder;Files=$fileCount;Bytes=$totalBytes}
}

The Target

For this post, we’ll be using a local directory with ~10,000 files that are about 1k to 10k in length (the cluster size on the server I used is ~8k, so each one really takes up about 8k to 16k on disk) and spread out across 20 directories. The code below will generate this for you:

$maxNumberOfDirectories = 20

$maxNumberOfFiles = 10000
$minFileSizeInBytes = 1024
$maxFileSizeInBytes = 1024*10
$maxNumberOfFilesPerDirectory = [Math]::Round($maxNumberOfFiles/$maxNumberOfDirectories)

for ($i=0; $i -lt $maxNumberOfDirectories; $i++) {
    mkdir "./dir-$i" -force

    for ($j=0; $j -lt $maxNumberOfFilesPerDirectory; $j++) {
        $fileSize = Get-Random -Min $minFileSizeInBytes -Max $maxFileSizeInBytes
        $str = 'a'*$fileSize
        echo $str | out-file "./file-$j" -encoding ascii
        mv "./file-$j" "./dir-$i"
    }
}

I used values of 1000 and 10000 for $maxNumberOfFiles while keeping the number of directories at 20.
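
If you want to reproduce numbers like the ones below on your own machine, a small Measure-Command harness works; this is just a sketch that assumes the three functions above are already loaded and that you’re sitting in the test directory:

$target = (Get-Location).Path
foreach ($function in 'Get-DirectorySize','Get-DirectorySizeWithCmd','Get-DirectorySizeWithRobocopy') {
    $ms = (Measure-Command { & $function $target }).TotalMilliseconds
    Write-Host ("{0}: {1:N0}ms" -f $function, $ms)
}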

Here’s how we did:

                                  1k files    10k files
Get-DirectorySize                 ~60ms       ~2500ms
Get-DirectorySizeWithCmd          ~110ms      ~3600ms
Get-DirectorySizeWithRobocopy     ~45ms       ~85ms

I was actually really surprised to see how performant robocopy was. I believe that cmd would be just as performant if not more so if it didn’t have to do as much printing to the console as it does.

/MT isn’t a panacea

The /MT switch tells robocopy to split the copy job it’s given amongst several child robocopy instances. One would think that this would speed things up, since the only thing faster than robocopy is more robocopy. It turns out that this was actually NOT the case, as its times ballooned up to around what we saw with cmd. I presume that this has something to do with the way those child jobs are pooled, or with each process logging to its own stdout buffer.

TL;DR: Don’t use it.

A note about Jobs

PowerShell Jobs seem like a tempting option. Jobs make it very easy to run several pieces of code concurrently. For long-running scriptblocks, Jobs are actually an awesome approach.

Unfortunately, Jobs will work against you for a problem like this. Every PowerShell Job spins up a new PowerShell session with its own PowerShell process. Each runspace within that session will use at least 20MB of memory, and that’s without modules! Additionally, you’ll need to start every Job serially, which means that the time spent just starting each job could very well exceed the amount of time it takes robocopy to compute your directory sizes. Finally, if you use cmd or robocopy to compute your directory sizes, every job will invoke its own copies of cmd and robocopy, which will further increase your memory usage for, potentially, very little benefit.
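
If you want to see that startup overhead for yourself, here’s a tiny (and admittedly unscientific) sketch: twenty jobs that do absolutely nothing still take a surprisingly long time to spin up and tear down.

Measure-Command {
    $jobs = 1..20 | ForEach-Object { Start-Job -ScriptBlock { } }   # 20 do-nothing jobs
    $jobs | Wait-Job | Remove-Job
}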

TL;DR: Don’t use Jobs either.

That’s all I’ve got! I hope this helps!

Do you have another solution that works? Has this helped you size directories a lot faster than before? Let’s talk about it in the comments!

About Me

I’m the founder of caranna.works, an IT engineering firm in Brooklyn that builds smart, cost-effective IT solutions that help new and growing companies grow fast. Sign up for your free consultation to find out how. http://caranna.works.

One weird trick that might make your MacBook less janky

A Macbook In Perfect Condition

I was trying to put a bunch of slides together today but had a lot of trouble doing it because my Mac would freeze up every minute or so for about 10-15 seconds. If you’ve ever tried mowing a lawn with no gas, you kind of know how this feels. It was infuriating.

In search of anything that might improve the state of things, I stumbled upon this interesting solution that seems to have made the slowness go away!

If your Mac is freezing up or acting slow in general, give this a try:

  1. Open a Terminal by holding Command (⌘) and Space, typing “Terminal” then hitting Enter.

  2. When the Terminal starts up, type in (or copy and paste): sudo rm /Library/Preferences/com.apple.windowserver.plist. Type in your password when prompted; this is safe.

  3. When that finishes, type in (or copy and paste): rm ~/Library/Preferences/ByHost/com.apple.windowserver*.plist. The terminal might say that there is “no such file or directory;” that is normal (this means that it couldn’t find some files).

  4. When that finishes, shut down your MacBook, then turn it on again and immediately press and hold Command (⌘), Option (⌥), P and R before the Apple logo comes up. This will reset some hardware configuration data, which isn’t critical. (None of your files are affected.) If you did it right, your screen might flicker once. After that happens, release the keys.

Try it and let me know what you think!

About Me

I’m the founder of caranna.works, an IT engineering firm in Brooklyn, NY that employs time-tested and proven solutions that help companies save lots of money on their IT costs. Sign up for your free consultation to find out how. http://caranna.works.

Technical Tuesdays: Powershell Pipelines vs Socks on Amazon

In Powershell, a typical, run-of-the-mill pipeline looks something like this:

Get-ChildItem ~ | ?{$_.LastWriteTime -lt $(Get-Date 1/1/2015)} | Format-List

but really looks like this when written in .NET (C# in this example):

PowerShell powershellInstance = PowerShell.Create();
RunspaceConfiguration runspaceConfig = RunspaceConfiguration.Create();
Runspace runspace = RunspaceFactory.CreateRunspace(runspaceConfig);
powershellInstance.Runspace = runspace;
try {
    runspace.Open();

    Command getChildItem = new Command("Get-ChildItem");

    Command whereObjectWithFilter = new Command("Where-Object");
    ScriptBlock whereObjectFilterScript = ScriptBlock.Create("$_.LastWriteTime -lt $(Get-Date 1/1/2015)");
    whereObjectWithFilter.Parameters.Add("FilterScript", whereObjectFilterScript);

    Command formatList = new Command("Format-List");

    Pipeline pipeline = runspace.CreatePipeline();
    pipeline.Commands.Add(getChildItem);
    pipeline.Commands.Add(whereObjectWithFilter);
    pipeline.Commands.Add(formatList);

    Collection<PSObject> results = pipeline.Invoke();
    foreach (PSObject result in results) {
        Console.WriteLine(result);
    }

    // Non-terminating errors land in the pipeline's error stream as ErrorRecords.
    foreach (object error in pipeline.Error.ReadToEnd()) {
        PSObject perror = error as PSObject;
        ErrorRecord record = perror != null ? perror.BaseObject as ErrorRecord : null;
        if (record != null) {
            Console.WriteLine(record.Exception.Message);
            Console.WriteLine(record.FullyQualifiedErrorId);
        }
    }
} catch (RuntimeException e) {
    Console.WriteLine(e.Message);
} finally {
    runspace.Close();
}

Was your reaction something like:

WUT

Yeah, mine was too.

Let’s try to break down what’s happening here in a few tweets.

Running commands in Powershell is very much like buying stuff from Amazon. At a really high level, you can think of the life of a command in Powershell like this:

  • You’re in the mood for fancy socks and go to Amazon.com. (This would be equivalent to the runspace in which Powershell commands are run.)

  • You find a few pairs that you like (most of them fuzzy and warm) and order them. (This would be the cmdlet that you type into your Powershell host (command prompt).)

  • Amazon finds those socks in their massive warehouse and begins packaging them. (This is akin to finding the definition of Get-ChildItem in a .NET library loaded into your runspace and, when found, wrapping it into a Command object, with the fuzziness and color of those socks being its Parameter properties.)

  • Amazon then puts that package into a queue in preparation for shipment. (In Powershell, this would be like adding the Command into a Pipeline.)

  • Amazon ships your super fuzzy socks when ready. (Pipeline.Invoke()).

  • You open the box the next day (you DO have Prime, right?!) and enjoy your snazzy feet gloves. (The results of the Pipeline get written to the host attached to its runspace, which in this case would be the Powershell host/command prompt.)

  • If Amazon had issues getting the socks to you, you would have gotten an email of some sort with a refund + free money and an explanation of what happened (In Powershell, this is known as an ErrorRecord.)

And that’s how Microsoft put the power of Amazon on your desktop!
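
If you’d rather poke at these objects without writing any C#, you can drive the same machinery from PowerShell itself via the System.Management.Automation API. Here’s a minimal sketch (the path and date below are just the ones from the example above):

$ps = [PowerShell]::Create()   # your trip to Amazon.com; a fresh runspace comes along for the ride
$null = $ps.AddCommand('Get-ChildItem').AddParameter('Path', $HOME)   # the socks you ordered, wrapped into a Command
$null = $ps.AddCommand('Where-Object').AddParameter('FilterScript', { $_.LastWriteTime -lt (Get-Date '1/1/2015') })
$results = $ps.Invoke()        # Amazon ships the package (a Pipeline gets invoked under the hood)
$ps.Streams.Error              # any ErrorRecords (the apology email) end up in here
$ps.Dispose()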

Has the Powershell pipeline ever saved your life? Have you ever had to roll your own runspaces and lived to talk about it? (Did you know you can use runspaces to make multithreaded Powershell scripts? Not saying that you would…) Let’s talk about it in the comments below!

About Me

I’m the founder of caranna.works, an IT engineering firm in Brooklyn, NY that employs time-tested and proven solutions that help companies save lots of money on their IT costs. Sign up for your free consultation to find out how. http://caranna.works.

If Your Business Still Uses Servers, You’re (Probably) Doing It Wrong

Your servers are useless, and you should sell them.

Many businesses, small and large, buy servers for the wrong reasons. Some businesses want a server for an application they wrote. Some others want to keep their data “private.” Others still want servers for “better speed.”

All of these reasons are wrong. There are only three reasons I can think of that justify the purchase of physical servers (feel free to list more in the comments!):

  1. A regulator your business is beholden to requires it,
  2. Your app really does need that kind of performance (read on to find out if this is you), and
  3. You have a strong passion for burning money.


You see, when you buy servers from Dell or the like, you’re not *just* buying servers. Servers come with a ton of overhead that’s hard to see coming if you don’t buy them often enough:

  1. You’ll need to buy a support plan for when those servers decide to go on vacation during your business hours (which they will), or you pay people like me to support them (which I’m happy to do! http://caranna.works for a free consultation!),
  2. Servers need to be stored somewhere cool and not too dusty, and, more importantly, they need dedicated cooling once you have more than a couple of them, and
  3. Servers need A LOT of power (though they use less than they used to), and ideally that power is clean (which most office buildings have, thankfully).



The Cloud is not a fad.

A lot of people make fun of “the cloud,” and rightfully so; drinking games have been made out of keynotes that abused the word endlessly. Debauchery aside, “the cloud” as we know it is, from a 35,000 foot view, a collective of servers that themselves host hundreds of virtualized servers of varying sizes created by millions of people and companies. (Curious about virtualization? Keep an eye out for my post “Yes, you can have a computer in your computer” coming out tomorrow!). Instead of buying a server from Dell or HP and worrying about the above, you create a virtual server on a cloud, do what you need to do and pay for the time, storage and network bandwidth that you use.
Servers in a cloud usually cost anywhere from $0.02/hr for really basic machines to over $2/hr for really, really fast workhorses with tons of memory. What’s more incredible than these incredibly generous prices is what you get with your purchase:

  • Your servers are backed up and “copied” between many other servers in the same region (nearly every cloud service has datacenters spread out across the world), which nearly guarantees that they will always be available when you need them,
  • 24/7 monitoring of nearly anything you can think of,
  • Programming libraries that make it extremely easy for your developers to create new servers in minutes instead of days,
  • Extremely fast networking that you never need to worry about or take care of, and
  • Handfuls of additional services that save you a LOT of time and money, like
    • Databases for your app or business that are instantly available 24/7,
    • Web services for hosting your apps that can handle one user or 10 million users with ease, or
    • Clusters of extremely fast storage for things like photos and videos that are nearly always available



The Cloud Saves You Money

To drive the point home, let’s run through a real-life example of a use case where the cloud might be an appropriate fit.
Let’s say that you run a small individual accounting firm. Your six accountants are dependent on QuickBooks, TurboTax, Office and Windows. Business is doing well and you’d like to plan for an upcoming expansion.
In most cases, this will require putting all of the machines behind Active Directory (it is significantly more difficult to manage individual Windows machines without it), putting your printer(s) behind a print server and putting your TurboTax and QuickBooks customer files on some kind of storage that’s easy for everyone to access.
To do everything in house, you’ll need:

  • One machine to serve as a domain controller and key management (license) server for new Windows installations,
  • One machine to serve as the print server (you could use the domain controller as the print server, but this will cause problems later down the road), and
  • Two cheap (but not too cheap) network-attached storage (NAS) devices for that shared storage (one for backup)

To do this, you should plan on spending, at minimum:

  • $1500 for a Dell PowerEdge R220 (which will host the domain controller and your print server) +
  • $200 for a switch to connect those servers and your machines to (your $50 Linksys will not cut it for your expansion) +
  • $600 for one Windows Server 2012 standard license (which will cover the server and the two virtual machines hosted on top of it) +
  • $800 for the two NAS devices =
  • $3100 total + power costs


This doesn’t factor in the costs of email or computers; we’ll assume that the computers are sunk costs and you’re already paying for Google Apps or Office 365.
This may not be a lot depending on how well your business is doing, but let’s compare the cost of doing this on Microsoft Azure or Amazon Web Services:

  • $30/month for the domain controller (assuming an A3 instance, which should be enough for a domain controller and a few hundred machines in a single site) +
  • $15/month for the print server (assuming an A1 instance, since print servers don’t require much horsepower) +
  • $25/month for 1TB cloud storage +
  • $400 for one NAS device =
  • $70/month ($840/year) + $400 one-time cost

(Prices for resources on Amazon Web Services are similar.)

Moving this business into the cloud will not only save them hundreds of dollars per month in power costs, but will also save them thousands of dollars per year in hardware repair and depreciation costs! Another good thing about cloud services is that they are all pay-as-you-go; if you ever decide that cloud isn’t for you, you can cancel whenever you want with no early termination fees.

Trying It Out Is Risk-Free

Microsoft and Google give new users $200 and $300 in credits, respectively, to try their services out with no limitations. Amazon offers a year-long free trial, but only for their most basic service level (which I’ve found inadequate for all but the most basic workloads). All of them are great, and getting started on any of them is pretty easy.

Try Azure here: https://azure.microsoft.com/en-us/pricing/free-trial/
Try AWS here: http://aws.amazon.com/free/
Try Google Cloud Platform here: https://cloud.google.com/free-trial/index

What was your physical to cloud transition story? Is there anything holding you back from trying the cloud? Leave a comment below!