Matchbox Twenty at Jones Beach

It’s okay to make fun of Matchbox Twenty. Although they were originally described as radio-friendly grunge, they’re a mainstream rock band. In fact, they named their greatest hits album Exile on Mainstream — both a nod to the Rolling Stones double album Exile on Main Street and a joke about how they’ve been “banished” to middle-of-the-road massive commercial success.

I’ve always loved Matchbox Twenty, but I didn’t really think about why until Niki and I saw them at Jones Beach last week.

I’ve met Rob Thomas twice — both times while eating in Westchester. I’ve seen his solo tours at Live 8 (which we live-blogged for AOL in 2005), at the Beacon Theater from the front row, and again at Jones Beach. And we’ve seen Matchbox Twenty three or four times. Rob’s duet with Jewel on Stop Draggin’ My Heart Around and his surprise appearance with Santana for Smooth were two of my all-time favorite Jones Beach memories — and I’ve seen more than 50 shows there.

But when Rob said he was going to take us all back to 1996 for a few songs from their debut album it finally clicked.

Along with Alanis Morissette and Gin Blossoms, Matchbox Twenty was the soundtrack of the year I met Niki.

Seventeen years of Niki coincides with seventeen years of Matchbox Twenty.

That’s a great start.

August 17, 2013 setlist:

Parade
Bent
Disease
She’s So Mean
How Far We’ve Come
3 AM
Real World
If You’re Gone
Long Day
I Will
Unwell
Radio
So Sad So Lonely (my favorite song of the night)
English Town
Bright Lights

Encore:
Jumpin’ Jack Flash (it was better than you’d think)
Back 2 Good (where drummer Stacy Jones switched instruments with guitarist Paul Doucette, who played drums on the first three Matchbox Twenty albums)
Push

Scalped!

Yesterday morning I had an appointment with a surgeon to check out a lump on the top of my head. For some reason I figured the doctor would inspect me and then schedule its removal for a later date. So I was in a bit of shock when he wheeled over a tray of tools, had me change into a hospital gown and started cutting my hair.

He said he was 99% sure it was just a harmless sebaceous cyst, but they’d biopsy it just to be sure.

When I got home, my 7-year-old was eager to check out my new bald spot and stitches. After inspecting them, he told me I should wear a hat so I don’t scare his brother and sister.

Update: It was harmless. I’m all good. Thanks.

Pivoting

My 16-month absence from blogging coincided with a major career transition.

At the end of 2011, I decided to change my startup in three major ways. I’ll explain what those three changes were soon; for now, just know that to pull them off we acquired another company, merged the two companies, kept their name, and kept their CEO.

It might surprise people who work in big, slow companies to know that changing the course of a 35-person startup isn’t easy — especially when you’ve got famous customers who rely on your product and your team, and who have a daily audience of millions of people.

The good news is that our pivot was a success.

The bad news is that I miss being a startup CEO, actually building something instead of helping my old team from the outside. I’ve been describing this situation to people as being a ghost in your own house.

But let’s end on some good news: very soon, I won’t be a ghost anymore.

Pixar Does Marvel

C.K. Sample linked to this gorgeous picture of Tony Stark which showed how the Iron Man movie would look if it had been done by Pixar instead of Marvel. It made me wonder what Marvel movies would be like if Pixar made all of them, so I speculated in a bunch of tweets using the hashtag #PixarDoesMarvel.

Here they all are:

  • THOR STORY: When Beta Ray Buzz crash lands on earth, Thor and his warrior friends help him defeat evil emperor Grog.
  • A BUG’S WIFE: When Spider-Man is captured by the Sinister Six, Mary Jane Watson worries that he’ll miss their wedding.
  • MUTANTS, INC.: When Magneto captures Banshee to harness the power of his screams, Professor X’s students must save the day.
  • FINDING BARON NEMO: Captain America and amnesia-stricken Prince Namor search the globe for Baron Nemo during World War 2.
  • THE INCREDIBLES: It’s a movie about the Fantastic Four and this time Pixar doesn’t have to pretend that it isn’t.
  • SECRET CARS: Heroes and villains become cars and are forced to race each other on a planet created by the Toyota Beyonder.
  • GAMMATOUILLE: Chef Bruce Banner has a secret. He can only make his famous soup when he turns into the Hulk.
  • WOLV-E: He’s the best there is at what he does, and what he does is carve Sentinels into scraps in this alternate future.
  • UP NORTH: Canadian super team Alpha Flight travels to Paradise Falls after Snowbird is kidnapped by a discredited explorer.

Why yes, I do have three young children. How did you know?

Get out of my dreams, get into my car

This morning I turned off my alarm and fell back asleep for about 20 minutes. I snapped awake after a crazy dream and was lucky to make it to a 9am meeting.

First, I’d like to thank my subconscious for waking me up without my alarm.

Second, I’d like to tell my subconscious that this was a bizarre way to wake me up.

In my dream, Jason Calacanis was standing next to me in his usual pink collared dress shirt while I sat in front of a giant media PC and edited audio files. We were working on the latest audio track for one of our customers. Jason and I were back in a startup again and this time our business was making jingles for radio commercials. Our customer was a skin cancer clinic.

Jason: Think of some popular Spin Doctors songs. We can use one of those.

Me: So like Spin Doctors instead of Skin Doctors?

Jason hated when I made obvious puns, so why was he making one? [Editor’s note: this was your mind telling the story here, Brian.] Jason had preferred the name “Engadget” over my “Gear Eye”; Queer Eye for the Straight Guy was popular then, but he said that one day it would be an obscure reference. He was right.

Jason: Right. We could use a Spin Doctors song. I know the lead singer! He and I…

Me: …played chess on a cruise around New York City for an hour. Like twenty years ago. I know. I don’t like it. The best “skin” angle I can think of is “Two Pinches” instead of “Two Princes.” That sucks.

Jason: What if we do it like they’re the A-Team? A bunch of mercenary doctors in scrubs with skin cancer blasting weapons.

Me: Maybe. “I love it when a cure comes together!” Hey. What about Ghostbusters? There’s a team with weapons. And we could use the Ghostbusters theme! “Who you gonna call? Growthbusters!”

Jason: I love it.

Me: “I ain’t afraid of no growths!” Wow. I love it too.

Jason: The song is old. We can get rights to use it cheap. We just have to find Billy Ocean.

Me: Billy Ocean?

Jason: He did Ghostbusters.

Me: No he didn’t.

Jason: Yes he did. He did the Ghostbusters theme and the one about Get Into My Car and Private Booty Queen.

Me: Private Booty Queen?

Jason: Yeah (singing) “Private Booty Queen, now we’re sharing the same dream…”

Me: That’s Caribbean Queen!

Jason: Sounds like Private Booty Queen to me.

[Editor’s note: Seriously, listen to the song again. This is an easy mistake to make.]

Me: I don’t even have to Google it. It’s Ray Parker, Jr. He did Ghostbusters.

And then suddenly I was awake. [Editor’s note: You wake up suddenly?]


UPDATE: I have been informed that I have blogged about my Jason dreams before. Very oddly, I used the exact same Billy Ocean song as inspiration for the title of that post too: Get out of my dreams, get into my church. It was nearly five years ago and somehow the awesome domain name Kabbalahster.com is still available.

Quitting web design

Jeffrey Zeldman posted an interesting reader letter, called Letter of the Month.

The author wrote Jeffrey to explain that Jeffrey’s book Designing With Web Standards had changed his entire career.

I remember what I now refer to as a pivotal moment in my web career. I was sitting in bed reading your book. I knew nothing really about CSS (other than for setting fonts/colours and I couldn’t see what was wrong with the old way of doing things). However as I read, it was like a slow realisation. I remember vividly turning to my wife and saying “This book is amazing. I am going to have to relearn everything I know about building websites”. I didn’t know whether to laugh or cry. On one hand the enormity of what you were suggesting was overwhelming but on the other hand it was just the injection I needed in my own career.

I reached the end of the book and made a decision. I was going to move the whole of Headscape across to standards based design. Not only that, but I was going to do it as soon as possible. By 2005 we had made the transition and have never looked back.

The author was Paul Boag, whose Boagworld podcasts I’ve enjoyed. As the author was revealed, it felt like reading some of Brando’s fan mail — signed by a young Al Pacino.

It reminded me that I had the opposite reaction with Jeffrey.

When I’ve talked to people recently about working on several projects with Jeffrey when Happy Cog was a one-man shop, I’ve said something like “I was a designer and Zeldman was a designer. But he had a book, so I did the server-side work.” That’s an oversimplification of course. Front-end web design work in the days of Netscape 4 and the Great Browser Wars was a nightmare. Working in SQL stored procedures, ASP, and VBScript was harder than HTML in many ways, but 100 times easier than dealing with browser compatibility. All the code I wrote was guaranteed to run the same way over and over, regardless of the visiting browser.

So thanks in part to working with Jeffrey, I quit doing web design and never looked back.

Surviving Amazon’s Cloudpocalypse

Two weeks ago, our Crowd Fusion team was right in the middle of the big cloud outage at Amazon. All of the big brands using our platform run on Amazon servers.

George Reese from O’Reilly had the best early recap and perspective of the dozens of stories I read:

If you think this week exposed weakness in the cloud, you don’t get it: it was the cloud’s shining moment, exposing the strength of cloud computing.

In short, if your systems failed in the Amazon cloud this week, it wasn’t Amazon’s fault. You either deemed an outage of this nature an acceptable risk or you failed to design for Amazon’s cloud computing model. The strength of cloud computing is that it puts control over application availability in the hands of the application developer and not in the hands of your IT staff, data center limitations, or a managed services provider.

Here are some excerpts from our story in the trenches:

After we informed the Amazon representative that we had failed over to the West Coast and that we no longer needed this running instance, he urged us to decommission all the US East instances that we were not using in order to free capacity in that region.

He was impressed that we had successfully failed over to the US West region when so many others were still down and said: “You were one of the very few to have a West Coast contingency plan and recover quickly. Bravo.”

Read our very detailed Crowd Fusion cloudpocalypse story here.

Weathering The Storm In Amazon’s Cloud

[This post originally appeared on the Crowd Fusion website, which has been replaced by ceros.com.]

As a company that works with several enterprise customers in Amazon’s cloud, Crowd Fusion would like to remind everyone what life outside of the cloud can be like. Remember in 2007, when a Texas pickup truck rammed into a Rackspace data center and took a large part of the internet offline for three hours? That was the result of one truck.

We are much happier living in the cloud than we were back in the days we were using traditional servers. There are multiple benefits to the cloud that are attractive to both us and our customers, including the ability to launch a multitude of servers when traffic starts to heat up for a breaking news story.

However, we are also intimately aware that to live in the cloud, you have to be more diligent about architecting and planning for failure. There aren’t any trucks that are going to run into your data centers, but there are days like Thursday, April 21, 2011, when Amazon Web Services experienced escalating problems in one of their regions.

The Morning

We experienced EBS failure early Thursday morning. As luck would have it, this only affected 4 of our 28 EBS-based instances, and none of those instances were single points of failure. As a result, none of our clients (like TMZ, Tecca, and News Corp’s The Daily) experienced any downtime during the early parts of Amazon’s issues. We run multiple accounts and Amazon rotates the availability zone names per account, so it’s unclear how many of those instances were in the one affected availability zone. We suffered temporarily degraded performance until we removed those instances from our application’s connection pool.
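
As a rough sketch of the idea (not our actual platform code, which isn’t shown here; the hostnames, credentials, and helper below are hypothetical), pulling degraded slaves out of a read pool can look something like this:

```python
# Hypothetical illustration only: hostnames, credentials, and helper names are
# placeholders, not Crowd Fusion's actual code.
import pymysql

SLAVE_HOSTS = ["db-slave-1.example.internal", "db-slave-2.example.internal"]

def is_healthy(host, timeout=2):
    """Return True if the slave answers a trivial query within the timeout."""
    try:
        conn = pymysql.connect(host=host, user="app", password="secret",
                               connect_timeout=timeout)
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT 1")
        finally:
            conn.close()
        return True
    except pymysql.MySQLError:
        return False

# Keep only responsive slaves in the read pool; traffic concentrates on the
# survivors (temporarily higher utilization, but no downtime).
read_pool = [h for h in SLAVE_HOSTS if is_healthy(h)]
```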

At roughly 10:30am EDT, an Amazon representative via Amazon Gold support indicated that we would be unable to provision new instances in any US East availability zone due to EBS API queues being saturated with requests. We asked whether we could expect our currently running EBS instances to fail, or whether we were simply unable to use the EBS API to create, restore, and back up volumes. Our rep answered, “Currently running instances are not affected. This only affects the ability to restore and launch.” We were still 100% up but on less hardware, so we accepted temporarily higher utilization of our hardware while the EBS API issues were being resolved at Amazon. Amazon asked us to disable all our EBS API calls in order to help alleviate their queue problem.

The advice we received from our Gold Support was corroborated by an Amazon Health Status update at 11:54am EDT:

We’d like to provide additional color on what we’re working on right now (please note that we always know more and understand issues better after we fully recover and dive deep into the post mortem). A networking event early this morning triggered a large amount of re-mirroring of EBS volumes in US-EAST-1. This re-mirroring created a shortage of capacity in one of the US EAST-1 Availability Zones, which impacted new EBS volume creation as well as the pace with which we could re-mirror and recover affected EBS volumes. Additionally, one of our internal control planes for EBS has become inundated such that it’s difficult to create new EBS volumes and EBS backed instances. We are working as quickly as possible to add capacity to that one Availability Zone to speed up the re-mirroring, and working to restore the control plane issue. We’re starting to see progress on these efforts, but are not there yet. We will continue to provide updates when we have them.

Contingency Planning

In the past, we have seen Amazon EBS volumes suffer degraded performance, often during EBS snapshot operations. During those times, we have had to disable EBS snapshots, and sometimes pull MySQL slave databases from our application’s connection pool or promote a MySQL slave to a MySQL master. After the second time this happened, we decided it was prudent to have a replicating MySQL slave in the US West region as a contingency plan.
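
A minimal sketch of what pointing that US West slave at the US East master looks like, assuming standard MySQL replication (the hosts, credentials, and binlog coordinates below are placeholders, not our actual configuration):

```python
# Illustrative only: hosts, credentials, and binlog coordinates are placeholders.
import pymysql

west_slave = pymysql.connect(host="db-west.example.internal",
                             user="admin", password="secret")
with west_slave.cursor() as cur:
    # The binlog file/position would come from the backup the slave was seeded with.
    cur.execute("""
        CHANGE MASTER TO
            MASTER_HOST = 'db-east-master.example.internal',
            MASTER_USER = 'repl',
            MASTER_PASSWORD = 'repl-secret',
            MASTER_LOG_FILE = 'mysql-bin.000123',
            MASTER_LOG_POS = 4
    """)
    cur.execute("START SLAVE")  # begin replicating across regions
west_slave.close()
```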

We had a disaster recovery process in place for spinning up our entire infrastructure on Amazon’s US West region in under an hour.
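
The spirit of that process, sketched here with boto3 purely for illustration (the AMI IDs, roles, and instance type are hypothetical, and our actual scripts aren’t reproduced in this post), is to launch the same server roles from pre-built images in the US West region:

```python
# Illustrative only: AMI IDs, roles, and instance type are placeholders.
import boto3

ROLE_AMIS = {
    "web": "ami-11111111",
    "app": "ami-22222222",
    "db":  "ami-33333333",
}

ec2_west = boto3.client("ec2", region_name="us-west-1")

for role, ami in ROLE_AMIS.items():
    ec2_west.run_instances(
        ImageId=ami,
        InstanceType="m1.large",  # 2011-era size shown for flavor
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": role}],
        }],
    )
```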

Loss of Master Database

At 12:45pm EDT, one of our customers reported having problems posting in their CMS. At 12:50pm EDT, their MySQL master database went to 100% CPU I/O wait and was unavailable. For a customer whose business is publishing stories ahead of their competition, this outage was mission critical.

Normally, our immediate response would be to promote a slave and continue to operate in the US East region. But because there was no indication from Amazon that these issues weren’t spreading across all EBS volumes in all US East availability zones, we decided the best course of action was to fail over to the US West region. Less than 45 minutes later, our customer was back online. Roughly an hour after that, things were stable enough that they could resume posting breaking news.
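
The promotion step itself is conceptually simple; a hypothetical sketch (hosts and credentials are placeholders, and the actual procedure isn’t detailed here) looks like this, followed by repointing the application’s master connection at the US West host:

```python
# Illustrative only: host and credentials are placeholders.
import pymysql

west = pymysql.connect(host="db-west.example.internal",
                       user="admin", password="secret")
with west.cursor() as cur:
    cur.execute("STOP SLAVE")                # stop applying events from US East
    cur.execute("SET GLOBAL read_only = 0")  # allow writes from the CMS
west.close()
# The application's master connection is then switched to the US West host
# (config push, DNS change, or similar).
```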

Amazon Recovery Call

The next day, on Friday at 3:30pm EDT, our client’s master database in the US East region finally recovered. It had been at 100% CPU I/O wait for over 26.5 hours. That’s 25 hours after we had our customer back up and running on the West Coast. Two hours after the database recovered, we received a follow-up phone call from Amazon support informing us that our EBS volume had recovered.

In an attempt to confirm the functionality of their recovery process, we were asked if our volume had been recovered to the state it was in before it was lost. It was. We had not lost any data.

After we informed the Amazon representative that we had failed over to the West Coast and that we no longer needed this running instance, he urged us to decommission all the US East instances that we were not using in order to free capacity in that region.

He was impressed that we had successfully failed over to the US West region when so many others were still down and said: “You were one of the very few to have a West Coast contingency plan and recover quickly. Bravo.”

Plan for Failure

When designing large-scale web applications, if you are not designing for failure at every piece of infrastructure, it’s not a matter of if you’ll fail, it’s a matter of when. This is not specific to the cloud, but the cloud makes planning for failure more essential.

As much as we hate to admit it, the cloud is simply more susceptible to failure than dedicated hardware. There are many reasons for this, but the most important one is complexity. There are just more moving parts. There are layers of virtualization, there are multiple tenants, and there are APIs developed by the cloud providers for the purpose of programmatically controlling hardware resources. The major advantages of this complexity, more flexibility and more cost efficiency, far outweigh the drawbacks.

We were downright lucky we had a US West contingency plan. It is an expensive endeavor to have multiple mirrored instances running on the more expensive coast just in case everything goes down. We hoped we’d never have to actually use it, but in hindsight, it was the best possible solution to the situation, and a solution we will continue to use in the future. And like most companies affected by this outage, we already have many planned improvements to our application’s tolerance for failure.

Amazon EBS

Amazon’s EBS technology continues to be the best AWS cloud solution for MySQL database storage. For larger instances, the performance difference compared to an instance’s local drive is substantial. The EBS snapshot feature allows us to create backups without putting additional strain on our instances, and the restoration period is shorter.
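
For illustration, a snapshot backup is essentially a single API call against the volume; the boto3 usage, volume ID, and description below are placeholders rather than our actual backup scripts:

```python
# Illustrative only: volume ID and description are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="nightly MySQL data volume backup",
)
print(snapshot["SnapshotId"], snapshot["State"])
```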

The major disadvantages of using EBS are performance degradation and reliability concerns. Like all things in the cloud, as long as you plan for the failure of an EBS volume, it is still the best possible option.

It’s been argued that the sites that didn’t go down don’t rely on EBS, but relying on any one piece to be 100% available is still a single point of failure you have to plan for. During one of our EBS failures, we were actually able to restore functionality within minutes by restoring data to the local disk instead of EBS.

We’re sticking with Amazon

Amazon is the only cloud provider that allows us to fail over to another region without extensive effort, and they have growing geographical coverage for even more failover options.

Amazon is also leading the pack with more cloud services like SQS, CloudFormation, S3, SimpleDB, SNS, EMR, RDS, etc. As other cloud providers advance their features and options, the choice won’t be as easy.

Update: Amazon finally issued their apology with a summary of the outage details.