Monthly Archives: August 2010

And once again codebits is being prepared. For those that don’t know it, codebits is an event my company has organized for 3 years. It’s a very special event. Its main feature is a 24-hour programming contest. It’s not meant for marketing or for people in management, although they are also welcome; its main target is the young developers, the hackers, the ones who are innovating and pushing the technological envelope 🙂

It will be held in Lisbon on the 11th, 12th, and 13th of November, and it’s a non-stop event. The organization provides food, drink, and lots of activities to keep you busy.

During the first afternoon and the morning of the second day we will have workshops and talks, and then at noon on the 12th the programming contest starts. It’s a team effort: you team up with one or more people, come up with an idea, and then implement it during the next 24 hours 🙂

Sounds like fun? Yes it does, and I can tell you from my own personal experience that it is!

The call for presentations was published a few days ago and the community is already buzzing with enthusiasm.

There are already 29 talks proposed, and I expect more to be submitted…

Hey, even if you’re not Portuguese, you can still attend. We have some guest speakers who will be presenting in English, and most attendees have at least a reasonable grasp of the language.

In the past we’ve had talks on many topics, from the very technical, like specific languages or methodologies, to more general stuff like sessions on usability or best practices. We’ve had sessions on databases, both traditional SQL and NoSQL ones, and on becoming an entrepreneur. We’ve even had a workshop on lockpicking 🙂

Has all this caught your attention? I hope so…

So head on over to codebits and check the site. All this information, and much more, is there for you to read, and perhaps we’ll get a chance to meet at codebits.

If you’d like to give a presentation, you’d be most welcome to. Please read the guidelines presented on the site and submit your draft.

Cheers.

I’ve been taking a look at our resolvers and I was surprised by some of the results I found.
I ran a tcpdump for 10 minutes capturing packets sent to one of our resolvers and extracted the names being queried.
During those 10 minutes that particular resolver answered 1.25 million queries for 250 thousand distinct names.
Looking through the list, there were many names resulting from misconfigured equipment and other mistakes, but those sit at the low end of the query counts. It’s the high end, the most commonly resolved names, that actually interests us.
The list is topped by a name that is hard-coded into some of our clients’ routers. Having several hundred thousand of those devices out in the open making queries does skew the results, so I ignored those queries and just looked at the rest of the names. I ordered them by frequency, and here is a brief analysis of the top 50 names.
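In case you’re curious, the tallying itself is trivial to script. Here’s a rough sketch in Python; the input file name and the placeholder for the hard-coded router name are made up for the example, and it assumes the queried names have already been pulled out of the tcpdump output into a text file, one per line.

    #!/usr/bin/env python
    # Rough sketch: tally the most frequently queried names.
    # Assumes the names were already extracted from the tcpdump output
    # into a text file, one name per line. 'queries.txt' and ROUTER_NAME
    # are made-up placeholders.
    from collections import Counter

    ROUTER_NAME = "router.example.net"  # stand-in for the hard-coded name

    def top_names(path, limit=50):
        counts = Counter()
        with open(path) as handle:
            for line in handle:
                name = line.strip().lower().rstrip(".")
                if not name or name == ROUTER_NAME:
                    continue  # skip blanks and the skewing router queries
                counts[name] += 1
        return counts.most_common(limit)

    if __name__ == "__main__":
        for rank, (name, hits) in enumerate(top_names("queries.txt"), 1):
            print("%2d. %-40s %d" % (rank, name, hits))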

As one might expect, at the top of the list comes ‘www.facebook.com’, but I was actually surprised to find so many names related to Facebook. There are also ‘static.ak.fbcdn.net’, ‘apps.facebook.com’, ‘profile.ak.fbcdn.net’, ‘pixel.facebook.com’, ‘creative.ak.fbcdn.net’, ‘platform.ak.fbcdn.net’, ‘external.ak.fbcdn.net’, ‘static.ak.connect.facebook.com’, ‘photos-g.ak.fbcdn.net’, ‘photos-b.ak.fbcdn.net’, ‘photos-e.ak.fbcdn.net’, ‘static.ak.facebook.com’, ‘photos-c.ak.fbcdn.net’, ‘photos-a.ak.fbcdn.net’, and if I had dug deeper, I would certainly have found more names.
In case you haven’t figured it out, fbcdn stands for Facebook content delivery network, and ak means Akamai.
Out of the top 50 names queried, 15 belong to or are related to Facebook. That is impressive.

The second most popular name being queried was a root server. Not sure I understand why, but there were many, many queries resolving ‘a.root-servers.net’. A close third was Google’s ‘www.google-analytics.com’. No surprise here, as it is probably the most widely used analytics solution today.
Fourth place was taken by our own VoIP proxy, which is always nice to see 🙂
In fifth place we have ‘google.com’, followed by ‘www.youtube.com’ and ‘www.google.com’. Funny that our local ‘www.google.pt’ only made 13th place.
Also related to YouTube are names like ‘i1.ytimg.com’, ‘i2.ytimg.com’, ‘i3.ytimg.com’, ‘i4.ytimg.com’, which show up at the lower end of the top 50.
There is also ‘googleads.g.doubleclick.net’, and ‘pagead2.googlesyndication.com’ which are self-explanatory.

Then there are a couple of NTP servers, and at least one anti-virus name I recognize.

This was just a trial run, and I found the results pretty interesting.
Maybe I can automate this and see what other surprises lurk in the data.

Last Friday after lunch, I got an alarm that a certain system was down. It wasn’t one with direct impact on clients, as it’s a backend system mostly used for running scripts and collecting application info. An antique, so to speak… I checked our Nagios and also Cacti, and sure enough, we had lost contact with that system. I finished what I was doing and went to the datacenter to check it out, thinking it would be a quick fix.

I got there, and after connecting to the console I was greeted by the expected kernel dump. The words “out of memory” immediately came to mind.

The system was completely unresponsive, so I rebooted it. And that’s when the fun began…

It needed a file system check. Said file system check aborted at about 80%, prompting me to enter single-user mode and run the fsck myself, which I did. I confirmed the device’s name, entered the command with the ‘-y’ option to answer yes to every prompt, and pressed enter. It started to chug along, spat out the usual messages about fixing inodes, and then it crashed again. It just said “e2fsck exited with signal 11”. Things were not looking good. Signal 11 is a SEGFAULT, and that usually involves memory…

Since the system had 2 disks in RAID 1, I broke the mirror and removed one of the disks as a backup, in case things got even messier. I booted with just one disk and tried again. Still no luck. fsck still segfaulted, which is something I had never seen before. I googled for it and all I got were a few old pages (dating from 1999 to 2002, if I remember correctly). Some pointed to memory problems, others suggested disk corruption. By that point I was getting pissed; this was taking too long. I returned upstairs, got a Finnix CD from a colleague, and this time I brought 2 new disks similar to the ones installed in the server. I used these disks to build a clean mirror, and then fooled the system into booting from one of the problematic disks. It started to mirror that disk onto the blank one. I now had a backup of sorts 🙂

Then I continued, trying to boot from the CD and running fsck from there… it still broke. With help from my colleague, who was also curious as to what had happened, we tried a couple of options. It kept breaking: it would start running the fsck, fix lots of errors, and then segfault. Then, out of sheer curiosity, we tried to mount the disk. It mounted! We tried reading from it and it looked like gibberish. We agreed that disk was destroyed.

I was already considering reinstalling the system when my colleague suggested booting from the CD and trying to read the other disk. I wasn’t too happy. That was my backup; I didn’t want to mess with it. He argued he’d mount it read-only, and we went ahead. This copy also mounted as before, but now we could actually see lots of files. Apparently everything was there… We quickly used the information on that disk to configure and mount one of the NFS shares the system was already allowed to access, and then copied everything onto that storage. It took a little time, but it finished without a single error. I checked several files and they all looked perfect.

I rebuilt the mirror using this copy and one of the blank disks, and then ran another fsck. Once again I got the usual screenfuls of errors, but this time it ran through to the end without crashing. I admit I was a little suspicious as I removed the CD and booted the system, but it worked.
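For the record, the salvage part boiled down to a handful of commands run from the rescue CD. Here is a rough sketch of the sequence as a small Python wrapper; the device name, mount points and NFS export are all made up for the example, and it assumes Linux software RAID with an ext3 filesystem, which may not match that particular box.

    #!/usr/bin/env python
    # Rough sketch of the salvage sequence run from the rescue CD.
    # /dev/sdb1, the mount points and the NFS export are made-up
    # placeholders; ext3 on Linux software RAID is an assumption,
    # and the mount points are assumed to already exist.
    import subprocess

    def run(cmd):
        print("+ " + " ".join(cmd))
        subprocess.check_call(cmd)

    # 1. Mount the surviving half of the broken mirror read-only.
    run(["mount", "-o", "ro", "/dev/sdb1", "/mnt/rescue"])

    # 2. Mount the NFS share the box was already allowed to access.
    run(["mount", "-t", "nfs", "filer:/exports/rescue", "/mnt/nfs"])

    # 3. Copy everything off before touching the disk any further.
    run(["rsync", "-a", "/mnt/rescue/", "/mnt/nfs/oldbox/"])

    # 4. Only then let e2fsck loose on the filesystem.
    run(["umount", "/mnt/rescue"])
    run(["e2fsck", "-y", "/dev/sdb1"])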

I was ready to trash the system and reinstall it from scratch, but a little patience and some stubbornness sometimes pay off. It was Friday, and if I had reinstalled, it would probably not have been ready before everyone left, and I’m not sure I could have found out everything needed to restore the system to its production state.

Lessons learnt:

  • Make sure every system is fully documented. This one wasn’t, and that made us all the more willing to try hard to recover the disks.
  • Don’t panic, and take the extra seconds to make sure you don’t mess up. Our priority was to salvage the data, and I tried to achieve that by breaking the mirror and keeping a copy. It turns out one of the disks really was messed up; by breaking the mirror we preserved a salvageable copy.
  • Don’t try to do it all alone. By working with a colleague, you can discuss important steps and minimize risk.
  • Lastly… don’t just give up.

Of course, things could have turned out worse. The disks could have been irrecoverably damaged, and the only way out would have been installing new disks and reinstalling from backup… But then again, when I first went downstairs I was expecting a 10-minute stay, and I think I ended up staying there for over 3 hours.

I’ve been reading a copy of Web Operations by John Allspaw, Jesse Robbins and a bunch of other equally knowledgeable people.

My boss lent me his copy, and I found it so good that less than halfway through the book, I decided I had to get my own copy. Yes, I think it’s that good!

If you’ve been paying any attention to how any reasonably large company creates and deploys services for the web, you probably have heard about devops. The concept isn’t new, but lately it has been getting more attention. It has found some success in bridging the gap between developers and operations people. It’s about time we stop blaming one another 🙂

The book is great, especially in the way it is written, with lots of real-life stories, some good, some bad, and you can actually empathize with some of the problems they faced.

Most of the stuff in the book (at least as far as I’ve read) isn’t new. But seeing it there, printed on paper, letting you read about other people’s hard-earned lessons, is pretty darn good. I can relate to some of those issues, as I’ve lived through similar problems, even if at a smaller scale.

Seeing that our solutions were similar to the pros’ does make one feel warm and fuzzy.

This is one book everyone working in operations should read. And so should most of the people developing for the web.

The good news is that my copy arrived today, so I’ll be returning my boss’ copy tomorrow. Let’s see what he has to say when he starts to dig into it 🙂