Monthly Archives: July 2010

100 million Facebook profiles: that is what Canadian security researcher Ron Bowes says he has collected and published to some P2P sites.

Once again people rise in anger at Facebook, accusing it of not properly protecting their account information. Oh what evil creatures could cause such horror…


This man created a crawler that harvested information on 100 million Facebook accounts. There is no mention of a hack, or any privileged information being discovered. Just information that was already there… Probably already accessed and processed by every search engine in existence.

Facebook is a social network site. Hell, it is The Social Network site, and the whole point of a social network site is to promote contact between people. That means there has to be some amount of information visible for every user. How else can we know if Johnny User is really the guy or gal we want to befriend? I have a Facebook account and I have protected some of my information, but short of deleting my account there is always some information available.(1) My name, my picture if I have one uploaded, and any other information I did not choose to protect, will be visible.

If Mr. Bowes, or anyone else, comes along and finds that information about me, it’s not a hack, not a security flaw, just the normal process of checking out someone’s account. It happens millions of times every day. I do it sometimes. Facebook even suggests we do it: it presents us with 2 or 3 profiles we might be interested in. Clicking on these users gets us access to their profile. Depending on their privacy settings we can see some information about these accounts. At the very minimum, if they have their privacy settings locked down, we can see their name and little else; if they haven’t bothered, we can see most of their information, with photos and videos and whatever else they’ve uploaded. Nothing surprising there…

The only added value from Mr. Bowes is that he automated the process… No big deal.

And then he published that info. Here is the big problem. Did he have a right to publish that data? I don’t think so, and even if he did, I still think he shouldn’t have done it. Of course I know nothing of Mr. Bowes’ intentions or motives, but it’s still debatable.

Let’s face it, Facebook doesn’t have a good rep when it comes to security, but let’s not exaggerate our criticism. Let’s save it for matters that really matter, not some nonsense like this.


(1) There are allegations that Facebook does not delete accounts and maintains user data even when the user has asked for his account to be deleted. This is much more serious than the above “problem”, because if these allegations are true, it means data is being kept against the user’s wishes. Dangerous and probably illegal…

One of the things I have on my (short) list to blog about is SNMP.

I’ve been thinking about how to approach it, and today one of the sysadmins I follow on Twitter (@standaloneSA) tweeted that he had written an entry about SNMP. I went to his blog to check it out, and I highly recommend it. It’s much better than what I would have written, so I’ll just point people his way.

You can reach his blog entry here.  Well done, Matt.

Today I glanced across an ITWorld newsletter with some Unix tips. I knew them all except one.

Well, actually it’s not exactly a tip or even a new command. It’s just a new way of using a command that is quite well known. We’re all familiar with the echo command, I’m sure, but I was quite surprised when I saw this:

$ echo *

The wildcard is expanded and all the files in the current directory are printed out on a single line, separated by spaces.

I stared at it for a moment. It is so brilliantly simple that it’s really amazing.

Of course, you won’t see any files that begin with a ‘.’, and any symbolic links will be listed along with your other files. The biggest problem is filenames with spaces in them. Not that any unix or linux guy would do that, right? But if you do, they’ll show up mixed in with the rest of the files.
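Here’s a quick demonstration of both quirks, run in a throwaway directory (the directory and file names below are made up purely for illustration; this is bash with its default settings):

$ cd "$(mktemp -d)"
$ touch one two .hidden "file with spaces"
$ echo *
file with spaces one two
$ echo * .*
file with spaces one two . .. .hidden

Notice how “file with spaces” blends into the rest of the output, and how the dot file only shows up when you add a second wildcard for it.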

It’s not perfect, but it’s just so amazingly simple. The echo command has been around since the early versions of unix, and so have shell wildcards; I just never thought of putting the two together. Old dogs can still learn old tricks after all.

As every sysadmin knows, the systems we manage are subject to change. New applications are installed, existing applications have their workload changed, and other similar events are all part of an evolving system. I’d venture that very few systems don’t grow or evolve during their lifetime. Sometimes you know what the expected evolution is right at the beginning of the system’s life, but most often things aren’t planned well enough to give you that foresight. We need a way to keep up with the system’s behavior, and to use that knowledge to forecast how it will behave in a month’s or a year’s time.

There are a variety of tools out there, but my favorites are Nagios and Cacti. Both are well known in the industry, both are actively developed, and both can use SNMP to collect their information. For this article I’m mostly referring to Cacti, as it allows us to gather information and display it as a graph. Nagios is used to generate alarms when (some of these) values exceed certain thresholds, but that’s a whole different article…

From the very beginning of the system’s life, we should start collecting performance values. These include basic information such as CPU load, memory usage, network traffic (for every interface) and disk occupation (for every partition in use), as well as information related to the system’s use. Depending on what applications we have running, this can be a mix of metrics to keep up with. For example, for a webserver we should monitor the number of processes, the number of requests/s, the average time per request, and any other value that helps us understand the system’s performance. For a database system, we should monitor the number of queries/s, the number of open files, and the average time per query, among others. Database systems have a number of items that we can monitor, and the more we use, the better picture we will have of our system.
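As a rough sketch of the kind of collection Cacti does behind the scenes, here are a few of those basic values fetched by hand with snmpget. I’m assuming a host (called myserver here, a made-up name) running snmpd with a read-only community string of public and the standard UCD-SNMP-MIB and IF-MIB objects enabled; adjust for your own setup.

$ # 1-minute load average (UCD-SNMP-MIB::laLoad.1)
$ snmpget -v 2c -c public myserver .1.3.6.1.4.1.2021.10.1.3.1
$ # available real memory in kB (UCD-SNMP-MIB::memAvailReal.0)
$ snmpget -v 2c -c public myserver .1.3.6.1.4.1.2021.4.6.0
$ # inbound octets on interface 2 (IF-MIB::ifInOctets.2)
$ snmpget -v 2c -c public myserver .1.3.6.1.2.1.2.2.1.10.2

Cacti simply runs these kinds of queries on a schedule, stores the results, and graphs them for you.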

Mixing system information with application performance is very useful, as we’ll see next. At first this information doesn’t seem to help much, but this is the system’s baseline, akin to its heartbeat, showing you how the system behaves as a whole and how it evolves as you add new clients or new functionality.

As you gather data you can see some trends emerging from the graphs. In all probability you shouldn’t expect a lot of flat lines; you’ll have a lot more sloped ones. These allow you to forecast your system’s behavior into the future. This is invaluable information, allowing you to pinpoint when you will have capacity problems, and giving you data to back up your request for new servers. You can also show management what will happen if you don’t get them.
Correlating application metrics with system metrics is what allows you to really make an informed decision regarding capacity. You can ask questions like “what will happen if the requests per second increase by 20% or 40%?” This is the main advantage of baselines for management: they give you a visual representation of how your systems behave and where you stand in capacity terms.
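As a trivial worked example of that kind of forecast (the numbers are invented for illustration): say a partition held 320 GB ninety days ago, holds 360 GB today, and its capacity is 500 GB. A quick awk one-liner gives you the growth rate and the time you have left:

$ awk 'BEGIN { rate=(360-320)/90; printf "%.2f GB/day, full in about %.0f days\n", rate, (500-360)/rate }'
0.44 GB/day, full in about 315 days

Real data is rarely that linear, but even a crude extrapolation like this is usually enough to back up a purchase request.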

Baselines also serve another purpose, one much dearer to us sysadmins. They show us what our systems are like under “normal” usage. If there is a problem somewhere, you will probably see a deviation in the metrics you are collecting. Sometimes it will help you pinpoint the exact cause of the problem. Have you been slashdotted? That would show up as a significant increase in TCP connections, with higher network traffic, leading to higher system load and very probably using up all your memory. Having a ‘picture’ of the system under normal load is something you truly appreciate once you’re in trouble wondering what is happening. But by then it is too late to start collecting data to analyze. So start now, start building your baseline today, and I assure you it will help you in the future.
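As a trivial example of checking against that baseline (assuming a Linux box with the classic net-tools netstat installed), you can count the established TCP connections and compare the number with what your graphs normally show:

$ netstat -tan | grep -c ESTABLISHED
47

If the baseline usually hovers around a few dozen, as in this made-up output, and the command suddenly reports thousands, you already have your first clue.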

As per my previous entry, I decided to clean up my PC and removed a ton of dust. I also removed the CPU cooler in order to clean it. After I put it back together, I started thinking about the thermal compound that was on the CPU. It looked good, but I’ve had the PC for 2 years and never changed it, even though I’ve removed the cooler a couple of times, so I thought it might be a good idea to get some new compound and see what effect it had.

So today I bought a syringe with 2.5 g of Arctic Silver Céramique. I got home (through the hottest day of the year), opened up the box again and replaced the thermal compound.

There was a slight improvement. Honestly, I was expecting a little more, but that’s what the values read.
The system was left to run idle during both measurements, just as I had done yesterday.

So repeat after me: A little maintenance can help your system. Clean those dustbunnies, and renew the thermal compound while you’re at it 🙂