How to find user information from an email address

Photo by Mzelle Biscotte

I’ve had a lot of people ask me about the FindByEmail service I set up, so I’ve decided to release the code as open source. You pass it an email address, and it queries 11 different public APIs to discover what information those services hold on the user with that email address.

The code is under the 2-clause BSD license, to make commercial reuse easy. It’s all in PHP, and you’ll need to add your own API keys for some of the services to config.php before you can use it yourself. It’s up on GitHub at

http://github.com/petewarden/findbyemail

If you do find more services that offer an email-to-user mapping, either let me know and I’ll add them, or fork the project and I’ll merge your changes back in. The module currently supports these services:

Gravatar
Yahoo
43things
Vimeo
Amazon
Brightkite
AIM
Friendfeed
Google Social Graph
Rapleaf
DandyID

The last four aggregate information from multiple services, so the module can sometimes retrieve Twitter, LinkedIn and Facebook account data as well. There’s also some code for querying Skype, but since that involves running a Skype client instance inside a headless X session, I’ve commented that code out for now.

C Hashmap

Photo by crazybarefootpoet

I still remember my excitement when I discovered Google after years of struggling with awful search engines like AltaVista, but every now and again it really doesn't find what I'm looking for.

I was just going to bed on Tuesday night when I remembered I had to start a job processing 500 GB of data, or it would never be done in time for a deadline. This process (merging adjacent lines of data into a single record) was painfully slow in the scripting languages I tried, so I'd written a tool in plain C to handle it. Unfortunately I'd never tried it on this size of data, and I quickly discovered an O(n^2) performance bug that ground progress to a halt. To fix it, I needed a hashmap of strings to speed up a lookup, so I googled 'c hashmap' to grab an implementation. I was surprised by how sparse the results were; the top hit appeared to be a learning project by Eliot Back.

Before I go any further: if you need a good plain C hashmap that's been battle-tested and generally rocks, use libjudy. Don't do what I did; trying to build your own is a silly use of anyone's time! My only excuse is that I thought it would be quicker to grab something simpler than libjudy, and I'd had a martini…

I stayed up until 2am trying to get the hash map integrated, discovering some nasty performance bugs in the implementation as I did. For instance, the original code only reallocated once the hash map was completely full, which meant that for a large map a lookup on a missing key often searched linearly through most of the entries, since the probe only stops when it finds a gap. I also removed the thread primitives and converted it over to use strings as keys, with a CRC32 hashing function.
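
To make that reallocation problem concrete, here's a minimal sketch of the general approach, written for this post rather than taken from either Eliot's project or my repository: string keys, a simple bitwise CRC32 hash, linear probing, and a table that doubles as soon as it's 75% full instead of waiting until every slot is occupied. The hashmap_put/hashmap_get names and all the details are illustrative only.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

typedef struct {
    char *key;     /* owned copy of the string key */
    int value;
    int used;      /* 1 if this slot holds an entry */
} entry_t;

typedef struct {
    entry_t *entries;
    size_t capacity;
    size_t count;
} hashmap_t;

/* Bitwise CRC32 of a string - compact, and good enough as a hash function. */
static uint32_t crc32_hash(const char *s) {
    uint32_t crc = 0xFFFFFFFFu;
    while (*s) {
        crc ^= (uint8_t)*s++;
        for (int i = 0; i < 8; i++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xEDB88320u : (crc >> 1);
    }
    return ~crc;
}

static hashmap_t *hashmap_create(size_t capacity) {
    hashmap_t *map = malloc(sizeof *map);
    map->entries = calloc(capacity, sizeof *map->entries);
    map->capacity = capacity;
    map->count = 0;
    return map;
}

static void hashmap_put(hashmap_t *map, const char *key, int value);

/* Double the table as soon as it's 75% full, so linear probes stay short
 * instead of degenerating into a scan of most of the entries. */
static void hashmap_maybe_grow(hashmap_t *map) {
    if (map->count * 4 < map->capacity * 3) return;
    entry_t *old = map->entries;
    size_t old_capacity = map->capacity;
    map->capacity *= 2;
    map->entries = calloc(map->capacity, sizeof *map->entries);
    map->count = 0;
    for (size_t i = 0; i < old_capacity; i++) {
        if (old[i].used) {
            hashmap_put(map, old[i].key, old[i].value);
            free(old[i].key);
        }
    }
    free(old);
}

static void hashmap_put(hashmap_t *map, const char *key, int value) {
    hashmap_maybe_grow(map);
    size_t i = crc32_hash(key) % map->capacity;
    while (map->entries[i].used && strcmp(map->entries[i].key, key) != 0)
        i = (i + 1) % map->capacity;        /* linear probe to the next slot */
    if (!map->entries[i].used) {
        map->entries[i].key = malloc(strlen(key) + 1);
        strcpy(map->entries[i].key, key);
        map->entries[i].used = 1;
        map->count++;
    }
    map->entries[i].value = value;
}

static int hashmap_get(const hashmap_t *map, const char *key, int *value_out) {
    size_t i = crc32_hash(key) % map->capacity;
    while (map->entries[i].used) {          /* a gap means the key isn't present */
        if (strcmp(map->entries[i].key, key) == 0) {
            *value_out = map->entries[i].value;
            return 1;
        }
        i = (i + 1) % map->capacity;
    }
    return 0;
}

int main(void) {
    hashmap_t *map = hashmap_create(16);
    hashmap_put(map, "cheese", 1);
    hashmap_put(map, "beer", 2);
    int value;
    if (hashmap_get(map, "beer", &value))
        printf("beer -> %d\n", value);
    return 0;
}

Resizing at a fixed load factor is what keeps lookups close to constant time; once a linear-probed table is nearly full, a miss has to walk most of the entries before it finds a gap.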

I don't make any claims for the strength of my version, but at least it has a unit test and I've used it in anger. Thanks to Eliot for the original; here's my updated source:

http://github.com/petewarden/c_hashmap

Hopefully this will help out any other late-night coders like me searching for 'C hashmap'!

Is it time to use page-views as loan collateral?

Photo by Joshua De Laughter

I recently finished The Big Rich, a history of the Texas oilmen by the author of Barbarians at the Gate. It was striking how similar the early days of Texas oil felt to the current web startup world: full of skeptical old companies, a few new-born giants, and a crowd of wildcatters convinced they were just one lucky strike away from riches.

One detail that really struck me was an innovation in financing that enabled the independent operators to build their businesses. Bankers in Houston began giving out loans with the collateral based on the estimated reserves underneath a wildcatter's oil wells. This was unheard of, but it made perfect commercial sense. As long as the banks could rely on a trustworthy geological report, the reserves represented a steady stream of cash to guarantee any loan. In return, the independents were able to re-invest in the gear and labor needed to sink new wells and expand.

This got me wondering: is this a better model than the current angel/VC equity standard for web financing? If you have a reasonably reliable income stream from advertising on a site, are there banks comfortable enough scrutinizing audited visitor reports to lend you money against it? Nothing I'm working on fits that description, but I'm genuinely curious whether we're at a stage of maturity in the industry where this sort of thing makes sense.

I see a lot of businesses out there that are never going to be the next Google, but that could be decent money-spinners with some reasonable financing. The VC model relies on swinging for the fences, so most of the solid prospects I see end up either bootstrapping painfully slowly, taking angel money and then disappointing their investors with comparatively unexciting growth, or just hitting the end of the runway.

How to speed up massive data set analysis by eliminating disk seeks

Photo by Pchweat

Building fanpageanalytics.com means analyzing billions of pieces of information about hundreds of millions of users. At this sort of scale, not only do traditional relational databases become impractical for my needs (even loading a few tens of millions of rows into a MySQL table and then creating an index can take days), but key-value stores also fail.

Why do they fail? Let's walk through a typical data-flow example for my application. I have an input text file containing new information about a user, so I want to update that user's record in the database. Even with a key-value store that means moving the disk head to the right location to write that new information, since user records are scattered arbitrarily across the drive. That typically takes around 10ms, giving an effective limit of around 100 users per second. Even a million users will take over two hours to process at that rate, with almost all the time spent tapping our toes waiting for the hard drive.
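
As a sanity check on those numbers, here's the same back-of-the-envelope arithmetic as a tiny C program; the 10ms seek time is just the rough figure quoted above, not a measurement:

#include <stdio.h>

int main(void) {
    const double seek_seconds = 0.010;                     /* ~10ms per random disk seek */
    const double updates_per_second = 1.0 / seek_seconds;  /* ~100 seek-bound updates/sec */
    const double users = 1000000.0;                        /* one million user records */
    const double hours = users / updates_per_second / 3600.0;
    /* Prints roughly: 100 updates/sec -> 2.8 hours for 1000000 users */
    printf("%.0f updates/sec -> %.1f hours for %.0f users\n",
           updates_per_second, hours, users);
    return 0;
}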

Stores like Mongo and Redis try to work around this by caching as much as they can in RAM and using delayed writes of large blocks to disk, so that updates don't block on disk seeks. This works well until the data set is too large to fit in RAM. Since my access locations are essentially random, the system then ends up thrashing as it constantly swaps large chunks in and out of main memory, and we're back to being limited by disk seek speed.

So what's the solution? SSD drives don't have the massive seek bottleneck of traditional disks, but I'm still waiting for them to show up as an option on EC2. Instead, I've re-engineered my analysis pipeline to avoid seeks at all costs.

The solution I've built is surprisingly low-tech, based entirely on text files and the unix sort command-line tool. For the user record example, I run through my source data files and output a text file with a line for each update, beginning each line with the user id, e.g.:


193839: { fanof:['cheese', 'beer'] }

I then run sort on each of these individual files; since the command is very efficient and each file is only a couple of gigabytes, that only takes a few seconds per file. I can then take several hundred of these sorted sub-files and use sort's -m option to merge them very quickly into a single sorted uber-file, which avoids the thrashing you get when sort tries to handle a file larger than RAM.

What does this buy me? Within this uber-file, all the information related to a given user id is now on adjacent lines, e.g.:


193839: { fanof:['cheese', 'beer'] }
193839: { fanof:['hockey', 'ice fishing'] }
193839: { location:'Wisconsin' }
193839: { name:'Sven Hurgessoon' }

It's now pretty simple to write a script that runs through the uber-file and outputs complete records containing all of a user's information from the multiple source files, without doing any seeking: each finished record is just appended to a new row or file, and all of the source data for a user is on adjacent lines.
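
To show how little work that final pass is, here's a small sketch of such a script in C; it's an illustration written for this post, not the actual code from my pipeline. It assumes the sorted uber-file arrives on stdin with lines in the id-colon-payload form shown above (produced, for example, by running sort on each chunk and then sort -m across the sorted chunks), and it concatenates each user's payloads onto a single output line:

#include <stdio.h>
#include <string.h>

/* Reads a sorted uber-file on stdin, where every line starts with a user id
 * followed by a colon and all of the lines for one id are adjacent. Writes
 * one combined line per user to stdout, concatenating the payloads. */
int main(void) {
    char line[4096];
    char current_id[256] = "";
    int have_user = 0;

    while (fgets(line, sizeof line, stdin)) {
        line[strcspn(line, "\n")] = '\0';       /* strip the trailing newline */

        char *colon = strchr(line, ':');
        if (!colon) continue;                   /* skip malformed lines */

        /* Split into the id (before the colon) and the payload (after it). */
        size_t id_len = (size_t)(colon - line);
        if (id_len >= sizeof current_id) id_len = sizeof current_id - 1;
        char id[256];
        memcpy(id, line, id_len);
        id[id_len] = '\0';
        const char *payload = colon + 1;
        while (*payload == ' ') payload++;

        if (!have_user || strcmp(id, current_id) != 0) {
            /* New user id: finish the previous record and start the next one. */
            if (have_user) printf("\n");
            printf("%s:", id);
            strcpy(current_id, id);
            have_user = 1;
        }
        printf(" %s", payload);                 /* purely sequential, no seeking */
    }
    if (have_user) printf("\n");
    return 0;
}

Because the input is already sorted, the script only ever reads forward through the file, so the disk streams sequentially instead of seeking.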

This same technique can be applied to any attribute you want to index in your source data. You can use the fan page name as the key in the first part of each line instead, which is how I'm assembling the data on each topic.

So in summary, I'm using sort to pre-order my data before processing to avoid seeks. I'm sure I'm not the only person to discover this, but it's not something that I've run across before, and it's enabled me to cope with orders-of-magnitude larger data sets than my pipeline could handle before.