October 2006

Big Brands Want Real Numbers

by Joey deVilla on October 30, 2006

As I've mentioned before, the online advertising world, even though it's leaps and bounds more accountable than the offline world, has a few challenges to address:

Internet companies have had great success selling advertising space, in part because the effectiveness of those ads is supposedly so easily measured. But marketers, even as they continue to push more of their ad budgets online, are starting to ask for better proof.

A group of large companies, including Kimberly-Clark, Colgate-Palmolive and Ford Motor have said that by the middle of 2007, they will demand that online publishers hire auditors to check their ad and viewer counts. And analysts say they believe that online ad growth over the long haul will depend on the eagerness of large advertisers like these to shift more dollars online.

Meanwhile, reacting to advertiser questions, online companies like Google, Yahoo and LookSmart have begun to meet with industry groups to answer basic questions on how click-based advertising works.

Nice of the Times to list LookSmart along with Google and Yahoo! One of these things/Is not like the others/One of these things/Just doesn't belong…anyway.

I'm not surprised that large brands want audited figures from publishers (like, say, nytimes.com) and ad networks (like Google or DoubleClick). The trouble is that the state of the art in audience measurement relies on cobbling together IP addresses, logins and user accounts, user agent strings, and cookies to identify a unique visitor, none of which can be definitively correlated to a single person. The unstable, temporary composition of a "unique visitor" is, after all, what makes clickfraud so difficult to demonstrate clearly. Oddly enough, trying to pin down your audience by watching what happens on your own website, simple as that sounds, may not be the best way to go.
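To make that fragility concrete, here's a minimal sketch (the precedence of login over cookie over IP-plus-user-agent is my own illustration, not any vendor's actual method) of how those signals get cobbled into a single visitor key:

```python
import hashlib

def visitor_fingerprint(ip, user_agent, cookie_id=None, login=None):
    """Derive a 'unique visitor' key from whatever signals are available.

    The precedence here (login, then cookie, then IP + user agent) is an
    assumption for illustration; real analytics pipelines weight these
    signals differently.
    """
    if login:            # strongest signal: an authenticated account
        basis = "login:" + login
    elif cookie_id:      # next best: a first-party cookie, but easily cleared
        basis = "cookie:" + cookie_id
    else:                # weakest: IP + UA, shared behind NATs and proxies
        basis = "ipua:" + ip + "|" + user_agent
    return hashlib.sha1(basis.encode()).hexdigest()[:16]

# Two requests from the same person can still look like two "visitors":
a = visitor_fingerprint("203.0.113.7", "Mozilla/5.0 (Windows NT 5.1)")
b = visitor_fingerprint("203.0.113.8", "Mozilla/5.0 (Windows NT 5.1)")  # new DHCP lease
print(a == b)  # False: same user, counted twice
```

The point is that every branch of that function fails in a different way, which is exactly why "unique visitors" resist auditing.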

With rigorous sample control, panel-based services like comScore Media Metrix or Nielsen/NetRatings should offer a far more accurate picture of traffic to a given network or site, and provide baselines for clickfraud detection, an issue that Fred Wilson (responding to a TechCrunch post) touches on here:

"One point of controversy was around Digg’s claim of 20 million unique monthly visitors and steep monthly growth, whereas the Comscore’s most recent September report shows only 1.3 million monthly unique visitors and flat growth since April (see chart below). Comscore is notoriously flaky, and these numbers are for U.S. households only. Comscore is almost certainly significantly under-reporting Digg traffic."

Michael is one of the best bloggers ever and I read Techcrunch every day. But I think he got this one wrong. Comscore is not "flaky". They are a third party measurement service. They don't always get everything right. None of the third party measurement services do. But they are the best of the lot in my opinion. Now I am biased as I have been an investor in Comscore since 1999 and have been on the board since then.

Even so, the sample selection may bias towards consumers versus, say, technology leaders; men versus women; rich versus middle-income; and so on. I'm not saying problems don't exist, but at least internet panels are based on more information than surveys and logbooks.
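For what it's worth, the projection behind a panel estimate is simple in principle. Here's a naive sketch (the numbers are purely illustrative, and real services weight panelists by demographics before projecting):

```python
def panel_estimate(panel_visitors, panel_size, population):
    """Project panel activity onto the whole population.

    A naive ratio projection; real panel services adjust for
    demographic skew in the sample before projecting.
    """
    return population * panel_visitors / panel_size

# Illustrative numbers only:
panel_size = 1_000_000           # metered panelists
panel_visitors = 6_500           # panelists who visited the site this month
us_online_population = 200_000_000

estimate = panel_estimate(panel_visitors, panel_size, us_online_population)
print(f"{estimate:,.0f} estimated monthly uniques")
```

A server-log count far above such an estimate doesn't prove fraud, but it marks exactly where the discrepancy needs explaining.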

Link


TechCrunch notes more money going into Monitor110:

Monitor110, the pre-launch web monitoring service for hedge fund traders we wrote about in September, will announce on Monday that it has closed a Series C round of financing with $11 million from new and existing investors. The company, which will begin offering its product for general subscription early next year after three years of development, has now raised a total of $20 million.

Of course this product hasn’t come to market yet and it could be an abysmal failure. I don’t think it will be, however, because the opportunity to leverage new technologies (RSS most importantly) and the energy behind this startup in particular are too big to miss completely. Someone, if not a number of people, is going to nail the new real-time research of emerging social media.

I'd love for this category of useful intel aggregation to break out and be successful, but I have to wonder if it ever will.

It's not a question of demand: as long as there are organizations, like institutional investors or hedge funds, that can make a buck on the slightest bit of information asymmetry, there will be a need for this type of service. I think the problem may simply be in the nature of the information.

If you look at the company's view of where information advantage remains to be exploited, you'd have to infer that the most valuable information is being created early on, where few readers see it (at least compared to the mainstream media):

Monitor110's chart of information value over time, from creation to the "Historic Point of Investor Visibility."

Almost by definition, those high-value sources are the hardest to qualify, either semantically as relevant or structurally as authoritative. If they could be easily qualified, they would already be known, and would sit far further along the timeline, towards the "Historic Point of Investor Visibility."

So, Monitor110 has picked a tough nut to crack. I have no doubt these are smart guys, and perhaps they have the relevance and ranking algorithms required to infer high value very quickly from very few data points, but believing that means believing they've out-thought, among others, Google. That makes me think twice.

Then again, I'm blogging here and toiling away at a salaried job, not making billions in a hedge fund, or even millions selling to them.

Link


Eve isn't the Real Attacker!

by Joey deVilla on October 30, 2006

If you've read any of Bruce Schneier's books on computer security, you've read security scenarios featuring the character of Alice, who wants to send a message to Bob (characters Schneier borrowed from Ron Rivest, who introduced them in the 1978 RSA paper with Adi Shamir and Leonard Adleman). Some of these examples use the character of Eve — as in “eavesdropper” — who wants to know the content of Alice's messages to Bob.

The “Alice and Bob” stories have become the source material for a number of stories, speeches, jokes and even songs [MP3 link] in the geek world, the latest of which is this comic in which Eve tells her side of the story. The technology may be new, but the backstory is one of the oldest in the world:


Click the picture to read the full comic.

Link


CAPTCHAs: More Effective Than You've Been Led to Believe

by Joey deVilla on October 30, 2006

Every now and again, I read articles like this one that claim that CAPTCHAs — those “please enter the text from this image” tests meant to verify that a human is filling out a web form — are no longer effective, as spammers have come up with algorithms and countermeasures to defeat them.

Jeff Atwood of the programming blog Coding Horror argues the opposite; he says that they work, and you only have to look to the 'net for proof:

Although there have been a number of CAPTCHA-defeating proof of concepts published, there is no practical evidence that these exploits are actually working in the real world. And if CAPTCHA is so thoroughly defeated, why is it still in use on virtually every major website on the internet? Google, Yahoo, Hotmail, you name it, if the site is even remotely popular, their new account forms are protected by CAPTCHAs.

In the article, he runs a number of experiments in which he takes graphics of text with varying degrees of distortion and runs them through SimpleOCR's demo page. He found that only a slight bit of distortion — not enough to fool even a five-year-old — was enough to confound SimpleOCR.  He also found that the text distortion might not even be necessary: just a little “noise” added to the picture caused SimpleOCR to fail to recognize any of the characters in the text.

He also points to his own experience on his blog, which uses what he calls “Naive CAPTCHA”, in which the CAPTCHA text is the same every time, and he's still stopped 99% of his comment spam.
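A naive CAPTCHA really is just a static challenge checked on the server. A minimal sketch (the function name, the field handling, and the fixed word are my own illustration, not Jeff's code):

```python
import hmac

NAIVE_CHALLENGE = "orange"  # same text every time; the word here is illustrative

def is_human(submitted_answer):
    """Accept the comment form only if the static challenge was answered.

    hmac.compare_digest avoids leaking the answer via timing, though for
    a naive CAPTCHA even a plain == would do.
    """
    return hmac.compare_digest(submitted_answer.strip().lower(), NAIVE_CHALLENGE)

print(is_human("Orange "))  # humans type the word they see, whitespace and all
print(is_human(""))         # a generic spam bot leaves the field blank
```

The point is exactly Jeff's: spam bots are written generically, so even a never-changing challenge filters out the overwhelming majority that were never built to answer it.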

He provides a CAPTCHA recipe that, he says, offers “more protection than most websites need.” All it needs to do is combine these elements:

  • high contrast for human readability
  • medium, per-character perturbation
  • random fonts per character
  • low background noise
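As a sketch of how cheap that recipe is to implement, here's the per-character randomization expressed as data (the font pool and perturbation ranges are illustrative; actual rasterization would go through an imaging library):

```python
import random

FONTS = ["Georgia", "Courier", "Verdana", "Times"]  # illustrative font pool

def captcha_spec(text, seed=None):
    """Produce a rendering plan: one font and a small perturbation per character.

    High contrast and low background noise are properties of the final
    rasterization step, which is left to an imaging library.
    """
    rng = random.Random(seed)
    return [
        {
            "char": ch,
            "font": rng.choice(FONTS),           # random font per character
            "rotate_deg": rng.uniform(-15, 15),  # medium, per-character perturbation
            "y_offset_px": rng.randint(-3, 3),
        }
        for ch in text
    ]

for glyph in captcha_spec("HX7K", seed=42):
    print(glyph)
```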

Here's an example of a CAPTCHA created following this recipe:

Sample of an effective CAPTCHA from 'Coding Horror'.

Jeff also debunks the scenarios in which spammers use “Turing Farms” — either “sweatshops” of low-paid people to respond to CAPTCHA challenges or the much-publicized trick of showing people porn in exchange for answering a CAPTCHA challenge. They're just too expensive to be worth the effort, which is why CAPTCHAs work: they hit spammers where it hurts — in the pocketbook.
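The economics are easy to check on the back of an envelope. Every number below is a purely illustrative assumption, not a figure from the article:

```python
# Illustrative assumptions, not measured figures:
wage_per_solve = 0.01          # $ paid per human-solved CAPTCHA in a "Turing farm"
messages_before_ban = 50       # spam messages sent before the new account is killed
revenue_per_message = 0.00005  # $ earned per spam message delivered

cost_per_account = wage_per_solve
revenue_per_account = messages_before_ban * revenue_per_message
print(f"cost ${cost_per_account:.4f} vs revenue ${revenue_per_account:.4f} per account")
```

Under assumptions anywhere near these, each CAPTCHA-gated account costs more than it earns, which is the "pocketbook" argument in one line of arithmetic.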

Link


Program Lets Anyone Print Boarding Passes…To Gitmo!

by Joey deVilla on October 30, 2006

A grad student in Indiana has created a boarding pass generator for NWA flights.

A 24-year-old computer security student working on his doctorate at Indiana University Bloomington has created a Web site that allows anyone with an Internet connection and a printer to create and print fake boarding passes for Northwest Airlines flights.

By entering your name and plugging in information about the flight — flight number, gate, seat number, departing city, destination, departure, and arrival times and class — the site generates a boarding pass the program's creator says will get you past security checkpoints, even without ID.

Christopher Soghoian, creator of "The Northwest Airlines Boarding Pass Generator," knew he would be opening up a can of worms by writing the program and creating the site, but says it's the only way to show people how deeply flawed airport and airline security are.

I completely disagree: everyone knows that the superficial airport security theater—badly designed as it is, and as dependent as it is on dubious information and proof of identity—is, at best, purely for show. Many serious security thinkers have made the point, over and over again, that the way we've designed security at our airports doesn't make us more secure at all. This "research" serves no one, and it doesn't advance our understanding of the problem one bit.

On the other hand, the publicity generated by this goof will probably cause a general security freak-out among bureaucrats and politicos. The nearly inevitable result will be yet more meaningless security ritual the next time you fly.

Thanks, buddy.

Link [via Interesting People]


Apple Shows .Mac Mail Some Love, Web 2.0-Style

by Joey deVilla on October 30, 2006

GigaOm has the lowdown on Apple's .Mac mail reno, with a 2.0 twist:

A few weeks ago we mentioned that Apple’s dot mac email service was getting a bit of a Web 2.0 makeover, one that was long overdue. Well, the new email is live now, and it is a perfect embodiment of how Apple would incorporate the Web 2.0 technologies such as Ajax.

Nice to see Apple give .Mac some much-needed love, and mail is a good place for Apple to work their "fast follower" (except without the "fast" this time) magic of refining a familiar user experience. After all, they weren't the first with an MP3 jukebox, a portable MP3 player, or photo management software, but they still managed to do each better, and make it easier for users, than anyone else had up 'til then.

Even so, .Mac has a long way to go before it's the network hub of your Mac life—your identity in the cloud. Until then, the $99 for .Mac looks like something of a ripoff compared to what you can do with Google, Yahoo!, or MSN/Microsoft Live for free.

Link


Google Ad Sales Reorgs Around the Customer

by Joey deVilla on October 30, 2006

According to Read/Write Web, Google's moving to reorganize the way they serve large advertising customers:

"Three of my most credible resources, including DM News’s Giselle Abramovich, are indicating plans for a significant re-organization at Google. On the re-org, says Ms. Abramovich,

“What this means is that there would be one global account director per account, that pulls in resources to sell as needed – PPC (pay-per-click), Print, Radio, Video, Display, etc.”

This means Google will utilize different types of ads (CPC, CPM, CPA, etc) over all media channels – search, mobile, video, audio, etc.

The benefit for Google's customers is that it enables them to target certain leads across different types of media. They can do that from one 'console' and they will work with 1 Google salesperson/account manager on their account. Of course will the large advertising agencies be happy with this scenario of Google providing a one-stop shop?"
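Comparing CPC, CPM, and CPA buys on one console usually means normalizing everything to effective CPM (eCPM), the cost per thousand impressions. A minimal sketch with illustrative rates:

```python
def ecpm_from_cpm(cpm):
    """CPM is already a price per thousand impressions."""
    return cpm

def ecpm_from_cpc(cpc, ctr):
    """Cost per click * clicks per impression * 1000 impressions."""
    return cpc * ctr * 1000

def ecpm_from_cpa(cpa, ctr, conversion_rate):
    """Cost per action * actions per impression * 1000 impressions."""
    return cpa * ctr * conversion_rate * 1000

# Illustrative rates only:
print(f"${ecpm_from_cpm(4.00):.2f}")                  # flat $4 CPM display buy
print(f"${ecpm_from_cpc(0.50, ctr=0.01):.2f}")        # $0.50 CPC at a 1% CTR
print(f"${ecpm_from_cpa(20.00, ctr=0.01, conversion_rate=0.02):.2f}")
```

One account director quoting one normalized price across search, print, radio, and video is precisely the consolidation the re-org is after.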

In some ways, it doesn't make a difference to the big agencies. They're still going to be the strategic adviser to the advertiser. Of course, they lose some of the advertising channel fragmentation that made their planning and trafficking services necessary, but having Google consolidate a bunch of channels may lower costs for everyone.

This is the way big brand advertisers want to do business—one point of contact to control their targeting and spend.

The big losers will be the smaller players in the search ecosystem with significant large-scale clients: the larger, dedicated search engine marketing (SEM) firms. Their promise was to optimize search campaigns horizontally across search engines, so an advertiser (or their agent) would go to them to spend across multiple search engines. Google can ace them out with this reorganization, based in part on their increasing dominance of search engine marketing, and assuming they can find the right inventory and technology to support a major advertiser's brand and rich media campaigns (i.e., stuff that isn't search). You can bet YouTube figures into that thinking. For smaller-scale advertisers, where search is the biggest, if not the only, line item in their marketing budget, this doesn't mean much—they'll still need search marketing consolidators.

It's an interesting hint of a maturing Google.

Link
