Google Maps: Confusing Time & Place

One of the most incredible pieces of software developed in the past few years is Google Street View. The notion that you can use your computer to see what a neighborhood looks like feels like magic. What’s more, you can virtually drive through the neighborhood, experiencing it like a local.

I mean, a person had to drive a car to take those photos, a machine had to stitch them all together, and the result is now available to you anywhere you can get a decent web connection.

The mind boggles.

What’s even more mind-boggling to me is that sometimes your linear trip down the street turns into a trip down memory lane. As you seamlessly slide your “drive” from place to place along the street, you suddenly find yourself whisked through time. Drivers change, leaves fall off trees, and, more interestingly, sometimes entire cityscapes change.

A building disappears and its replacement appears, or you change an angle and suddenly there’s a construction site.

This happened to me recently looking at New York’s Astor Place.

I wanted to see the new building that’s going to house IBM’s Watson group. Here it is, looking from Astor Place & Lafayette:

Watson Building

I drove my virtual car across Lafayette so that I could get a closer look:

Missing Building

But wait: what building is that? That yellow building wasn’t there a second ago…

Let’s drive a bit further and look backwards:

Watson Building Under Construction

Time has shifted yet again and now we see the building under construction.

On the one hand the techie in me thinks there’s nothing interesting here: Google’s servers will update their imagery over time and eventually become consistent.

The romantic in me likes to imagine a scenario where the algorithm occasionally burps and historical images begin to appear when you least expect them. Twenty-five years from now you’re trying to find a place on the map when suddenly a long-vanished building appears, only to disappear when you turn the virtual vehicle’s view.

Or perhaps there’s a new feature that turns Google Maps into a virtual version of Ed Ruscha’s Every Building on the Sunset Strip.

Ruscha has been going back to the Strip every few years to update his photos, but now Google will have done it globally on a regular basis. You’ll literally have a map of every place in the world at regular intervals since 2005.

Algorithms Without Feeling

I was playing around with the Google Maps API’s Autocomplete today. This is the feature that lets you type something like “Vancouver” and turn it into the corresponding city. As you type each letter, it guesses what place you’re looking for.

Google, being Google, has put a remarkable amount of intelligence into making this work.

For example, you can specify whether you want all types of places, just businesses, or just city-like entities.
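
If you’re curious what that restriction looks like, here’s a minimal sketch using the Maps JavaScript API’s Places library (assuming it’s loaded with `libraries=places`; the input element id is my own invention):

```typescript
// A minimal sketch: wire the Places Autocomplete widget to a text input
// and restrict its suggestions to businesses ("establishment").
// Use ["(cities)"] for city-like entities, or omit `types` to get everything.
const input = document.getElementById("place-search") as HTMLInputElement;

const autocomplete = new google.maps.places.Autocomplete(input, {
  types: ["establishment"],
});

autocomplete.addListener("place_changed", () => {
  const place = autocomplete.getPlace();
  console.log("You picked:", place.name);
});
```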

The set of guesses is also dynamic. Google is famous for using data from one arm of the business to train other parts of the business. For instance, while taking Street View photos they’re also training driverless cars. And it looks like they use the frequency with which people search for location names to rank the places you might be looking for.

How do I know this? Well, here’s the autocomplete result from this afternoon when I restricted it to just businesses.

Type in the letter “s” and, awkwardly, the first result is Sandy Hook Elementary School – the site of last year’s horrific school shooting.

Autocomplete Example

Google’s unfeeling algorithm has no idea that Sandy Hook is infamous and almost certainly not the place people are looking for, so it’s going to keep serving it up.

Here’s a Google Trends report on the term “Sandy Hook Elementary School”. You can see the spike and decay in “interest”. Apparently it hasn’t decayed enough to have been removed from the autocomplete:

Google Trends

There’s nothing nefarious about Google’s algorithm (and dealing with Black Swan events that skew your search data is tough), but moments like this make us aware of the odd new world we live in.

Unemotional algorithms dictate what is topical; this shows up in unexpected places, and we can literally watch and measure the rate at which moments slip into our collective memory.
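
If you wanted to actually measure that rate, a quick back-of-the-envelope approach is to fit an exponential decay to the Trends curve. A sketch with made-up numbers (the real data would come from the Trends export):

```typescript
// Illustrative only: fit interest(t) = A * exp(-k * t) to Trends-style data
// by ordinary least squares on log(interest). The numbers below are made up.
const weeks = [0, 1, 2, 3, 4, 5, 6, 7];
const interest = [100, 61, 38, 24, 15, 10, 7, 5];

const ys = interest.map(Math.log);
const meanX = weeks.reduce((a, b) => a + b, 0) / weeks.length;
const meanY = ys.reduce((a, b) => a + b, 0) / ys.length;

const slope =
  weeks.map((x, i) => (x - meanX) * (ys[i] - meanY)).reduce((a, b) => a + b, 0) /
  weeks.map((x) => (x - meanX) ** 2).reduce((a, b) => a + b, 0);

const k = -slope; // decay rate per week
console.log(`half-life of "interest" is about ${(Math.LN2 / k).toFixed(1)} weeks`);
```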

The Google is Doomed Meme (or How to Beat Facebook)

There’s a popular meme going around right now on the Internet about how Google is in trouble.

Much has been written (read the links within that last link) about how Google’s search quality is declining and how it has launched a slew of unsuccessful products (Wave, Buzz, TV). The company also has a new CEO and seems to be investing shareholder money in some weird things. And the future belongs to the two-headed social/mobile beast that is Facebook and Apple.

The “doom” meme usually extrapolates this to predict the end of the company. The story is that search quality declines, people start to go to other search engines, and then advertising dollars follow. This becomes a positive feedback loop (the dreaded doom loop) and the company no longer has any excess cash to fund new products and can’t find that next billion-dollar market.

Great story – and we’re humans so we need stories to make sense of our world – but is it true or is this just a narrative fallacy?

Like all great stories, I’d argue that there’s some truth and some fiction – with enough of both that we can’t resist talking about it endlessly.

So where to start?

Let’s begin with the new product development story. Google Wave and Google Buzz were complete and utter flops – and that’s perfectly fine.

People who complain about Google launching failed products are missing two points:

First, great engineering companies need to have a “ship it” culture. Products need to launch or you end up building Xerox PARC (and everyone knows how that ended). Google is going to ship new products and get them in front of users for feedback.

Second, Google’s a large company adopting a portfolio strategy for finding the next great market. Venture capitalists have done this for years: they build a portfolio of companies; a few return a large multiple of the investment, while most break even or lose money. A few winners make up for all the losers.

Most companies aspire to do this. They dream of being able to launch many different products, invest in the winners and cull the losers. It’s business doctrine that you should do this.

But the reality is that most companies simply can’t marshal the resources and build a culture to do so. Google is doing exactly that and, because they’re in one of the most over-analyzed industries on earth, there’s a lot of attention paid to their failures without considering the context.

In fact, if you were a senior manager at Google, you would probably be looking at your product portfolio and thinking it’s okay.

Search continues to throw off cash. You’ve found a few new billion-dollar businesses in video (via YouTube) and display ads (via DoubleClick). People increasingly use you for all things map-related. And you’ve launched one of the most successful products ever in Android (it’s worth remembering that a few years ago people said that Europe and Japan were going to own mobile; Google and Apple have between them undone that).

In this context, a few failed products are fine; in fact, they’re expected, and they reinforce that you’re doing the right thing.

So let’s talk about the second complaint: the decline in search quality, errors in maps, AdSense, AppEngine, etc.

There’s definitely a nugget of truth here as a few issues come together at once.

The first issue is simply company scale.

Google’s got 25,000+ employees now, and running a company that large requires a different approach than it did 10 years ago. Most companies that size lose their way via a lack of focus. Management spreads itself too thin trying to find the next sexy market while driving more cash out of existing ones.

A lot of the complaints about Google today suggest that there’s a distinct lack of focus going on. The little mistakes (places appearing incorrectly on maps, services working intermittently) are characteristic of a company that lacks focus and grew too fast.

There’s nothing sexy about fixing this; it requires discipline and people who are willing to do the grunt work required to build out the right set of processes. This isn’t fun, but doing it builds the bedrock of the company and gives engineers more time to work on the next billion-dollar product.

So what about spam?

Google rose to power on the back of the PageRank algorithm, which gave us better search results and initially punished spammers. However, whole industries and companies have grown up around reverse-engineering it to better promote their own agendas. Given 10 years, I’d say that people have done a pretty good job, and three years ago was probably the point where the algorithm reached peak effectiveness.
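
As a reminder of what those industries are gaming, the core idea of PageRank fits in a few lines: a page’s score is the chance a random surfer ends up there, computed by power iteration. A toy sketch (the 0.85 damping factor comes from the original paper; the graph is made up, and this is nothing like Google’s production system):

```typescript
// Toy PageRank by power iteration. links[i] lists the pages that page i
// links to. damping = 0.85 is the value from the original paper.
function pageRank(links: number[][], damping = 0.85, iters = 50): number[] {
  const n = links.length;
  let rank: number[] = new Array(n).fill(1 / n);
  for (let it = 0; it < iters; it++) {
    const next: number[] = new Array(n).fill((1 - damping) / n);
    links.forEach((outLinks, page) => {
      // Each page splits its current rank evenly among the pages it links to.
      for (const target of outLinks) {
        next[target] += (damping * rank[page]) / outLinks.length;
      }
    });
    rank = next;
  }
  return rank;
}

// Tiny three-page web: 0 -> 1, 1 -> 0 and 2, 2 -> 0.
console.log(pageRank([[1], [0, 2], [0]]));
```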

The other trend is that we’ve gotten much more confident asking Google questions we wouldn’t have asked before. When you’re thinking of buying something (“best iPhone case”) or doing something (“good restaurants in Chinatown”), you have probably typed that question into Google once or twice.

And you probably got spammed with results.

The reason isn’t so much that Google’s algorithm was wrong as that it lacked the right context.

The reality is that we now routinely search for things that require context for an answer yet we don’t provide any context.

When you are looking for a good restaurant, you have a set of hidden assumptions that only you know.

For instance, that you don’t like pork, that you think the New York Times’ reviews are garbage, whatever.

Google doesn’t know this, and so instead it provides you with some sort of context-free, lowest-common-denominator result (in food, likely a link to a few ‘trusted’ local newspapers and reviews from spammers/people you’ve never met on Yelp).

The “search quality” here is impossible for someone at Google to objectively measure. Only you can know if the result is “good” or “bad” because only you know what you were looking for.

The geniuses at Google are highly aware of this problem and are working on tools to get you to give them context.

The most thinly-veiled attempt at this is HotPot (you literally rate restaurants; they find other people who rate like you, and you get recommendations). A more subtle example was SearchWiki, and now Google Stars:

Google Stars

When you star something, Google remembers what keyword you entered and what links you liked. They can use this to boost the type of results you receive in the future.
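
Mechanically, that’s just a personalized re-ranking layer on top of the generic results. A purely hypothetical sketch (the boost factor and the idea of keying on starred domains are my inventions, not Google’s actual signals):

```typescript
// Entirely hypothetical: re-rank results by boosting domains the user has
// previously starred. The signals and boost factor are invented.
interface Result {
  url: string;
  score: number;
}

// In the real product this would come from the user's saved stars.
const starredDomains = new Set(["seriouseats.com", "nytimes.com"]);

function personalize(results: Result[], boost = 0.25): Result[] {
  return results
    .map((r) => {
      const domain = new URL(r.url).hostname.replace(/^www\./, "");
      const score = starredDomains.has(domain) ? r.score * (1 + boost) : r.score;
      return { ...r, score };
    })
    .sort((a, b) => b.score - a.score);
}
```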

However, all of these attempts at generating context feel a lot like rearranging the deck chairs on the Titanic. They require a huge change in consumer behaviour in order to get a good result. In a world with too little time, it’s highly unlikely that most people are going to take the time to hand-annotate their search results.

Instead, people are going to expect that Google learn the right context for a search.

We are context-creating machines, and 500 million of us do it regularly at Facebook. All that friending, sharing, liking and commenting is nothing more than giving Facebook context: who we are, what we like, who is like us.

We’re all scared they’re going to use it to send us freakishly tempting ads, but it could just as easily be used to give us freakishly accurate searches. (The jargon for this is “social search”.)

Zuckerberg et al. know that and they also know that they’re weak in the blood and guts of traditional search (things like indexing, crawling, etc.). Hence they’re jealously guarding the data and working with Microsoft’s Bing to try and come up with a solution. I have no doubt that dozens of Bing and Facebook engineers are currently building a search engine.

And that would be a real threat to Google.

No one has dethroned them as the king of search – spam and all – because their search satisfices. No search engine does a materially better job so users don’t switch and the world is littered with the detritus of search engines that were marginally better than Google (think Cuil). All had great technology but weren’t good enough to get users to change behaviour.

But a contextual search engine could be good enough to get people to switch.

And that could kick off the doom loop.

So how, short of buying Facebook (and they’re not for sale), could Google avoid this?

If I were Google, I’d do the opposite of Facebook.

Facebook is a classic walled garden where you can put data in but you can’t get it out and can’t share it with anyone. Moreover, rather than open up, they simply try to build whatever service they think consumers want.

Want to send a message to a Facebook friend? You’ve got to use Facebook’s messaging platform. Share photos? Facebook’s photo app.

It’s the digital equivalent of Henry Ford’s “Any customer can have a car painted any color as long as it’s black.”

Contrast this with the Open Web. It’s full of lots of little sites that are good at one thing and generate lots of context about us. We review restaurants on Yelp. We mark things as worth reading at Instapaper. We listen to music on Last.fm. We write notes on SimpleNote.

Moreover, we frequently do this with other people, building out a graph of interesting people for each of these different services. One interpretation of Twitter is that who we follow is nothing more than a pure expression of our interests.

But to date, no one has been able to open up the value that’s locked inside the data and networks of each of these services. This is partly because it’s a search problem, and search is really hard.

It’s also partly because each of these services is small and can’t capitalize on their graphs/data.

Enter Google.

Imagine that Google decided to create a framework that allowed any third-party service to dump your data and your graph into Google’s search results, if you chose to allow it.

When you performed a search in Google, they would mine your choice of services and friends to get you a contextual answer that was right for you.

Ask a question about Italian restaurants? Those starred recommendations from Yelp come along as do the opinions of your friends.

Looking for a good iPhone case? The tweets from that designer you follow suddenly come back.

Information on collaborative filtering? The search results also include information from the notes you made in SimpleNote.
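
Plumbing-wise, here’s one way to imagine that fan-out: at query time, call out to whatever services you’ve connected, pull back your own data, and blend it into the ranking. Every service name and API shape below is invented for illustration:

```typescript
// Invented API shapes throughout: at query time, fan out to the services
// the user has opted in, pull back their own data, and blend it into results.
interface ContextItem {
  source: string; // e.g. "yelp.example"
  text: string;
  url: string;
}

async function gatherContext(
  query: string,
  services: string[],
): Promise<ContextItem[]> {
  const calls = services.map(async (svc) => {
    // Imagined per-service endpoint exposing the user's own data, with consent.
    const res = await fetch(`https://${svc}/context?q=${encodeURIComponent(query)}`);
    return (await res.json()) as ContextItem[];
  });
  return (await Promise.all(calls)).flat();
}

// gatherContext("italian restaurants", ["yelp.example", "twitter.example"])
// would surface your starred reviews and your friends' tweets alongside
// the generic results.
```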

Sounds interesting, but how to get this data from each of these small-ish companies?

1) Create the framework and open source it. Google’s partway there with OpenSocial (tech geeks: remember that?).

2) Align the incentives. Offer participating sites a cut of ad revenue from Google searches that reference their data/graph. And then turn around and offer them better ads on their sites because only you can aggregate people’s interests across multiple services.

The big fear for sites here would be disintermediation: that people go to Google instead of their site. However, this is unlikely to be the case. Mobile phones are showing us that people like to use best-of-breed apps for single tasks (reading, note-taking, reviewing, etc.); we don’t use one general-purpose app. This is also reinforced by the decline of dashboard-cum-widget products like Netvibes or iGoogle.

Moreover, Google would only handle the ‘search’ part of the equation: all the content creation and browsing and checking updates, etc. would occur on the respective sites. And sites would get more money from better ads and searches on Google.

I for one hope this happens. As a consumer, I love the thought that I’m free to choose the best services in the world and can harness the power that they each offer to create a sum that is greater than the value of its parts. And then the doom meme can finally die.

NOTES:

This blog post is based in part on a lot of interesting thinking from several different people. I recommend reading each of their posts in their entirety.

A Glimpse of Google’s Future?

You may or may not know it, but Google tests hundreds of different versions of their search service every day. They’ve turned their users into a giant set of unwitting testers who are constantly providing feedback on how to improve the product. This unparalleled ability to conduct tests is one of the skills that makes them currently unsurpassed in search.
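
The mechanics behind this kind of testing are standard: deterministically hash each user into a bucket, and serve the experimental page to a fixed slice of buckets so any one user gets a consistent experience. A sketch of the usual approach (illustrative only; Google’s actual system is certainly far more elaborate):

```typescript
import { createHash } from "crypto";

// Hash a stable user id into one of 1,000 buckets; an experiment then claims
// a fixed slice of buckets so each user always sees the same version.
function bucketFor(userId: string, buckets = 1000): number {
  const digest = createHash("sha256").update(userId).digest();
  return digest.readUInt32BE(0) % buckets;
}

// Serve the experimental results page to a 1% slice of users.
function inExperiment(userId: string): boolean {
  return bucketFor(userId) < 10;
}

console.log(inExperiment("user-12345"));
```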

Yesterday I turned into one of those testers.  While searching for a particular term at work, I came across this design:

Here’s what the same search looked like when performed in a different browser:

What can we glean from this?  Well, a few things:

Google’s test index (the number of documents it queries against) may be much bigger than its current index. The test page returned 5.2M documents vs. 1.3M for the normal version.

Location is going to become more important in your search (no surprise in an increasingly mobile world). Note that in the test version, I can change my location from NYC. This matters: if I search for “Zanzibar”, the first result is the bar in Hell’s Kitchen, not the beautiful island off the coast of Africa.

Finally, Google thinks that the type of content you’re looking for is as important as what you’re looking for. If you were to click “More” under “Everything” in the test version, a list showing Images, Videos, etc. would have opened up (this normally appears at the top of the page). Just like my employer or Best Buy, Google’s trying to make it easier for you to find info using faceted search.

Why type “Zanzibar photos” when you could type “Zanzibar” and then click “Images”? While that takes two steps, it allows you to easily flip between different types of info about Zanzibar, rather than having to re-type your query.
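
A faceted UI boils down to filtering one result set by content type instead of issuing a new query. A toy sketch (all data invented):

```typescript
// Toy faceted search: one query's results, filtered by content type on the
// client, so flipping facets never re-runs the query. All data is invented.
type Facet = "web" | "images" | "videos" | "news";

interface Doc {
  title: string;
  facet: Facet;
}

const results: Doc[] = [
  { title: "Zanzibar (Wikipedia)", facet: "web" },
  { title: "Zanzibar beach photos", facet: "images" },
  { title: "Zanzibar travel video", facet: "videos" },
];

function byFacet(docs: Doc[], facet: Facet | "everything"): Doc[] {
  return facet === "everything" ? docs : docs.filter((d) => d.facet === facet);
}

console.log(byFacet(results, "images")); // flip facets without re-typing "Zanzibar"
```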

This is an interesting way for them to start integrating all their different search properties (web, Images, YouTube, Scholar, Books, etc.), and I hope it makes it into the real world.

Can Someone Please Explain This To Me?

I was too young to appreciate the beautifully-lucid-yet-logically-untenable propaganda dreamed up by apparatchiks in the Soviet Union but, fortunately, China is the new Russia. Check out some quotes from China’s Global Times in response to the U.S. asking for freedom of information on the web:

The hard fact that Clinton has failed to highlight in her speech is that bulk of the information flowing from the US and other Western countries is loaded with aggressive rhetoric against those countries that do not follow their lead.

In contrast, in the global information order, countries that are disadvantaged could not produce the massive flow of information required, and could never rival the Western countries in terms of information control and dissemination.

I don’t really understand the logic in the above statement, but, hey, let’s see where this goes!

It is not because the people of China do not want free flow of information or unlimited access to Internet, as in the West. It is just because they recognize the situation that their country is forced to face.

Unlike advanced Western countries, Chinese society is still vulnerable to the effect of multifarious information flowing in, especially when it is for creating disorder.

Yikes. China can create the second-biggest economy in the world, send astronauts into space, and become the manufacturer for the world, but it still needs its government to protect its people from themselves? Because apparently, despite all their achievements over the past few years, they are incapable of determining what is ‘true’ and what is a ‘lie’? What b.s. Kudos to Google for threatening to pull out.