The Future of Twitter

While it’s true that a remarkable amount of information can be found within a tweet (URLs to similar topics, real-time commentary, and related facts), how practical would the results of a Twitter-like search engine really be? As it stands today, the Twitter search feature leaves
much to be desired, but it is a great way to monitor current events and topical trends straight from the fingers of the human consciousness. When major news stories break (and sometimes they even break on Twitter), you can see the near-immediate reaction from the microblog scene.

However, there are problems with “real-time” search. “Real-time” must be balanced with relevance. Searchers don’t always want the latest tweet, because a much more informative or relevant tweet may have been posted seconds before. There is also the problem of
integrity: just because several people echo something doesn’t make it true. People often question the integrity of Wikipedia content, and many academic institutions ban it as a citable information source. The question of information integrity is a dilemma Twitter shares with any
user-generated content site.

Let’s suppose that, in an ideal world, all the information that comes through Twitter is true. How would a search engine trying to incorporate real-time Twitter content grade the value and quality of each tweet in order to rank it? User authority could be established over
time through a system of votes of confidence, but that would create an environment where the user, not the information, defines quality and relevance.
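To make that concern concrete, here is a minimal sketch of the idea, assuming a purely hypothetical vote-of-confidence ledger rather than any real Twitter feature or API. Notice that the ranking function never looks at the tweet text at all; whatever authority the author has accumulated decides the order.

```python
from collections import defaultdict

# Hypothetical vote-of-confidence ledger: each entry is (voter, tweet_author).
# Nothing here reflects a real Twitter feature; it only illustrates how
# authority attaches to the user rather than to any individual tweet.
votes = [
    ("alice", "bob"),
    ("carol", "bob"),
    ("dave", "erin"),
]

authority = defaultdict(int)
for _voter, author in votes:
    authority[author] += 1  # one vote of confidence = one point of authority

def rank_tweets(tweets):
    """Order (author, text) pairs by the author's accumulated authority.

    The text itself never influences the ranking -- exactly the problem
    described above: the user, not the information, defines relevance.
    """
    return sorted(tweets, key=lambda t: authority[t[0]], reverse=True)

print(rank_tweets([("erin", "detailed breaking report"), ("bob", "one-line quip")]))
# -> bob's quip outranks erin's report, purely on author authority
```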

Perhaps we use something other than user authority, such as allowing people to vote on comments, Digg-style. That way, when someone queries the real-time search engine, they find the most popular tweets first. However, this creates three new problems: (1) people tend to
vote up funny or otherwise entertaining comments, (2) spammers artificially vote up content, and (3) popular topics linger in the results. Over time (meaning a day or two), the popular tweets would continually build up and fill the top results, moving the results outside the window of real-time and making them look more like the search we know today.
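A short sketch makes that trade-off visible. The tweet data and the half-life parameter below are invented for illustration, and discounting votes by age is just one possible mitigation, not anything Twitter or Digg actually does; the point is simply that a raw vote count keeps a days-old favorite on top, while an age-decayed score lets fresher tweets surface.

```python
import math
import time

# Hypothetical tweet records: (text, votes, posted_at). Invented data,
# used only to illustrate the popularity-versus-freshness tension.
now = time.time()
tweets = [
    ("two-day-old viral joke", 5000, now - 2 * 86400),
    ("breaking report, minutes old", 40, now - 300),
]

def raw_popularity(tweet):
    """Pure vote count: the viral joke stays on top indefinitely."""
    return tweet[1]

def decayed_popularity(tweet, half_life_hours=6.0):
    """Votes discounted by age, so old favorites fall out of the real-time window."""
    _text, votes, posted_at = tweet
    age_hours = (now - posted_at) / 3600.0
    return votes * math.exp(-math.log(2) * age_hours / half_life_hours)

print(sorted(tweets, key=raw_popularity, reverse=True)[0][0])      # the old joke wins
print(sorted(tweets, key=decayed_popularity, reverse=True)[0][0])  # the fresh report wins
```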

This paradox highlights an inherent problem with a true real-time search engine. Information needs time to grow in authority, much like the process of building natural content links, and real-time search results can only be snapshots of that moment in search history, soon to be
bumped off by the next batch of tweets.

This could mean that the search results page will become a vertical aggregator of information, pulling pieces from authority sites related to the query, from social media platforms, and from traditional search properties. One great example of this evolution in action is the Ask.com results for high-volume queries like “U2.” The first listing is pieced together from several information sources, and the right side of the page lists quick links to the Wikipedia page, songs, and pictures. The true natural listings appear lower on the page and are just one piece of the whole U2 query result.
Contrast this with the Google results, where the entire page is natural listings with a few video and blog links thrown in.

Both engines are working toward the integration of Twitter-like information into the search results, but from different ends. At some point they should reach a middle ground where the traditional natural listings and the social media listings appear side by side. That would create a better searcher experience and would surely grow market share for whichever search engine can pull it off.
