
Hardware's Next Little Things by Dan Weingrod

(Photo credit: http://www.flickr.com/photos/aussiegall/)

It seems that we’ve finally passed the point of expecting some sort of big breakout hit to come out of SXSWi. Given its size and scope, most attendees were focused on dealing with the usual long lines for high-profile speakers and panels, snagging invites to secret parties, or waiting on even longer lines for the sponsored ones. On top of this, it’s become clear that with no one wanting to try anything remotely daring outside of SXSW-approved events (homeless hotspots, anyone? And by the way, it looks like they worked), we’re left relying on the organizers to create the groundwork from which we might find the next big, or little, thing.

The problem with this, of course, is that much of the programming for SXSW was sealed pretty much six months ahead of the festival, which means that the “latest and greatest” breakout hit may already have happened. This seemed to be the case looking at the lineup of keynote speakers: Bre Pettis of MakerBot, Elon Musk, Tina Roth Eisenberg and Julie Uhrman of OUYA. What did all of these speakers have in common? At the core of their offerings and interests is the strong theme of creating physical products in a digital age. Nowhere among these high-profile speakers was a new killer mobile app or a hot new social network. In fact, there wasn’t really much “new” at all. Pettis and Musk did manage to inject some serious new into their presentations: Pettis by announcing MakerBot’s prototype 3D scanner, and Musk by showing off an amazing, freshly minted video of a reusable SpaceX rocket practicing a short takeoff and landing. But without the pull of a breakout hit, it seemed to me that physical applications of digital technologies had become at least a major thread this year. Here are a few of the strings:

Big Sensor

It started for me in a Friday healthcare app session, when a questioner asked how the presenters were planning to take “Big Sensor” into account. Big Sensor? I’d been hearing plenty about Big Data, but this was the first time I’d heard a more specific subset of it defined: the massive and rapidly growing amount of sensor data available. In the new world of the quantified self, where we, and perhaps our doctors, are all tracking our own information, sensors from Fitbits to blood meters to some scary workplace motion trackers are becoming the physical appendages of data networks. Their growing use creates a deeper need for a designed approach to how we use sensor data, how we control it, and how we can take advantage of it while retaining privacy and humanity.
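As a purely hypothetical illustration of what such a designed approach might look like, here is a short TypeScript sketch of a device that keeps raw, identifiable readings local and shares only coarse, de-identified summaries. Every type, field name and interval here is invented for the example rather than drawn from any real product:

    interface Reading {
      userId: string;      // identifies the wearer; in this design it never leaves the device
      timestamp: number;   // epoch milliseconds
      heartRate: number;   // beats per minute
    }

    interface HourlySummary {
      hourStart: number;   // epoch milliseconds, truncated to the hour
      avgHeartRate: number;
      samples: number;
    }

    // Collapse raw readings into hourly averages, deliberately dropping
    // userId so that only de-identified aggregates are shared upstream.
    function summarize(readings: Reading[]): HourlySummary[] {
      const buckets = new Map<number, { sum: number; n: number }>();
      for (const r of readings) {
        const hour = Math.floor(r.timestamp / 3_600_000) * 3_600_000;
        const b = buckets.get(hour) ?? { sum: 0, n: 0 };
        b.sum += r.heartRate;
        b.n += 1;
        buckets.set(hour, b);
      }
      return [...buckets.entries()].map(([hourStart, b]) => ({
        hourStart,
        avgHeartRate: b.sum / b.n,
        samples: b.n,
      }));
    }

The point of the sketch is the design choice, not the code: where aggregation happens, and what gets stripped before data leaves the sensor, is exactly the kind of decision Big Sensor will force.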

Crowdsourced Cars

The day after Elon Musk’s presentation I went to a far more sparsely attended session that took Musk’s approach to physical production and turned it on its head. Local Motors is a company I had heard about before via Neil Perkin, who has championed its crowdsourced approach to automobile production. What’s impressive about Local Motors is their ability to leverage a worldwide network of enthusiasts, experts and professionals, connected by software, to design, develop, build and constantly improve a complex physical product: an automobile. While their Rally Fighter is in production and street legal in the US, they are also developing a limited-edition pizza delivery vehicle for Domino’s and natural gas-powered concept cars for Shell. But the most impressive part of their story was how they worked with DARPA to concept a vehicle for specific requirements in Afghanistan. The result, the XC2V, went from concept to delivery in 14 weeks, an amazingly short period for a vehicle procurement project, or any sort of government procurement project.

Listening to the Local Motors story, it became clear theirs is a case of hardware learning from software. By using a distributed model of design, they can draw on a community of over 35,000 contributors as a virtual workforce; by adapting the Agile and Lean approaches of startups to their related Toyota Production System, they are able to produce limited editions of automobiles, limited for the purposes of continuous improvement. The approach is to build 1,000 vehicles, then pause and optimize, instead of taking on the expense and hassle of the traditional mass-production model. All of this points to the way hardware is rapidly becoming more customized and customizable to a defined user experience. We’ve all gotten used to software that can be tweaked and refined to our specific needs; hardware is now rapidly approaching the same capabilities.

Leap Motion

Leap Motion wasn’t new to me, but it rapidly became one of the smaller-scale breakouts of the show, even though its product had been announced and available for pre-order since at least December. The biggest reason for this lies in one of the critical differences between physical and digital adoption: hands-on experience. Leap had set up a tent where attendees could sample the controller, and the lines outside it, along with a presentation by the company’s founders, created strong word-of-mouth buzz around the product.

http://www.youtube.com/watch?v=Ew_8Uj5RnXs

What the Leap controller represents is another step in the growing world of gestural interfaces. Kinect got this off and running, but Leap takes it a number of steps forward, especially when you consider its price, small form factor and ability to connect with multiple systems. What Leap also brings is a new relationship between physical and digital, and the promise of interfacing with both in the same way. It begins to ask serious questions about our basic device controls, such as buttons, keyboards and menus, but ultimately it starts to close the gap between physical and digital in ways that I look forward to imagining.
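To make that concrete, here is a minimal sketch of what reading the controller looks like from code, using leapjs, the JavaScript client library that shipped alongside Leap’s SDK. It assumes the Leap service is running locally and that leapjs has been installed; the console logging is purely for illustration:

    // Sketch only: assumes the Leap Motion service is running and the
    // leapjs client library is installed (npm install leapjs).
    // leapjs ships without type definitions, hence the `any` typing.
    const Leap = require('leapjs');

    Leap.loop({ enableGestures: true }, (frame: any) => {
      // Each frame carries the hands the controller currently sees.
      for (const hand of frame.hands) {
        const [x, y, z] = hand.palmPosition; // millimetres, relative to the device
        console.log(`palm at x=${x.toFixed(0)}, y=${y.toFixed(0)}, z=${z.toFixed(0)}`);
      }
      // With gestures enabled, the v1 API reports swipes, circles and taps.
      for (const gesture of frame.gestures) {
        console.log(`gesture: ${gesture.type}`); // 'swipe', 'circle', 'keyTap', 'screenTap'
      }
    });

Notice that the controller streams frames rather than button events, so the application, not the device, decides what counts as a click or a swipe. That inversion is where the serious questions about buttons, keyboards and menus begin.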

There were more examples of this deeper blending of the physical and digital landscapes, most notably a full-scale replica of NASA’s James Webb telescope, but one of my favorites brought it back to how marketing might start to use this combination in a far more interesting way than QR codes. In their presentation, Art, Copy & Code, Google demonstrated some interesting and whimsical directions for marketers that blend digital and physical to create more personal communications experiences. My favorite was this version of an Arduino-enabled basketball shoe that talked trash to its owner:

http://www.youtube.com/watch?feature=player_embedded&v=VcaSwxbRkcE

A funny, and admittedly very early, attempt at bringing advertisers into the new environment connecting digital and physical. But when you consider how hardware is making so many small, innovative advances on so many fronts, it’s hard to imagine that we won’t wake up soon to a new normal where connected communication is part of the physical world all around us.

Google Ranks Social by Dan Weingrod

Google's inclusion and ranking of social media results will make search more relevant while bringing more authority to social media.

It’s been a tough few months for Google. There’s been a high-level reshuffle, accusations of theft of search results and steady growth in complaints about the quality of Google’s organic search product. And if this were not enough, the meteoric growth of Facebook and Twitter is beginning to make Google seem just slightly irrelevant.

But Google is not one to be trifled with. It has a way of fighting back, usually with seemingly incremental changes. The most recent of these occurred last week, when it started to include social media as ranked results in the search engine results page (aka SERP). If you’ve been paying attention to your SERPs over the past year you’ve probably seen Twitter results in the page, but they have always been presented as tweet content aggregated in its own section, similar to the way Google presents Local, Image or News results. The difference with this update is that not only is Twitter integrated into the SERP, the tweets now have a real effect on rankings. This example demonstrates how powerful this change is:

In the SERP from a search for a site called “No Right Brain Left Behind,” two of the top five results are attributed to individuals. Not random individuals, but people who happen to be in my Twitter network. Even more important, one of those results is ranked number one, above the other algorithmic “organic” search results.

While visually this seems a small change, the potential here is enormous. Studies have consistently shown that most searchers prefer the top three links on a page and generally don’t look beyond the first page of results. So suddenly we are entering an era where critical positions on the search page can be affected by a mere tweet from a friend. Imagine the scenario where searching for anything from a bicycle to a bank to insurance could bring up a friend’s preferred social media links and opinions above website links. If that isn’t enough, imagine the combined power of word of mouth and high page position that these links will carry.

For the moment, Google is pulling its social results from Twitter, Flickr and Quora. Facebook, the other 800 lb. gorilla, is missing from the party. Bing is already incorporating Facebook Likes into its search results, and while Google apparently has a deal with Facebook, it may still be trying to figure out how to incorporate the firehose of Facebook’s Open Graph into the SERP.

But Google will likely figure it out, and as it does, the world of search will become even more complex. Google’s shift to social will help it in its quest for relevancy. More importantly, it will help Google in its quest for accuracy against the large content-spam sites such as Associated Content and Demand Media. For marketers and advertisers, Google’s inclusion of social media is a powerful validation of social’s role in defining customer interest and preference. It will mean that we have to pay more attention to the role social platforms play and be more prepared to accept the transparency that social media will bring. On the other hand, if we can learn to collaborate with customers and help them tell their stories about brands and experiences, we may be able to make search engines even more valuable tools for marketing.


Instant Analysis by Dan Weingrod

It's too soon to tell what kind of changes Google Instant will bring to Search, but Google may have a loftier goal in mind.

It’s nearly a week since Google Instant launched to great fanfare. Along with the launch came a great deal of immediate analysis, hand-wringing and cynical comment. Now that this period is over, I’m ready to provide my own instant analysis of probably the biggest interface change to one of the world’s most recognizable platforms: how it changes things and what I think it may really be pointing to. The most important thing to consider is that the change is really not that big. I would rank the additional selection tools that Google has slowly migrated into the SERP (search engine results page) as ultimately the more significant change. A lot of the discussion around Instant has centered on the auto-suggest keyword fill that Google presents as you type your query, but this is really just a slight upgrade to the previous non-instant version, which already presented a list of alternative keywords culled from Google’s algorithmic brain. So what is the big deal?

The big deal is in the sum of the parts, not in any single attribute of Instant. If you have used Instant you have seen that not only do you get auto-suggestions, but that as you type or select a keyword the actual SERP content changes on the fly. This is a significant change, and one that Google really likes to crow about. In the name of saving you time and getting you to your results faster, Google is not only suggesting options but presenting them without even the pause that occurs when you hit the “Search” button. As you type a query, the list of words automatically updates and, simultaneously, the page below fills up with ever-changing eye candy. It could be a map, it could be paid search results, it could be images, but whatever it is, it tends to be colorful and distracting, and it seems to focus the user on the upper section of the page.

So if auto-suggest along with a dynamically changing page focuses the user on the upper mid-section of the SERP, could this change search results? I think it might. As searchers get more focused on the sweet spot below the entry bar, the role of the top three paid search ads and, more importantly, the top four or five organic results could become more and more important.
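Mechanically, the interaction pattern is simple enough to sketch. Here is a hypothetical, stripped-down TypeScript version of the “instant” loop described above: re-query after every keystroke once typing pauses, and repaint the results pane with no Search button involved. The endpoint and element ids are invented for the example and are not Google’s:

    const input = document.querySelector<HTMLInputElement>('#query')!;
    const results = document.querySelector<HTMLElement>('#results')!;

    let timer: number | undefined;
    let lastIssued = 0;

    input.addEventListener('input', () => {
      window.clearTimeout(timer);
      // Debounce: wait for a short pause in typing before querying.
      timer = window.setTimeout(async () => {
        const q = input.value.trim();
        if (!q) {
          results.innerHTML = '';
          return;
        }
        const issued = ++lastIssued; // guard against out-of-order responses
        const resp = await fetch(`/suggest?q=${encodeURIComponent(q)}`);
        const hits: string[] = await resp.json();
        if (issued !== lastIssued) return; // a newer keystroke superseded this query
        results.innerHTML = hits.map((h) => `<li>${h}</li>`).join('');
      }, 150);
    });

Even this toy version shows why the page gets so lively: the results pane can repaint several times in the course of typing a single query, which is precisely the ever-changing eye candy described above.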

Below are examples using one of Google’s favorite search terms: “tennis shoes.” You can see that the non-Instant results page on top has a very “boring” background, but actually displays more suggestions than the highly active Instant results page below. My homegrown conclusion is that this is going to make it more and more important for brands to get results into the top section of the page in any way they can. That will mean getting smarter about organic results by using images and other sources, as well as making sure that AdWords buys are placed in the now-even-more-highly-favored top three positions.

[Screenshots: the non-Instant and Instant results pages for “tennis shoes”]

On the other hand, I could be wrong. Google says that it has tested this thoroughly and found that users are not actively affected by the SERP results until they have finished typing or made a selection from one of the suggested keywords. This, of course, is the real conclusion most commentators are drawing from the launch of Instant: it feels like a major change, but we’ll have to wait and see if it really makes a difference. There are concerns that complex long-tail keywords like “red and white sneakers with blue laces” will be trumped by auto-suggestion. There is the philosophical fear that we will all become search borgs ruled by the results Google feeds us. And there is the very real alarm that top suggested keywords will skyrocket in price in the AdWords auction. All of these will require usage over time before we can fully digest their effects and figure out the appropriate responses.

We should also remember that a large share of Google searches are done via toolbars or other distributed methods, and these do not use Instant yet. GigaOm quotes the ad network Chitika as stating that just 18 percent of Google’s traffic came from its home or Instant page on the first day of Instant’s launch. Ultimately, there is another objective that may be the core objective for Instant: mobile. As mobile devices continue to grow in popularity, searches will need to be easier, more user-friendly and less susceptible to the ham-fingered. Google Instant, with its auto-suggestion, instant page revealing and high speed, may simply be a rocket targeted at ownership of this rapidly growing space.


Of Alzheimer’s and Net Neutrality by Dan Weingrod

It’s hard enough to create viable analogies to explain the Web, let alone the net neutrality debate, but the recent breakthrough in Alzheimer’s research might just do the trick.

Earlier this week, newspapers across the globe headlined an immense breakthrough in Alzheimer’s research. A spinal fluid test was shown to be potentially “100 percent accurate in identifying patients with significant memory loss who are on their way to developing Alzheimer’s disease.” This stupendous breakthrough, in a field that has generally seen slow research gains, came seemingly out of the blue. One of the most critical reasons for it was detailed in a follow-up article published yesterday.

In 2003, a number of scientists involved in Alzheimer’s research pushed for a total and open sharing of data among all researchers, universities, corporations and other groups involved in serious Alzheimer’s research. Traditionally, each of these groups had held on to its own data, keeping it as un-normalized information shared only internally. Why? Because the potential profits seemed too astronomical to share. As it became clear that this was not working, researchers realized there had to be a different way. As one researcher put it, “we wanted to get out of what I called 19th-century drug development,” and the sharing of data was key to making that happen. By sharing data openly among all groups, from largest to smallest, capitalists to non-profits, the opportunity was created to blend innovation, technique and insight together to reach this result and others to come.

So what’s the analogy to the net neutrality debate? In their legislative framework addressing net neutrality, Google and Verizon proposed maintaining an open internet while prioritizing some traffic on the Web through “fast lanes” in return for additional payment. While this was galling enough to net neutrality advocates, the proposal rubbed salt into the wound by leaving out the rapidly growing wireless market. The framework also mentioned unspecified “additional online services” that would fall outside its regulation. All in all, the proposal hinted at the idea that some traffic would ultimately receive better, and faster, treatment than other traffic.

So let’s consider what might have happened if this framework had been, even philosophically, applied to Alzheimer’s research. Instead of the unprecedented open sharing of data, we might have seen data from corporations or large universities take precedence over data from smaller, less-funded research groups. Smaller, more agile labs or researchers might have seen their data sit on the back burner. How many ideas would have been lost in the process? How much slower would the research have been if scientists working with “fast lane” data had to wait, or even backtrack, once “slow lane” data arrived? I think it is clear that any prioritization of data, or outside control by a bandwidth provider, could have significantly disrupted the results of the research.

To be clear, most of the Web content in the “fast lanes” of the Google-Verizon framework is far more prosaic than medical research data. It’s not as if getting e-mail or an old episode of “Lost” at high speed is as critical as gaining insight into an inscrutable, brain-ravaging disease. But innovation online takes many forms and has many applications. By putting a premium on one packet of data over another, and especially in the world of wireless, we risk going back to “19th-century thinking” of another sort and losing the openness that has made the Web the agent for positive change that it is today.