Play Chess!

Monday, April 25, 2011

Place your bets HERE! What are the odds I make another blog post this year?

For the record, I'm a believer in prediction markets. I have a winning bet on a pair of Steelers Super Bowl tickets to prove it. Prediction markets' power remains largely unrealized. Google had the potential to capitalize on the phenomenon, yet there are some critical factors for success that Google didn't meet to make GPM (Google Prediction Markets) a viable product on a mass scale. The factors REQUIRED for successful prediction markets are:

1. Statistically significant volume.
2. "Liquid" market.
3. Meaningful financial incentives (increasing marginal utility).
4. Low barriers to trade / complexity.
5. Market manipulation controls.

In addition to the necessary conditions above, an internal champion to drive product strategy / marketing is the sufficient condition for making the product's future viable.

Let's take it one assumption at a time.
1. Statistically significant volume.
- 48% of volume by 13 traders
- 3-month trade volume (450k shares) is less than .00001% of Dow Jones DAILY volume
- even with low volume, accuracy was impressive (degrees-of-freedom requirements comfortably met)

2. Liquidity.
- no downside or loss to play; a real futures market is a ZERO-sum game. GPM was not designed to be zero sum, but instead encouraged trading with a $1K incentive for the top volume trader. (Didn't Wall Street have the same moral hazard with encouraging volume trades?)
- computer bots provide the market-clearing function! They identify and eliminate arbitrage opportunities quickly (a toy sketch of that check follows this list). I thought computer trades were adding to risk / volatility on the real exchanges?

3. Low utility payout.
- $1K is meaningless to GOOG millionaires

4. Low barriers / complexity.
- rolled out too many bets (95 markets / quarter)
- bets needed to be dumbed down; the scope of the bets was very narrow (how many people could competently bet on the success of individual internal projects?)

5. Market manipulation.
- because of the low payout utility and the high concentration of volume in a handful of traders, the GPM is subject to manipulation by a single player or a small group
- lack of regulation / exchange rules
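Back on the liquidity point: the market-clearing role those bots play is easy to picture with a toy example. This is a minimal sketch of my own (not Google's actual bot code), assuming a simple binary market where each winning contract pays $1:

```python
# Toy sketch (my own illustration, not Google's bot): in a binary prediction
# market, the YES and NO contracts on the same event should sum to the full
# payout. If they don't, a market-clearing bot can buy both sides and lock in
# a riskless profit, pushing prices back into line.

PAYOUT = 1.00  # each winning contract pays $1


def arbitrage_profit(yes_ask: float, no_ask: float) -> float:
    """Riskless profit per contract pair from buying both sides, if any."""
    cost = yes_ask + no_ask
    return PAYOUT - cost if cost < PAYOUT else 0.0


if __name__ == "__main__":
    # Hypothetical quotes: YES offered at $0.55, NO offered at $0.40.
    print(f"Riskless profit per pair: ${arbitrage_profit(0.55, 0.40):.2f}")  # $0.05
```

The point is that even a dumb bot running this check constantly will keep complementary prices consistent, which is exactly the "liquidity" function a thin internal market otherwise lacks.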

Finally, the suggestions to "launch" the product were nearly all internally focused: "internal emails," "bigger prizes," or a "party." Only one was arguably forward-looking: integrating with social networks.

For now, Google has missed the boat on this. A second chance would be a bet that the talent that produced the beta software remains largely intact, and it could only be sealed with a product champion inside the "fast"-growing organization. The success of any software project is highly dependent on retaining the people who originally created the intellectual property.

There are legal risks to mention as well, especially with the secondary markets these exchanges create. FirstDibz / OptionIT encountered those first hand, which forced the shutdown of those exchanges. These markets are largely unregulated and present high risks to their "credibility." TicketReserve is taking a platform / hosting approach (like eBay).

I have more than a few practical examples of this. I'll save my thoughts on application to my football pool for class.

Monday, April 18, 2011

Crowdsourcing Capitalism

Both outsourcing and "crowdsourcing" offer the same proposition: reducing the cost of one of the core components of Capitalism - the "free" movement of labor (the other components being the free movement of capital and resources). The demand for cheaper labor directly depends on the cost of access to that labor. Neither of these phenomena is new nor particularly surprising; the incentives are baked into the profit equation. What IS stunning is the rate at which technology has broken barriers and increased the supply of skilled labor - something that has historically been expensive to procure (SETI hoodwinks aside).

One could probably correlate the rate of technological innovation with the cost of any of these components, from the wheel lowering the cost of transportation in reaching new markets/distribution to the telephone lowering communication costs in all facets of life. In particular, this week's case highlights the role of the internet in dramatically increasing labor supply in functions as disparate as professional photography, basic R&D and TV show development. While it does have amazing potential to redefine markets and harness the power of "collective" knowledge, I believe it's a corollary to the Economic Calculation Problem and a derivative of the Market Clearing Price.

In fact, price discovery in the most basic sense is "crowdsourcing" - the collective knowledge of what a good or service is "worth" is determined by many people over a broad range of the market. Sure, this can be distorted across markets, products and regions, but not for long. The impact of the internet simply shortens the time prices can be "distorted." This can be measured by the variability of prices across markets, which should decrease with each new technology (diminishing marginal price variability - note this does NOT impact price volatility, but price variability across markets). These new technologies expand consumers' and producers' access to information about the market - including the growing supply of labor that the Computer Age is connecting with unheralded speed. Similarly, the investment in and growth of the internet is also dependent on the profit motive. "Crowdsourcing" may be a new buzzword, but it's far from novel. Price discovery will always be a play between consumer, producer and market "distortions" - anything that mitigates those distortions increases the effectiveness of delivering goods to consumers and labor resources to worthy endeavors.
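To be precise about the distinction I'm drawing, here is a minimal sketch (illustrative numbers only, not data from the case) of cross-market price variability - the spread of the same good's price across markets at one point in time - as opposed to volatility, which is movement over time within a single market:

```python
# "Variability" = spread of the SAME good's price ACROSS markets at one time.
# "Volatility"  = movement of a price OVER time within one market.
from statistics import mean, pstdev

# Hypothetical prices for one good quoted in four regional markets,
# sampled before and after cheap information flow connects them.
before_internet = [10.0, 12.5, 9.0, 14.0]   # wide cross-market spread
after_internet  = [10.8, 11.0, 10.6, 11.2]  # spread narrows as information flows


def cross_market_variability(prices):
    """Standard deviation of prices across markets, relative to the mean."""
    return pstdev(prices) / mean(prices)


print(f"before: {cross_market_variability(before_internet):.3f}")
print(f"after:  {cross_market_variability(after_internet):.3f}")
# The average price can still be volatile over time even while the
# cross-market spread (what I'm calling variability) collapses.
```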

The more important questions are:

1. Can the rate of progress be sustained? What are the key factors enabling this progress - political, socio-economic? How can those factors be reinforced?

2. Can the knowledge of the crowds be wrong? It was once true that the "Crowd" believed the Earth was flat and the center of the Universe. This wasn't simply laymen believing it; it was the specialized science community's "fact" at the time. Science is NOT consensus, nor is consensus Science. (see SETI/Aliens above)

3. What are the risks to relying on the "crowd"? In what markets does relying on the crowd work well? In what markets is it deficient?

4. All data is not equal. Surely there is more power to leverage if consumers made public their credit card / financial transactions. Identifying fraud and theft would be easier if you could mash up phone / credit / tax records. But who wants to pay the cost of the intrusion on privacy? There are limits to this phenomenon.

These are just a few broad questions. I suspect Ehab has not gotten this far - will find out tomorrow.

Monday, April 11, 2011

Facebook Thoughts

I actually read the FB case last week by accident. I was a bit disappointed that the question posted this week was on LinkedIn rather than FB. I had jotted down some thoughts about the case and didn't want to lose them.

1 - FB's low ad impression rate (50x lower than Google's)

2 - Who does FB Connect create most value for? Users? Connected sites? Facebook?

3 - Stunning that MySpace had technical hurdles around the area of privacy.
I did some poking and found that the backend was run on MySQL at the time. FB is also run on MySQL. An interesting overview of their technical operating paradigms can be viewed here. At 4:30 he starts going into back-end design "philosophy" (simple joins, secondary indexes, 100-millisecond response times), and at the 12-minute mark he talks about the "Long Tail" of database rows. At Oracle we've designed both simple and complex security models that scale on both MySQL and Oracle RDBMS. These designs are typically no more than 3 to 4 relational tables and can even be denormalized for scalable, highly efficient queries. I think Oracle could use some of FB's bold goals, while FB could use some of Oracle's technical talent.
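To make the point about small, denormalized security models concrete, here is a minimal sketch of my own (not Facebook's or Oracle's actual schema): the allowed (viewer, item) pairs are flattened into a single access table with a secondary index, so the hot-path privacy check is one indexed lookup with no joins.

```python
# Hypothetical 3-table privacy model, denormalized for the read path.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE users  (user_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE items  (item_id INTEGER PRIMARY KEY, owner_id INTEGER, body TEXT);
-- Denormalized access list: one row per (viewer, item) that is allowed.
CREATE TABLE access (viewer_id INTEGER, item_id INTEGER);
CREATE INDEX idx_access ON access (viewer_id, item_id);  -- secondary index
""")
db.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])
db.execute("INSERT INTO items VALUES (1, 1, 'hello friends')")
db.execute("INSERT INTO access VALUES (2, 1)")  # bob may view alice's item 1


def can_view(viewer_id: int, item_id: int) -> bool:
    """Single indexed lookup on the flattened access table - no joins."""
    row = db.execute(
        "SELECT 1 FROM access WHERE viewer_id = ? AND item_id = ?",
        (viewer_id, item_id),
    ).fetchone()
    return row is not None


print(can_view(2, 1))  # True
print(can_view(1, 2))  # False - no grant exists
```

The trade-off is write amplification (every privacy change fans out into many access rows), which is exactly the kind of "simple joins, secondary indexes" discipline the talk describes.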

4 - a hodge podge of niche competitive sites (Dogster.com, etc)

5 - Immediate recognition of unpopular features (Zuckerberg's apologies worded "We screwed up.")

6 - MSFT's investment overvalued the company at $15B. MSFT had a vested interest in an overvalued FB to the tune of $240M. Interesting that the valuation went from $15B in October 2007 to $10B in the summer of 2009, a drop of a third.

7 - FB's desire for a unified computing platform is at odds with the Darwinian concept of best-of-breed applications. This desire has been persistent throughout the industry, but it is practically impossible. Even the storied dominance of the Wintel platform is considered by many a thing of the past.

8 - Facebook Connect gets it "right" with an Opt-In model. I wish the No Call List (among many others) would default to the same.

9 - FBML - when you have your own "dialect" on the web, you know you've hit it big. http://developers.facebook.com/docs/reference/fbml/

10 - Twitter's actual function (enabling service updates from any cell phone) is a low-barrier-to-entry technology

11 - The 94% of 800K users who didn't like the FB redesign represented 1.1% of the user base.

12 - What is the "active" user base? No exhibit shows the distribution of usage as a histogram.

LinkedIn or LockedOut?

LinkedIn recently hit 100M users. If you were in charge at LinkedIn, what would be your strategic goals for the next several years? How would you achieve them?

At the time of the case, Social (SNS) and Professional Networks (PNS) were a fast-growing internet phenomenon. However, in this increasingly crowded and fluid space, PNS sites faced a strategic question: adopt social "features" or remain in the "Walled Garden" mode to which LinkedIn very much adhered. Should LinkedIn mimic popular social value propositions, or should it keep its value in the fact that it was a differentiated network?

LinkedIn was a site for "professionals" - limited use on the weekends, better demographics than WSJ, Forbes and BusinessWeek. More importantly, only 2% of Facebook users had LinkedIn accounts (as opposed to 41% of LinkedIn users with FB accounts). Clearly, if LinkedIn opens up its model it risks: 1. dumbing down its user base demographics and 2. becoming just another social network (YASN).

It's important to note that a "significant" portion of LinkedIn's revenue is from ads, and marketers pay top dollar for high-end demographics. Abandoning the walled garden for broad ad revenue has not proven successful elsewhere: Facebook's ad model was so poor it achieved only 2% of Google's click rate.

LinkedIn should try to maintain itself as "different" from simple entertainment/SNS. This is its core value proposition. Changing it would require a significant change in culture. Given the lack of urgency in the organization to address this problem, any radical change in direction would offer limited chances for success.

The bottom line is that being in a "walled garden" isn't necessarily being completely "LockedOut." Walled gardens were successful elsewhere (Apple), though the product/service is quite different. The near-term strategic goal should be to maintain the focus on growing the current demographic. Longer term, LinkedIn should continue to look for ways to keep its value distinct from the oncoming competition from (mainly) SNS.

The best way to achieve this would be to try to incorporate the vertical PNS (MedicalMingle, CIOZone). They could tightly control the process of platform integration by being very selective about which other PNS could join their ecosystem. Each additional PNS signed up would generate positive network effects. They were already dabbling in "opening" their platform and had the infrastructure in place to change the mission of only a small part of the organization devoted to exploring new platforms. This gives them avenues for future growth, represents only an incremental change to their processes, and most importantly consolidates the PNS market.

What LinkedIn should really do is LockOut the competition.

Tuesday, April 5, 2011

The marginal quality of "Collective Wisdom" vs the marginal cost of "Authoritative Knowledge"

How do Wikipedia’s processes for creating and modifying articles ever lead to high-quality results? In other words, since anyone can easily edit Wikipedia, how is it that good (and usually accurate) content emerges?

The case doesn't present a proper mechanism to quantify a reasonably accurate answer to the question posited. Anecdotal evidence of error rates in one study on a minute portion of the site's articles (42 of over 3 million) is not statistically significant. Still, the idea that collective wisdom has power - that perhaps Wikipedia has developed a "quality" process greater than traditional authoritative hierarchies - cannot be discounted. They say that journalism represents a "first draft of history." The more important question here is: has Wikipedia designed a process that represents a "continual draft" of history?

To be sure, the model has advantages over the traditional storage of mankind's knowledge. 1. It can't be easily lost (e.g. the Library of Alexandria). 2. It's less likely to be biased by the original writers of "history" (i.e. "the winners"). 3. It can be corrected quickly, cheaply and continually. There are other major benefits as well. The trade-offs of accuracy vs. the cost and speed of value delivered to the reader cannot be ignored.

The marginally lower quality (if the error rates are in fact borne out by more studies) can be easily mitigated. Some articles' rate of change (velocity) is bound to be lower than others'. A simple visual cue to discern the difference could easily add value for readers. Perhaps a "green" color for articles about less contentious subjects, like the Periodic Table or the Arabic Numeral System (which is really Hindu). Presumably, these would be less volatile than the recent Sendai Earthquake in Japan (pun intended), which could be colored "red." In the former, the debates are limited to highly technical matters. In the latter, the information is more fluid and dynamic. Does that mean it's not worth capturing? In fact, some subjects are much more prone to embellishment than others. Subjective topics such as music, movies, books, etc. will always be based more on opinion than "fact." Eventually mistakes will be weeded out and reflected in a reduced rate of article change. Quantifying that, in effect, "qualifies" the article.
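To make the color-cue idea concrete, here is a toy sketch (my own invention, not a Wikipedia feature) that turns recent edit counts into a "velocity" and maps it to a stability color. The thresholds and edit histories are made up for illustration:

```python
# Hypothetical "article stability" cue based on recent edit velocity.
from datetime import datetime, timedelta


def edit_velocity(edit_timestamps, window_days=30):
    """Edits per day over the trailing window."""
    cutoff = datetime.now() - timedelta(days=window_days)
    recent = [t for t in edit_timestamps if t >= cutoff]
    return len(recent) / window_days


def stability_color(velocity, low=0.2, high=2.0):
    """Green = settled, yellow = active, red = fluid/contested."""
    if velocity < low:
        return "green"
    if velocity < high:
        return "yellow"
    return "red"


# Made-up edit histories: a settled article vs. a breaking-news one.
periodic_table = [datetime.now() - timedelta(days=d) for d in (3, 17, 29)]
sendai_quake = [datetime.now() - timedelta(hours=h) for h in range(0, 240, 2)]

print(stability_color(edit_velocity(periodic_table)))  # "green"
print(stability_color(edit_velocity(sendai_quake)))    # "red"
```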

The allusion to a "Bolshevik Soviet" system smacked of an overly socialist bias that marked this article as lesser quality. The evidence of an equally brutal, bureaucratic system was weak. I've been an infrequent editor of Wikipedia over the past 5 years. Though my contributions are mostly frivolous, my non-scientific test was to see how long it took for egregious edits to be removed (in MOST cases VERY quickly). I have learned that if a user is only making stupid edits, those are easy to identify and quickly remove with scanning technology. SO I've adapted and occasionally make some meaningful edits (mostly deleting irrelevant info). Find me a system that someone ISN'T trying to game, be it overly academic or free for the "common man" to alter.

If we stick to the anecdotal approach to answering the question posited, I'll present the following evidence. I've run a football pool that directly measures "collective" intelligence both BEFORE and DURING the entire NFL season. I've done this for over 15 years and have over 23,000 data points over that span. In one game, players pick every football game for the entire season before a single ball is kicked. The "collective wisdom" of those picks is not only accurate before a ball is kicked; it gets better as the season progresses, with injuries and poor performances apparently insignificant. It's not always right, and it doesn't beat the "best" player or win the pool. But it's usually better than 80% of the "NFL experts" and has an impressive track record for a crude, limited version of a "crowd-sourcing" model.
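For the curious, here is roughly how the "collective wisdom" pick gets computed - a minimal sketch with made-up entrant picks (the real pool has far more games and players): the crowd's pick for each game is simply the majority vote across all entries, scored against the actual results.

```python
# Toy version of the pool's crowd-consensus scoring.
from collections import Counter

# Each entrant picks a winner for each game before the season starts.
picks = {
    "game1": ["Steelers", "Steelers", "Browns", "Steelers", "Steelers"],
    "game2": ["Packers", "Bears", "Packers", "Packers", "Bears"],
}
results = {"game1": "Steelers", "game2": "Packers"}


def consensus_pick(game_picks):
    """Majority vote across entrants for one game."""
    return Counter(game_picks).most_common(1)[0][0]


correct = sum(consensus_pick(p) == results[g] for g, p in picks.items())
print(f"Crowd picked {correct} of {len(picks)} games correctly")
```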

There's a larger irony here to point out. The insulting arrogance of the Ivory Tower (those PhDs who produced 12 articles in 18 months on Nupedia) is rooted in the simple danger that the transparency of Wikipedia marginalizes their expensively acquired efforts. A distributed knowledge-creation system may just be more efficient than "traditional" higher-education systems (no, it doesn't replace the need for doctorates; it just diminishes it and is better suited for some categories, not all). Given this phenomenon, the cost / benefit of higher education is a critically re-framed question for those most heavily vested in the process. OF COURSE they will complain about marginal quality over marginal cost. Anything that challenges the unsustainable increases in tuition costs represents a direct threat to their way of life. They should be worried that Zuckerberg dropped out of Harvard. More worried that there is no monetization model for Wikipedia. The marginal knowledge added by this select, overly qualified community is diminishing. The data in social networks is much more powerful. But that's a question for next week's case. :)

Monday, March 28, 2011

KickStarter.com

Found this cool new site: KickStarter.com, which helps budding entrepreneurs raise $ for projects. Typically these are small operations (less than $100K) that would have difficulty obtaining financing elsewhere. Funding can come from anyone willing to donate to the project (usually in return for a product or goodie). The funding doesn't kick in unless a funding hurdle is met in an allotted amount of time, thereby reducing the chances that your $ is wasted on projects that never get "off the ground."
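The hurdle mechanism is simple enough to sketch. This is a toy illustration of the all-or-nothing rule as I understand it, not Kickstarter's actual code; the goal, pledge amounts and dates are made up:

```python
# Hypothetical all-or-nothing funding rule: pledges are only collected if the
# goal is met before the deadline; otherwise no money changes hands.
from datetime import date


def amount_to_collect(goal, pledges, deadline, today):
    """Total pledges if the project is funded in time, else 0."""
    total = sum(pledges)
    funded = total >= goal and today <= deadline
    return total if funded else 0


pledges = [25, 50, 100, 10, 40]  # hypothetical backer pledges in dollars
print(amount_to_collect(200, pledges, date(2011, 5, 1), date(2011, 4, 20)))  # 225
print(amount_to_collect(500, pledges, date(2011, 5, 1), date(2011, 4, 20)))  # 0
```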

This is definitely a platform with positive network effects. There are a few projects of mine I've had trouble getting off the ground, so perhaps you'll see one or two of them on there soon. Though some of the projects are of limited commercial value and scope (not every project is going to be the source of a new company), some are just for fun: http://www.kickstarter.com/projects/andrewhyde/monster-records-laser-cut-vinyl-record-puzzles?ref=spotlight