A personal blog sharing ideas and observations on start-ups, the VC industry, technology, and life.
Wednesday, June 29, 2005
Alan Morgan - VC Best Practices
I find the most useful VC blogs offer postings that shed light on the process of raising money and/or share best practices on how to run start-ups and effectively manage a board of directors.
Alan Morgan of Mayfield writes a blog that includes a series of very useful posts worth reading.
The first series of posts provides insight into how entrepreneurs should work with VCs during the capital raising process. Click here to read the series - Ten Commandments for Entrepreneurs.
The second post provides advice for start-up CEOs with respect to managing and working with a Board of Directors. Click here to read the post - "Managing" Your Board of Directors.
Tuesday, June 28, 2005
Jack Kilby
Kilby and Noyce independently invented the IC, with Kilby's pioneering work ultimately recognized with a Nobel Prize in 2000.
For anyone in the technology industry, I strongly suggest reading The Chip.
The book is a wonderful account of the two men, their inventions, and the IC's impact on the world.
RIP.
Monday, June 27, 2005
The Cost of Optimism
The Economist recently ran a wonderful piece on the sorry state of project management.
The Standish Group, which analyzes IT projects, reported that in 2004 only 29% of IT projects succeeded, down from 34% in 2002. Cost over-runs from original budgets averaged 56%, and projects on average took 84% more time than originally anticipated.
Put another way, 71% of IT projects did not succeed, and the average project blew through both its budget and its schedule by wide margins. Wow.
Another study examined 210 rail and road projects and found that traffic estimates used to justify the projects (i.e. passenger or car traffic) were overly aggressive by an average of 106%.
Today's papers are rife with horror stories of projects failing - from the FBI's abandoned $170m internal IT project, to EDS' failing Navy contract, to incredible cost overruns and delays in the Pentagon's weapons development programs.
What does all this mean for venture capital and for executive teams?
Venture capitalists fund companies to value creating milestones. The theory is that if objective value milestones are met, the company and insiders will be able to raise a new round of funding at a stepped-up valuation. All too often, however, the cost, time, and effort associated with such milestones are underestimated. Instead of hitting plan, the company runs out of money a quarter or two prior to realizing its objectives. The insiders and management are then faced with the dreaded prospect of a down round or a bridge financing to tide the company through to meeting its original plan.
Why do such smart people, across so many industries, fail to adequately account for two crucial variables in planning - cost and time?
Max Bazerman, an HBS professor and former professor of mine at Kellogg, blames "self-serving bias" - the overly optimistic projections that help win the business and advance careers and agendas.
Think about the LBO business. Most deals are auctions, and the winning bid is often simply the highest bid. In some sense, the only way to win is to put forward the rosiest outlook and forecasts.
Along those lines, I once sat through a McKinsey pitch on private equity firm performance in which McKinsey found that the winning bidder/firm overestimated the target company's first year EBITDA 66% of the time. By overestimating profit performance, the winning bidder justified a very aggressive bid.
This is not good for investors, nor for companies that set overly aggressive goals, fail to realize them, and then have to retrench, rationalize, and regroup.
Project management gurus think in terms of five key stages: initiation, planning, execution, control, and closure.
If we think of start-ups as projects (a popular VC description of young companies) and if start-ups suffer the statistics of the IT industry at large, then 71% will go under, the typical start-up will take 84% longer than anyone thought, and it will burn through 56% more cash than planned before it gets to value creating events.
Another cliche in venture is that execution separates great start-ups from losers. These numbers illustrate why that is the case. If you are great at the initiation phase - idea articulation and business plan creation - but lack the ability to execute and control the project, the outcome will not be good.
These numbers suggest that VC firms that help their portfolio companies optimize execution - operating plan development, sales forecasting and management, engineering project planning, marketing plans, etc - will add tremendous value.
Helping young companies develop the best practices associated not just with coming up with great ideas or products, but also on executing on a budgeted plan that ensures the company comes in on time and on budget with the deliverables in hand will be of immense value.
Start-ups should look for VCs who add value in this very concrete manner. Ask VCs how they provide the tools, systems, and practices that contribute to project success and avoid the long history of project disasters.
Thursday, June 23, 2005
Roku SoundBridge
The device serves as a "bridge" between digital music on the PC and the stereo. Roku's device includes a compact-flash wireless NIC that enables the PC to stream music from iTunes or Windows Media Player to the stereo.
While at first the Roku device had trouble communicating with my Netgear wireless router, after working through a few issues, I am loving the convenience of playing my iTunes library and playlists on the stereo.
The product is a great link between two worlds and allows me to leverage two sets of investments - while I am sure that my stereo will soon go the way of all flesh, I recommend using the Soundbridge in the interim.
One problem with the Roku device is that AAPL's file format prevents the playing of music purchased from the iTunes Music Store - anyone have ideas on how to fix this issue?
Saturday, June 18, 2005
The Value of Virtualization
Utility computing is a hot topic. Companies are spending millions of dollars on VMWare and other virtualization technologies in order to consolidate data center operations and pool IT resource demand. Utility computing and virtualization are reshaping today's data center technologies and operations. Why?
Quite simply, virtualization technologies drive significant improvements in asset utilization by pooling resources across variable demand. By pooling fluctuations in demand, one needs fewer physical resources to meet resource demand. Fewer provisioned resources reduces operating and capital expenses, a good thing:)
The technology industry is borrowing an important operational best practice from supply chain practitioners - the pooling of inventory. Why does this work?
It works because averages are additive, while standard deviations are not - for independent demand streams, the pooled standard deviation is the square root of the sum of the squared standard deviations, which is smaller than the sum of the individual standard deviations. This statistical fact drives the benefits of pooling inventory (or IT assets) across variable demand for those assets.
In operations, supply chain managers pad inventory levels in order to avoid expensive stock-outs and lost sales. The IT equivalent is to overprovision. Supply chain experts leveraged basic statistics to realize that by pooling inventory one can greatly reduce the amount of safety stock required to adequately meet demand.
How? A very simple example shows the power of pooling.
Assume two applications, managed independently, with the following weekly demand characteristics:
Application I
- Mean demand: 50 servers
- Standard deviation: 4 servers
Application II
- Mean demand: 50 servers
- Standard deviation: 3 servers
If we set safety stock equal to two times the standard deviation, application I needs 58 servers to adequately meet demand (i.e. with roughly 95% confidence), while application II requires 56 servers. The data center responsible for both applications carries, in aggregate, 14 servers of "safety stock" above mean demand.
Now if we decide to pool the two applications, we would observe the following:
Consolidated Applications
- Mean demand: 100 servers (averages are additive)
- Standard deviation: 5 servers, or sqrt(4^2 + 3^2)
Pooling the server demand of the two applications allows the enterprise to reduce server safety stock by 4 units, or by 28%.
For a large enterprise, reduced server safety stock translates into large savings.
Theoretically, 4 servers x 1,000 applications x $5,000 per server (estimate) = $20,000,000
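For anyone who wants to play with the arithmetic, here is a minimal Python sketch of the same calculation. The demand figures and the $5,000 per-server cost are simply the illustrative numbers from above, not real data.

from math import sqrt

# Illustrative numbers from the example above - not real data
apps = [
    {"name": "Application I", "mean": 50, "std": 4},
    {"name": "Application II", "mean": 50, "std": 3},
]
Z = 2                    # safety stock = two standard deviations
COST_PER_SERVER = 5_000  # rough per-server estimate used above

# Provision each application independently: mean demand plus its own safety stock
independent_total = sum(a["mean"] + Z * a["std"] for a in apps)   # 58 + 56 = 114
independent_safety = sum(Z * a["std"] for a in apps)              # 8 + 6 = 14

# Pool the applications: means add, variances (squared standard deviations) add
pooled_mean = sum(a["mean"] for a in apps)                        # 100
pooled_std = sqrt(sum(a["std"] ** 2 for a in apps))               # sqrt(16 + 9) = 5
pooled_total = pooled_mean + Z * pooled_std                       # 110
pooled_safety = Z * pooled_std                                    # 10

saved = independent_total - pooled_total                          # 4 servers
print(f"Independent provisioning: {independent_total} servers ({independent_safety} safety stock)")
print(f"Pooled provisioning: {pooled_total:.0f} servers ({pooled_safety:.0f} safety stock)")
print(f"Saved: {saved:.0f} servers, or {saved / independent_safety:.0%} of the safety stock")
print(f"At 1,000 applications: ${saved * 1000 * COST_PER_SERVER:,.0f} in avoided server spend")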
This is an artificial example, but I think it is illustrative of the value of virtualization. It helps me (at least!) tie supply chain management operations theory to high-tech market adoption.
By leveraging enabling technologies to pool spare capacity, and a bit of statistics (averages are additive, but standard deviations grow only by the square root of the sum of the squared standard deviations in question), IT operators can continue to strip capacity and costs out of the data center while continuing to meet demand.
Thursday, June 16, 2005
Morgan Stanley CTO Summit
The summit serves as a testament to why innovation matters and how large companies can effectively partner with start-ups to make a difference.
First, Morgan Stanley's CTO, Guy Chiarello, makes working with start-ups a top priority. He recognizes that he simply cannot wait for his incumbent vendors to deliver on next-generation technology, and he works hard to ensure that Morgan Stanley's competitive advantage is maintained by the identification and adoption of cutting edge technologies.
For example, Guy handed out two innovation awards to companies that embody how technology can help large enterprises grow IT capacity without a correlated growth in head-count or costs. This year's winners were VMWare, which provides processor virtualization technologies, and Avamar, a provider of disk-based, rather than tape-based back-up solutions. Both companies were identified by Morgan Stanley in prior West Coast CTO conferences and since implementation have helped Morgan Stanley increase IT asset utilization and data back-up efficiencies.
Second, he brings his entire senior team to the West coast for the two-day event, no small expense in time, logistics, and coordination. This year he brought out 40 of his best and brightest to meet with emerging start-ups and VCs. The 40 represent the bank's leaders in BI/information management, collaboration and web applications, compute and storage platforms, enterprise and application management, network connectivity, risk and security, and the CTO's office.
Third, his staff candidly explain where they need help and where technology can, on the margin, better help Morgan's IT group deliver on business benefit and advantage.
Fourth, he devotes an entire day to meeting with pre-screened start-up companies that appear to meet one of MS' unmet needs. This year, one of my companies, Klocwork, had the privilege of presenting to Guy and the security group. Klocwork provides source code quality and security analysis and is in a position to help Morgan automate code review and improve the performance and security of its home-grown applications.
Over the last three years, Morgan Stanley has used start-up technologies, such as VMWare, to dramatically increase its compute, storage, and network capacity (in some cases by over 100%) without ANY increase in head-count or operating costs. MS offers the IT world a paradigm for how to leverage commodity hardware (x86) and commodity software (Linux), in combination with cutting edge innovation, to grow IT capacity and drive business advantage without seeing an equivalent growth in cost.
Why does Morgan Stanley send 40 of its smartest guys to the West coast every year?
They recognize that young, start-up technology companies can help large enterprises solve complex business and technology challenges in ways that larger system vendors and software houses simply cannot match. Per my post on the Golden Age of IT, Morgan is embracing two powerful trends (standard hardware and commodity software) in complement with innovative, start-up technologies that help accelerate cost-reduction, efficiency, and capacity.
I left impressed by their commitment to innovation and how well they APPLY innovation to solve real problems.
Guy is a model CTO/CIO and he gives all us in the technology industry hope that the enterprise will continue to recognize and reward technology innovation. Now, who said IT is dead?
Wednesday, June 15, 2005
Firefox Tip
For Firefox users, Brad Feld posted a helpful set of configuration tips to improve Firefox's performance. See below.
Go to the address bar in Firefox and type in "about:config"
Look for the following lines:
- network.http.pipelining = false
- network.http.pipelining.maxrequests = 4
- network.http.proxy.pipelining = false
Change them (by double-clicking each line) to:
- network.http.pipelining = true
- network.http.pipelining.maxrequests = 30
- network.http.proxy.pipelining = true
This configures the browser to make up to 30 requests at once, rather than waiting for a reply to each request before making the next.
Then you need to create one new option:
- Right click anywhere on the page and select New-> Integer.
- Name it "nglayout.initialpaint.delay"
- Set its value to "0".
This value is the amount of time the browser waits before it acts on information it receives. You need to restart Firefox for this to be enabled. On sites that support pipelining (not all do) the results are dramatic.
Thanks Brad for passing on the tip.
Sunday, June 12, 2005
Mark Leslie, VRTS founder, on Sales Life Cycle
The Sales Learning Curve - Optimizing the Path to Positive Cash Flow
By Mark Leslie
There's an old saying about rolling out a successful new product: "It always takes longer and costs more." Many company executives are resigned to this state of affairs, determined to ride out the tough phase of the new product cycle on the path to positive cash flow.
But it doesn't have to be that way. By learning from the mistakes of the past, start-ups can build cost-effective, successful sales teams that burn through a minimal amount of cash on the road to breakeven.
This method of establishing a sales force is called the Sales Learning Curve (SLC). It's a concept adapted from the Manufacturing Learning Curve (MLC), which is widely accepted in the manufacturing sector. The MLC states that the cost to produce the early units of a new product normally is high, but over time, as the production team learns how to optimize manufacturing and wring out costs, volume increases and per-unit product costs decline sharply.
When we apply the MLC to sales, we come to the following conclusion: The time it takes to achieve cash flow breakeven is reasonably independent of sales force staffing. It is, instead, entirely dependent on how well and how quickly the entire organization learns what it takes to sell the product or service while incorporating customer feedback into the product itself. Because the entire organization has to come up to speed, hiring a large initial sales staff does not speed up the time to breakeven, it simply consumes cash more quickly.
Let's look at a case study of what happens when the SLC is not applied. Our model company experiences positive early product revenues from beta customers. Top management, eager to establish an early leadership position in the market, adopts an aggressive approach to sales. The company hires a VP of sales, as well as regional sales managers, systems engineers, inside sales reps and field sales reps. The clear expectation is that this team - often upward of 30 people - will deploy rapidly and efficiently, reaching breakeven within three or four quarters.
Then reality sets in. It takes longer than expected to convince initial customers to buy. The positioning of the product is not quite right, the price needs to be adjusted and product features need to be tweaked. Meanwhile, typical start-up issues, such as opening regional sales offices and establishing lines of command, distract the sales force. The result: This oversized team burns through tons of cash and does not come close to reaching breakeven within the target timeframe.
What happened? Management incorrectly assumed that by simply ramping up the sales force, revenues would automatically increase at the same pace.
If the company had applied the SLC, it would have staffed sales at a much lower and cheaper level, in anticipation of a slower initial sales ramp. This would allow the company to fine-tune the product or service (feature set, ease of use, integration needs, etc.), to hone its sales and marketing processes and to learn from customers about positioning, promotion and pricing, all before deploying a large and expensive sales force.
Rather than starting with a large sales force, a company using the SLC is better served by hiring a small team of sales execs with the analytical skills and patience to lead the company through an iterative learning process that includes the continuous discovery and solution of small but crucial problems.
Let's revisit our model company. Let's say it adheres to the SLC. When should it start expanding the sales force? Keeping in mind that the slope of the SLC varies depending on the product being sold, a good starting place would be to wait until the initial sales team is generating a marginal contribution of two times the cost of a field sales rep. While data points in sales tend to be scarcer than in manufacturing, this is a reliable indicator that you've started to climb the Sales Learning Curve.
By adhering to the Sales Learning Curve model of sales force staffing, you can safely toss out that old adage we started with. It doesn't have to take longer and cost more than you planned.
Mark Leslie is an El Dorado Ventures Technology Partner. He is a managing director of Leslie Ventures and teaches at Stanford University's Graduate School of Business and Graduate Engineering School. From 1990 to 2000, Mark served as Chairman and CEO of Veritas Software and oversaw the growth of the company from start-up mode to $1.2 billion in annual revenues. He is on the boards of Avaya Corp. (NYSE: AV), Metric Stream, Model N, Outerbay, Panta, PostX and Scalix.
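To make Mark's rule of thumb concrete, here is a minimal sketch in Python. All of the figures (rep cost, revenues, cost of sales) are purely hypothetical, and "marginal contribution" is simplified here to revenue minus cost of sales; this is just one way to operationalize the 2x threshold, not a formula from the article.

# Back-of-the-envelope sketch of the Sales Learning Curve staffing rule:
# add field reps only once the existing team's marginal contribution is
# roughly 2x the fully loaded cost of a field sales rep.

COST_PER_REP = 250_000   # hypothetical fully loaded annual cost of one field rep
THRESHOLD = 2            # the 2x rule of thumb from the article

def ready_to_expand(quarterly_revenue, quarterly_cost_of_sales, reps):
    """True when annualized marginal contribution per rep clears the 2x threshold."""
    contribution = quarterly_revenue - quarterly_cost_of_sales  # simplified contribution
    annualized_per_rep = (contribution / reps) * 4
    return annualized_per_rep >= THRESHOLD * COST_PER_REP

# Early on the curve: small team, still learning, thin contribution -> keep learning
print(ready_to_expand(quarterly_revenue=600_000, quarterly_cost_of_sales=450_000, reps=3))   # False
# Later: the same small team is productive enough to justify scaling the force
print(ready_to_expand(quarterly_revenue=1_800_000, quarterly_cost_of_sales=600_000, reps=3)) # True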
Friday, June 10, 2005
After the Term Sheet
FYI.
Click here for article
Thursday, June 09, 2005
IBM: Standards, Customer Alignment, and Ecosystem-Based Competition
The event focused on exposing VCs to IBM's software and systems strategy and detailed a "how to" guide for start-ups to work with IBM. See www.ibm.com/isv
I left IBM impressed by the cogency and power of their strategy, which is predicated on:
- standards-based software and hardware
- open source and open systems (shared specifications)
- ecosystem based competition
- customer solutions
Ideally, IBM's Software unit provides standards-based software that runs on commodity hardware from IBM's Systems group and is assembled by IBM Global Services into vertically relevant solutions. Contrast this with Microsoft's approach of proprietary, closed-source software that creates major switching costs, with no MSFT enterprise delivery arm to ensure that components are effectively built into customer solutions.
Microsoft's interests are increasingly orthogonal to those of its customers, while IBM's commoditization of certain elements of the software stack and embrace of standards continue to align IBM with the best interests of its major customers, who aspire to fungible, easily integrated, standards-based software components. This is a hard riddle for MSFT to solve.
IBM also highlighted the power of ecosystem-based competition. For example, maintaining an enterprise-scale operating system costs $500m per year (according to Paul Horn, IBM Research). By moving to Linux, IBM invests $50m per year (1/10th the cost) and benefits from a cumulative pan-industry investment of $1,000m and 5,000 developers. IBM's embrace of Linux allows it to shift human and financial resources away from low-value commodity functions to higher-value opportunities: the cost of operating system development falls, while investments in new classes of infrastructure software and innovation are funded via an industry/ecosystem pooling of resources and effort.
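To put rough numbers on that leverage, here is a quick sketch using only the illustrative figures cited above (not IBM's actual budget).

# Illustrative figures from Paul Horn's example above, in $m per year
standalone_os_cost = 500     # cost to maintain a proprietary enterprise-scale OS
ibm_linux_spend = 50         # IBM's investment in Linux instead
industry_linux_spend = 1000  # cumulative pan-industry investment in Linux

annual_savings = standalone_os_cost - ibm_linux_spend   # 450 ($m) freed for higher-value work
leverage = industry_linux_spend / ibm_linux_spend        # 20x leverage on IBM's contribution
print(f"Freed up for higher-value work: ${annual_savings}m per year")
print(f"Ecosystem leverage on IBM's investment: {leverage:.0f}x")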
Paul stated, "companies that innovate on top of open standards are advantaged because resources are freed up to higher value work and market opportunities expand as standards proliferate." IBM is focused on raising its collaborate/compete ratio - the greater the community focus on shared resources and advances, the lower the cost of the components that drive composite solutions, the greater the customer value, and ultimately, the greater the profits.
Where does this leave start-ups without a services arm (IBM GS) to drive revenue?
There is a wisdom of crowds with respect to innovations, extensions, documentation, best practices, etc. - we need to ask, how can we leverage the power of crowds in order to increase the value of what we bring to the table?
But there is an attendant danger - IBM is commoditizing the stack to drive the standardization of componentry and drive down the cost of solutions. Fungibility has negative implications for start-ups, but the concept of ecosystem-based competition, rather than company-level competition, is interesting and worth thinking more about.
Standards are the stated rules of engagement, and that reduces friction. We need to think about how we tap into the benefits of ecosystems based on collaboration and openness, while delivering sufficient incremental value to build viable businesses that improve the performance and availability of applications and systems.