This is a cross-post of a blog article written by Gregory Ness, former VP of Marketing for Blue Lane Technologies, who is currently working for Infoblox.
Over the last three decades we’ve watched a meteoric rise in processing power and intelligence in network endpoints and systems drive an incredible series of network innovations; and those innovations have led to the creation of multi-billion dollar network hardware markets. As we watch the global economy shiver and shake we now see signs of the next technology boom: Infrastructure2.0.
Infrastructure1.0: The Multi-billion Dollar Static Network
From the expansion of TCP/IP in the 80s and 90s, through the emergence of network security in the mid-to-late 90s, to the evolution of performance and traffic optimization in the late 90s and early 00s, we’ve watched the net effects of ever-changing software and system demands colliding with static infrastructure. The result has been a renaissance of sorts in the network hardware industry, as enterprises installed successive foundations of specialized gear dedicated to the secure and efficient transport of an ever-increasing population of packets, protocols and services. That was and is Infrastructure1.0.
Infrastructure1.0 made companies like Cisco, Juniper/NetScreen, F5 Networks and more recently Riverbed very successful. It established and maintained the connectivity between ever-increasing global populations of increasingly powerful network-attached devices. Its impact on productivity and commerce is comparable to that of oceanic shipping, paved roads and railroads, electricity and air travel. It has shifted wealth and accelerated activities on a level that perhaps has no historical precedent.
I talked about the similar potential economic impacts of cloud computing in June, comparing its future role to the shipment of spices across Asia and the Middle East before the rise of oceanic shipping. One of the key enablers of cloud computing is virtualization. And our early experiences with data center virtualization have taught us plenty about the potential impact of clouds on static infrastructure. Some of these impacts will be felt on the network and others within the cloudplexes.
The market caps of Cisco, Juniper, F5, Riverbed and others will be impacted by how well they can adapt to the new dynamic demands challenging the static network.
Virtualization: The Beginning of the End of Static Infrastructure
The biggest threat to the world of multi-billion dollar Infrastructure1.0 players is neither the threat of a protracted global recession nor the emergence of a robust population of hackers threatening increasingly lucrative endpoints. The biggest threat to the static world of Infrastructure1.0 is the promise of even higher factors of change and complexity on the way as systems and endpoints continue to evolve.
More fluid and powerful systems and endpoints will require either more network intelligence or even higher enterprise spending on network management.
This became especially apparent when VMware, Microsoft, Citrix and others in virtualization announced their plans to move their offerings into production data centers and endpoints. At that point the static infrastructure world was put on notice that its habitat of static endpoints was on its way into the history books. I blogged about this (sort of) at Always On in February 2007 when making a point about the difficulties inherent in static network security keeping up with mobile VMs.
The sudden emergence of virtualization security marked the beginning of an even greater realization that the static infrastructure built over three decades was unprepared for supporting dynamic systems. The worlds of systems and networks were colliding again and driving new demands that would enable new solution categories.
The new chasm between static infrastructure and software now disconnected from hardware is much broader than virtsec, and it will ultimately drive the emergence of a more dynamic and resilient network, empowered by continued application layer innovations and the integration of static infrastructure with enhanced management and connectivity intelligence.
As Google, Microsoft, Amazon and others push the envelope with massive virtualization-enabled cloudplexes revitalizing small town economies (and whoever else rides the clouds), they will continue to pressure the world of Infrastructure1.0. More sophisticated systems will require more intelligent networks. That simple premise is the biggest threat today to network infrastructure players.
The market capitalizations of Cisco, Juniper, F5 and Riverbed will ultimately be tied to their ability to service more dynamic endpoints, from mobile PCs to virtualized data centers and cloudplexes. Thus far, the jury is still out about the nature and implications of various partnership announcements between 1.0 players and virtualization players.
As enterprises scale their networks to new heights they are already seeing evidence of the stresses and strains between static infrastructure and more dynamic endpoint requirements. A recent Computerworld Research Report on core network services already shows larger networks paying a higher price (per IP address) for management. Back in grad school we called that a diseconomy of scale; today in the networked world I think it would be one of the four horsemen of Infrastructure1.0 obsolescence. Those who cannot adapt will lose.
Virtsec as Metaphor for the New Age
Earlier this year VMware announced VMsafe at VMworld in Cannes. Yet at the recent VMworld conference mere months later the virtsec buzz was noticeably absent. The inability of the VMsafe partners to deliver on the promise of virtualization security was a major buzz killer and I think it may be yet another harbinger of things to come for all network infrastructure players. This issue is infinitely larger than virtsec.
I suspect that the VMsafe gap between expectations and reality drove production virtualization into small hypervisor VLAN pockets, limiting the payoff of production virtualization and, I think, impacting VMware’s data center growth expectations. That gap was based on the technical limitations of Infrastructure1.0 more than any other factor. It also didn’t help the 1.0 players grow their markets by addressing these new demands. The result was a slowdown in production virtualization, a huge potential catalyst for IT, with new economies of scale and potential.
The appliances that have been deployed across the last thirty years simply were not architected to look inside servers (for other servers) or dynamically keep up with fluid meshes of hypervisors powering servers on and off on demand and moving them around with mouse clicks.
Enterprises already incurring diseconomies of scale today will face sheer terror when trying to manage and secure the dynamic environments of tomorrow. Rising management costs will further compromise the economics of static network infrastructure.
The virtsec dilemma was clearly a case of static netsec meeting dynamic software capable of moving across security zones or changing states. There are more dilemmas on the way. Take the familiar chart of application demands outpacing static infrastructure over the last three decades, add cloud and virtualization in the upper right, and kink the demands line up even higher.
If you take a step back and look at the last thirty years you’ll see a series of big bang effects from TCP/IP and application demand collisions. As we look forward five years into a haze of economic uncertainty, maybe it’s a proper time to take heed that the new demands of movement and change posed by virtualization and cloud computing need to be addressed sooner rather than later.
If these demands are not addressed, more enterprise networks will face diseconomies of scale as TCP/IP proliferates. They’ll experience additional availability and security challenges and will emerge when the haze clears at a competitive disadvantage after years of overpaying for fundamental things like IP address management (or IPAM). Most enterprises today are still managing IP addresses with manual updates and spreadsheets and paying the price, according to Computerworld research. How will that support increasing rates of change?
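To make the spreadsheet problem concrete, here is a minimal sketch (my own illustration, not from the article or any vendor product) of even rudimentary programmatic IPAM: Python’s standard `ipaddress` module can audit an allocation table for duplicate or out-of-subnet addresses automatically, the kind of check that manual spreadsheets routinely miss.

```python
import ipaddress

# Hypothetical allocation table, the kind often kept in a spreadsheet.
allocations = {
    "web-01": "10.0.1.10",
    "web-02": "10.0.1.11",
    "db-01": "10.0.1.10",   # duplicate entry, easy to miss by hand
}

subnet = ipaddress.ip_network("10.0.1.0/24")

def audit(allocs, net):
    """Return (hosts outside the subnet, duplicate address pairs)."""
    seen, dupes, outside = {}, [], []
    for host, addr in allocs.items():
        ip = ipaddress.ip_address(addr)
        if ip not in net:
            outside.append(host)
        if ip in seen:
            dupes.append((seen[ip], host))
        seen[ip] = host
    return outside, dupes

outside, dupes = audit(allocations, subnet)
```

At spreadsheet scale this is trivial; the point is that once addressing data lives in a queryable system rather than a document, checks like this run continuously instead of never.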
The Emergence of Connectivity Intelligence
As I mentioned, one of the biggest challenges of virtsec was the inability of network appliances to see VMs and keep track of them as they move around inside a virtualized blade server environment: racks and stacks of powerful commodity servers deployed in a fluid pool that can add or remove servers and VMs on short notice, and therefore operate with less power than a conventional data center, where each server runs a unique application or OS and has to be powered 24/7.
The static infrastructure was not architected to keep up with these new levels of change and complexity without a new layer of connectivity intelligence, delivering dynamic information between endpoint instances and everything from Ethernet switches and firewalls to application front ends. Empowered with dynamic feedback, the existing deployed infrastructure can evolve into an even more responsive, resilient and flexible network and deliver new economies of scale.
A dynamic infrastructure would empower a new level of synergy between new endpoint and system initiatives (consolidation, compliance, mobility, virtualization, cloud) and open new markets for existing and emerging infrastructure players. Cisco, Juniper, F5 Networks, Riverbed and others who benefited from the evolving collisions between TCP/IP and applications could then benefit from the rise of virtualization and enterprise and service provider versions of cloud, versus watching it from the sidelines.
The Rise of Core Net Service Automation
That connectivity intelligence requirement will make core network service automation (DNS, DHCP and IPAM, for example) strategic to Infrastructure2.0. Most of these services are still manually managed today, which means networks and systems are connected and adjusted by hand. More changes will mean more costs, more downtime and less budget for static infrastructure.
These networks need dynamic reachability (addressing and naming) and visibility (status and location) capabilities. In essence, I’m advocating the evolution of a central nervous system for the network capable of delivering commands and feedback between endpoints, systems and infrastructure; at the core it would be a kind of digital positioning system (DPS) that would enable access, policy, enforcement and flexibility without the need for ongoing and tedious manual intervention.
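As a toy illustration of the idea (the names and structure here are mine, not a description of any shipping product), a digital positioning system is at minimum a registry that keeps name, address, location and status current as endpoints move, so that switches, firewalls and application front ends can query live state instead of relying on stale static configuration:

```python
import time

class EndpointRegistry:
    """Toy 'digital positioning system': tracks name -> address, host, status."""

    def __init__(self):
        self.records = {}

    def announce(self, name, address, host):
        # An endpoint (or its hypervisor) reports where a workload lives now.
        self.records[name] = {"address": address, "host": host,
                              "status": "up", "updated": time.time()}

    def migrate(self, name, new_host):
        # A VM moved; its reachability info updates automatically,
        # rather than being re-entered by hand in each appliance.
        self.records[name]["host"] = new_host
        self.records[name]["updated"] = time.time()

    def locate(self, name):
        return self.records.get(name)

registry = EndpointRegistry()
registry.announce("app-vm-7", "10.0.2.42", "blade-3")
registry.migrate("app-vm-7", "blade-9")   # live migration to another blade
```

In a real deployment this role is played by dynamic DNS and DHCP with programmatic APIs (RFC 2136-style dynamic updates, for instance); the sketch only shows the shape of the feedback loop being advocated.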
In between recent emails with Rick Kagan and Stuart Bailey (both also at Infoblox), Stuart recommended Peter Morville’s “Ambient Findability”. I soon found out why. The following is from the online Amazon review:
“The book’s central thesis is that information literacy, information architecture, and usability are all critical components of this new world order. Hand in hand with that is the contention that only by planning and designing the best possible software, devices, and Internet, will we be able to maintain this connectivity in the future.”
In a recessionary scenario these labor-intensive strains will get worse as budgets and resources are trimmed. Rising TCO for infrastructure will impact the success of the infrastructure players as well as VMware, Microsoft and others, just as virtsec friction has already impacted VMware. The virtualization players will be forced to build or acquire application layer and connectivity intelligence as a means of survival. They may not wait for the static team to convert to a more fluid vision.
That is why the fates of the static infrastructure players (and IT) will be increasingly tied to their ability to make their solutions more intelligent, dynamic and resilient. Without added intelligence today’s network players will benefit less and less from ongoing innovations that show no sign of slowing; the impacts of a recession would be made even more severe.