How SDN Can Redefine Whitespace
- Products & Innovation
- 05.04.2017
Software Defined Networking (SDN) is more than another TLA (Three Letter Acronym). It promises rapid deployment and redeployment of networking resources not by the old forklift method, but through software. If you think about the possibilities here, they are pretty amazing. Suppose you could design a data center and its whitespace contents once, then, once everything is installed, perform moves and changes in software. Adds would obviously require more labor, so I'm not covering those in this blog.
Let's assume that we engineer the space using CFD modeling and place equipment based on the room, balancing power and cooling needs for optimal efficiency. This is a complete change from the way many data centers are constructed and filled today. In fact, many data centers are quite the opposite: rows belong to functions and/or departments, and sometimes groups of blade servers are installed that require supplemental cooling. In many cases, the supplemental cooling wouldn't be needed if the load could be scattered around the floor. So, what if… the hardware's location no longer mattered?
CROSSING SILOS
I believe that SDN fans are missing a part of the bigger picture here! This is due in part to some areas within the data center ecosystem not caring about others, and in part to departmental silos that don't foster a spirit of interdepartmental cooperation. For instance, new gear comes in and facilities is the last to know, or server teams reject blade servers with a built-in switch because they can't, or won't, share a box with the networking team. SDN could, however, be a game changer for the facilities, networking and server teams. Storage teams may already be used to some of these advantages if they are using SAN fabrics in their deployments.
SDN could really drive a change in facilities and the management of whitespace with respect to power and cooling. With SDN, as long as the connectivity is there, the location of the equipment is no longer a concern beyond the number of hops a packet must take, and even this can be addressed by centralizing switches for use by the various servers. A limiting factor in top-of-rack deployments (where the switch is located in, and serves, one rack) is the number of hops between the edge, aggregation and core layers that packets may take on their journey from one server to the next. With SDN, more centralized switches can be deployed and the traffic routes become software controlled.
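To make the hop-count argument concrete, here is a toy Python sketch comparing a classic three-tier (edge/aggregation/core) top-of-rack design with a centralized switch. The topology sizes and function names are made up for illustration; real fabrics and oversubscription ratios vary widely.

```python
# Toy hop-count comparison: three-tier top-of-rack vs. a centralized
# switch.  Illustrative only; not modeled on any vendor's fabric.

def tor_hops(rack_a: int, rack_b: int, racks_per_agg: int = 4) -> int:
    """Switch hops for traffic in a classic edge/aggregation/core tree."""
    if rack_a == rack_b:
        return 1                      # same top-of-rack switch
    if rack_a // racks_per_agg == rack_b // racks_per_agg:
        return 3                      # edge -> aggregation -> edge
    return 5                          # edge -> agg -> core -> agg -> edge

def centralized_hops(rack_a: int, rack_b: int) -> int:
    """With a centralized switch, any two servers are one switch apart."""
    return 1

print(tor_hops(0, 9))         # racks under different aggregation switches -> 5
print(centralized_hops(0, 9))  # -> 1
```

The point is not the exact numbers but that in the centralized case the path is constant no matter where on the floor the two servers sit, which is what frees facilities to place gear for power and cooling rather than for network adjacency.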
Say that company A retires a server from the finance department. That server could remain in situ and be allocated to the HR network without any change order beyond a software help ticket. No need to run "what ifs" in DCIM; the location and floor load were balanced at design time. The server team simply asks the networking team to assign the box to the HR network, and the whitespace contents remain unchanged from a physical standpoint.
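The finance-to-HR move above can be sketched as a pure software change. The snippet below is hypothetical: `fabric` stands in for an SDN controller's view of the floor, and `reassign` is a made-up helper, not a real controller API.

```python
# Hypothetical sketch: moving a server between logical networks without
# touching hardware.  "fabric" is an in-memory stand-in for an SDN
# controller's state; all names here are invented for illustration.

fabric = {
    "server-042": {"port": "eth1/7", "network": "finance"},
    "server-043": {"port": "eth1/8", "network": "finance"},
}

def reassign(server: str, new_network: str) -> None:
    """Reassign a server to another logical network in software only."""
    # In a real deployment the controller would push new flow rules here;
    # the cabling, rack position and port stay exactly as designed.
    fabric[server]["network"] = new_network

reassign("server-042", "hr")
print(fabric["server-042"])  # port unchanged, network now "hr"
```

Notice that the port (and therefore the physical whitespace layout) never changes; only the logical membership does, which is exactly the help-ticket-only move the paragraph describes.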
I will admit that full SDN adoption is still a ways off and standards are not fully in place at this point. The concern of vendor lock-in also remains. But one thing is certain: the more centralized the equipment, the greater the possibilities, and the better the control we may gain over our whitespace. Most importantly, companies operating in an open-systems environment will have a huge advantage over proprietary solutions.