The need for a predictable network in the data center

What is hyperconvergence?

Legacy data centers, originally designed to support conventional client-server applications, can’t meet the increased agility, scalability and price-performance requirements of today’s virtualized IT environments. Conventional data centers composed of independent compute, storage and networking platforms are costly to operate and scale, and difficult to administer and automate.

Most legacy data centers are made up of distinct servers and storage arrays, with separate Ethernet LANs for data traffic and storage area networks (SANs) for storage traffic. Each technology platform (servers, storage arrays, L2 switches, SAN switches) consumes power, cooling and rack space, and has its own management interface and low-level APIs. Rolling out a new application is a manually intensive, error-prone process involving a number of different people, devices and management systems. Turning up new IT services can take days or weeks and involve multiple operations teams. Troubleshooting problems or orchestrating services can be just as difficult.

Many IT organizations are looking to hyperconverged integrated systems (HCIS) to reduce infrastructure cost and complexity. In a Gartner survey, 74% of respondents indicated they are using, piloting or actively researching HCIS solutions. Next-generation HCIS solutions from vendors like Nutanix and SimpliVity pack compute and storage resources into compact x86 building blocks that are fully virtualized and uniformly administered. Hyperconverged systems contain CAPEX by collapsing compute and storage silos and eliminating SANs, and they contain OPEX by reducing IT sprawl and lowering recurring power, cooling, rack space and administrative expenses. By consolidating and unifying the compute and storage domains, HCIS solutions can reduce TCO, accelerate time-to-value and simplify operations. But many enterprises fail to consider the networking implications of HCIS: current data center networking constraints can hinder IT service agility, impair the performance of contemporary applications and hamper HCIS initiatives.

Re-architecting Data Center Networks for HCIS Implementations

Hyperconverged systems consolidate diverse applications, workloads and traffic types onto common infrastructure, introducing network engineering challenges for IT planners. Conventional data center networks designed to support siloed IT environments aren’t well suited for carrying diverse HCIS traffic. Most are simply overprovisioned to support peak traffic demands, an inefficient, application-agnostic approach that squanders bandwidth and budget. And because all workloads are treated equally, a data-intensive or bursty application can monopolize network capacity, impairing the performance of other applications (the so-called noisy neighbor problem).
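
To make the noisy-neighbor point concrete, here is a minimal, purely illustrative Python sketch; the traffic classes, demand figures and weights are our own assumptions, not numbers from Gartner or any vendor. It contrasts an unmanaged link, where the burstiest flow grabs capacity first, with a simple weighted-fair policy that guarantees each traffic class a share.

```python
# Illustrative only: a toy model of the "noisy neighbor" effect on a shared
# 10 Gb/s link. The traffic classes, demands and weights below are our own
# assumptions, not measurements from any HCIS deployment.

LINK_CAPACITY_GBPS = 10.0

# Hypothetical per-class demand during a burst (Gb/s).
demand = {"storage_replication": 9.0, "vm_mobility": 2.0, "user_traffic": 1.5}

def unmanaged(demand, capacity):
    """No traffic classes: the burstiest flow grabs capacity first."""
    alloc, remaining = {}, capacity
    for name, want in sorted(demand.items(), key=lambda kv: -kv[1]):
        alloc[name] = min(want, remaining)
        remaining -= alloc[name]
    return alloc

def weighted_fair(demand, capacity, weights):
    """Each class is guaranteed its weighted share of the link."""
    total = sum(weights.values())
    return {name: min(demand[name], capacity * weights[name] / total)
            for name in demand}

weights = {"storage_replication": 5, "vm_mobility": 2, "user_traffic": 3}

print("Unmanaged:    ", unmanaged(demand, LINK_CAPACITY_GBPS))
print("Weighted fair:", weighted_fair(demand, LINK_CAPACITY_GBPS, weights))
```

In the unmanaged case the storage burst leaves user traffic with nothing, while the weighted-fair policy preserves a predictable floor for every class; that predictability is exactly what an HCIS network has to provide.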

As Gartner (1) points out: “Mixing user access, node-to-node application traffic, VM mobility, storage access and back-end storage traffic on a single network can lead to unpredictable performance, availability and data integrity.”

Conclusion

When implementing HCIS solutions, IT planners must reset their old practices and re-architect data center networks to ensure adequate performance and service quality for all applications.

 

(1) Gartner’s “Leverage Networking to Ensure Your Hyperconverged Integrated Systems Can Support Demanding Workloads”

(2) This article is inspired by a joint Gartner/Plexxi paper.

The financial industry contained an average of 52 open source vulnerabilities per application. London, we have a problem!

 

For many years, ADTmag has been reporting on open source security issues, and two studies within the past couple of weeks demonstrate there is a problem. Last week, it reported on a study conducted by German researchers that linked open source software vulnerabilities to developers copying source code from flawed online tutorials and pasting it into open source applications.

The financial industry contained an average of 52 open source vulnerabilities per application, while 60 percent of the applications contained high-risk vulnerabilities, the company said. It added: “The retail and e-commerce industry had the highest proportion of applications with high-risk open source vulnerabilities, with 83 percent of audited applications containing high-risk vulnerabilities.”


Our view: open source is a very valuable approach, and vulnerabilities will keep appearing, so you’d better have a new strategy for protecting your information and data. It’s time to gain real visibility into the “intents” of attackers; that is the key.

This is why we’re embracing technologies such as http://www.empownetworks.com

source: https://adtmag.com/articles/2017/05/01/black-duck-audits.aspx

Will we rebuild 2-tier architectures for the Cloud to deal with latency?

 

This article highlights an important question that will pop up soon. Maybe we’ll reinvent what came 30 years ago: a 2-tier architecture with front-end and back-end servers, with the Cloud as the back-end.

IT is a wheel: from mainframes to minis, from minis to PCs with back-end servers, then two-tier architectures, distributed systems, cloud... and now?

 

 

source: http://formtek.com/blog/edge-computing-cloud-latency-cant-support-real-time-iot-and-ai-apps/

Edge Computing: Cloud Latency Can’t Support Real-Time IoT and AI Apps

By Dick Weisinger

Is the Cloud the ultimate solution?  Many of the benefits of the cloud can be alluring. Security has long been a pain point, but cloud security is increasingly less of an issue, and security has been steadily improving.

One remaining issue with the cloud that won’t go away anytime soon is latency. Cloud apps are slow compared to apps that run on local machines. Network latency, the time it takes data to travel from devices to the data center and back, is a problem for any application that needs immediate feedback.
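
As a rough back-of-envelope illustration (the distances, processing time and 10 ms budget below are our own assumptions, not figures from the article), the sketch compares the round-trip delay to a distant cloud region with that to a nearby edge site:

```python
# Back-of-envelope latency budget; every figure here is an illustrative assumption.
SPEED_IN_FIBER_KM_PER_MS = 200.0  # light covers roughly 200 km per millisecond in fiber

def round_trip_ms(distance_km, processing_ms=1.0):
    """Propagation delay there and back, plus an assumed fixed processing time."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS + processing_ms

BUDGET_MS = 10.0  # hypothetical budget for a real-time control loop (e.g. robotics)

for name, distance_km in [("distant cloud region", 2000), ("nearby edge site", 50)]:
    rtt = round_trip_ms(distance_km)
    verdict = "within" if rtt <= BUDGET_MS else "exceeds"
    print(f"{name}: ~{rtt:.1f} ms round trip, {verdict} the {BUDGET_MS:.0f} ms budget")
```

Even before queueing and serialization delays are counted, the long-haul round trip alone blows a tight real-time budget, while the nearby edge site stays comfortably inside it.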

Peter Levine, general partner of Andreessen Horowitz, told the Wall Street Journal that “I have this theory that cloud computing goes away in the not-too-distant future. I believe there is now a shift, a return to the edge where computing gets done much more at the edge than in the cloud. The Internet of Everything, specifically if you think about self-driving cars, is a great example of this. A self-driving car is a data center on wheels. And it has 200-plus central processing units. It’s got to do all its computation at the endpoint and only pass back to the cloud, if it’s connected at all, important bits of curated information.”

Deepu Talla, VP and General Manager of Mobile at Nvidia, said that “by 2020, there will be 1 billion cameras in the world doing public safety and streaming data. There’s not enough upstream bandwidth available to send all this to the cloud. Latency becomes an issue in robotics and self-driving cars, applications in which decisions have to be made with lightning speed. Privacy, of course, is easier to protect when data isn’t moving around. And availability of the cloud is an issue in many parts of the world where communications are limited. We will see AI transferring to the edge.”

Thomas Bittman, analyst at Gartner, wrote that “there’s a sharp left turn coming ahead, where we need to expand our thinking beyond centralization and cloud, and toward location and distributed processing for low-latency and real-time processing. Customer experience won’t simply be defined by a web site experience. The cloud will have its role, but the edge is coming, and it’s going to be big.”

IT infrastructure should be designed to understand “intent”. Only intent? Not just that: add real time!

For networking, that means understanding what resources the application intends to require.

For security, what the hacker intends to do.

And not yesterday: now!
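
As a purely hypothetical sketch of what declaring such an intent might look like (the field names and the toy controller are our own invention, not any vendor’s API), an application could publish the resources it intends to require and the infrastructure could check, in real time, whether it can still honor them:

```python
# Hypothetical illustration only: these field names and this tiny "controller"
# are our own invention, not any vendor's intent-based API.
from dataclasses import dataclass

@dataclass
class AppIntent:
    name: str
    min_bandwidth_mbps: int   # what the application intends to require
    max_latency_ms: float     # the service level it expects

def reconcile(intents, link_capacity_mbps):
    """Toy real-time check: do the declared intents still fit the infrastructure?
    The same idea applies to security, where the 'intent' being modeled is the attacker's."""
    needed = sum(i.min_bandwidth_mbps for i in intents)
    return "ok" if needed <= link_capacity_mbps else "re-plan needed"

apps = [AppIntent("billing", 500, 20.0), AppIntent("analytics", 2000, 100.0)]
print(reconcile(apps, link_capacity_mbps=10_000))   # -> "ok"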

Sharp article from MIT Sloan Management Review: https://goo.gl/jgTpgn

The future of IT includes these keywords: “intent”, “abstraction”, “real-time” and, indeed, applications. The rest is just technology?

Disruptive ‘Cyber Trends’ At RSA

Extract from Forbes Magazine on the RSA event:


Disruption #3: ‘Software-Defined’ Cybersecurity

Cybersecurity has also joined the Software-Defined Everything (SDX) movement. If we can represent our entire cybersecurity deployment as a software-based model, the reasoning goes, then we have better control, visibility, and flexibility.

empow leverages software-defined techniques to implement an abstraction and orchestration layer on top of a range of disparate enterprise security tools.


 

We are proud to see empow listed here!

Is the Telecom industry getting distracted?

Quite an interesting article: the MWC Review by Telecoms.com.

Extract: “..Overall, this year’s show demonstrated one thing to us; the industry is going through an identity crisis. Telcos don’t know how to take on the challenges to the status quo, and despite numerous statements billed around digital transformation, the reality is that there has been little progress made to realize the potential of the digital economy.

So yes, VR is exciting and so are autonomous vehicles, and the products on show demonstrated there is certainly life left in the telco industry, but let’s not forget the basics. The business model needs to fundamentally change, and there’s only so long confused executives can hide behind the exciting distractions…”

Disappointing?


At AnotherTrail, we believe this industry needs to focus on software services, helping enterprises get the best of the digital world, transforming their business and increasing agility. That includes infrastructure made virtual (e.g. SD-WAN), but there are many other examples.

Software services require an on-demand API architecture that facilitates access to a community of services coming from small and large industry players. In the end, this is just a new generation of “content” made accessible thanks to telcos.

Let’s see who will be the fastest players here!


Automation needs more and more Intelligence

At AnotherTrail, we see the need for, and the value of, automation across the application, system, storage and network abstraction layers. This is the future of IT.

We increasingly see the requirement to add three key capabilities:

  • An intelligence that can anticipate situations and use cases requiring changes to, or tuning of, the infrastructure. Yes, this includes machine-learning techniques and workflows of scenarios
  • Correlating events from the different elements at the network and security levels in order to detect “strange” behaviors that could indicate an intent to break in or attack (a minimal sketch follows this list)
  • The capacity to check compliance visually on a regular basis and discover differences. This is what we call the “Polaroïd™ of the virtual world”
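
As a minimal sketch of the second point, event correlation, here is an illustrative Python fragment; the event sources, fields and the three-event threshold are assumptions of ours, not a description of any product:

```python
# Illustrative sketch of correlating network and security events to spot an
# attack "intent"; the event sources, fields and threshold are all assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

events = [
    {"src": "10.0.0.5", "tool": "firewall", "type": "port_scan",   "ts": datetime(2017, 5, 1, 10, 0, 0)},
    {"src": "10.0.0.5", "tool": "ids",      "type": "brute_force", "ts": datetime(2017, 5, 1, 10, 0, 30)},
    {"src": "10.0.0.5", "tool": "netflow",  "type": "data_exfil",  "ts": datetime(2017, 5, 1, 10, 1, 0)},
    {"src": "10.0.0.9", "tool": "firewall", "type": "port_scan",   "ts": datetime(2017, 5, 1, 11, 0, 0)},
]

WINDOW = timedelta(minutes=5)      # correlation window
SUSPICIOUS_AFTER = 3               # arbitrary: 3+ distinct event types from one host

def correlate(events):
    """Group events per source host; flag hosts whose combined activity across
    network and security tools looks like more than an isolated incident."""
    by_host = defaultdict(list)
    for e in sorted(events, key=lambda e: e["ts"]):
        by_host[e["src"]].append(e)
    flagged = []
    for host, evts in by_host.items():
        first_seen = evts[0]["ts"]
        types_in_window = {e["type"] for e in evts if e["ts"] - first_seen <= WINDOW}
        if len(types_in_window) >= SUSPICIOUS_AFTER:
            flagged.append(host)
    return flagged

print("Hosts to investigate:", correlate(events))   # -> ['10.0.0.5']
```

No single tool sees enough on its own; it is the combination of events across layers, within a short window, that reveals the intent.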

Contact us to share views on this!