The Need for a Predictable Network in the Data Center

What Is Hyperconvergence?

Legacy data centers originally designed to support conventional client-server applications can’t meet the increased agility, scalability and price-performance requirements of today’s virtualized IT environments. Conventional data centers composed of independent compute, storage and networking platforms are costly to operate and scale, and difficult to administer and automate.

Most legacy data centers are made up of distinct servers and storage arrays with separate Ethernet LANs for data traffic and storage area networks (SANs) for storage traffic. Each technology platform (servers, storage arrays, L2 switches, SAN switches) consumes power, cooling and rack space, and supports a unique management interface and low-level APIs. Rolling out a new application is a manually intensive, error-prone process involving a number of different people, devices and management systems. Turning up new IT services can take days or weeks and involve multiple operations teams. Troubleshooting problems or orchestrating services can be just as difficult.

Many IT organizations are looking to hyperconverged integrated systems (HCIS) to reduce infrastructure cost and complexity. In a Gartner survey, 74% of respondents indicated they are using, piloting or actively researching HCIS solutions. Next-generation HCIS solutions from vendors like Nutanix and SimpliVity pack compute and storage resources into compact x86 building blocks that are fully virtualized and uniformly administered. Hyperconverged systems contain CAPEX by collapsing compute and storage silos and eliminating SANs. And they contain OPEX by reducing IT sprawl and lowering recurring power, cooling, rack space and administrative expenses. By consolidating and unifying the compute and storage domains, HCIS solutions can reduce TCO, accelerate time-to-value and simplify operations. But many enterprises fail to consider the networking implications of HCIS. Current data center networking constraints can hinder IT service agility, impair the performance of contemporary applications and hamper HCIS initiatives.

Re-architecting Data Center Networks for HCIS Implementations

Hyperconverged systems consolidate diverse applications, workloads and traffic types onto common infrastructure, introducing network engineering challenges for IT planners. Conventional data center networks designed to support siloed IT environments aren’t well suited for carrying diverse HCIS traffic. Most are simply overprovisioned to support peak traffic demands—an inefficient, application-agnostic approach that squanders bandwidth and budget. And because all workloads are treated equally, a data-intensive or bursty application can monopolize network capacity, impairing the performance of other applications (the so-called noisy neighbor problem).
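
To make the noisy-neighbor point concrete, here is a toy token-bucket policer in Python. It is purely illustrative: real data centers enforce this kind of per-class policing in switch hardware (or with Linux tc), and the class names and rates below are invented for the example.

```python
import time

class TokenBucket:
    """Toy token-bucket policer: admit traffic up to `rate` bytes/sec,
    with bursts up to `burst` bytes; everything beyond that is policed."""

    def __init__(self, rate, burst):
        self.rate = rate            # sustained allocation, bytes/sec
        self.burst = burst          # bucket depth, bytes
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True             # within this class's allocation
        return False                # a noisy neighbor: drop or defer

# Hypothetical traffic classes and rates, invented for the example.
policers = {
    "storage_replication": TokenBucket(rate=500e6, burst=1e6),
    "vm_mobility":         TokenBucket(rate=250e6, burst=1e6),
    "user_access":         TokenBucket(rate=100e6, burst=256e3),
}

def admit(traffic_class, nbytes):
    """Gate traffic by its class instead of treating all workloads equally."""
    return policers[traffic_class].allow(nbytes)
```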

As Gartner (1) points out, “Mixing user access, node-to-node application traffic, VM mobility, storage access and back-end storage traffic on a single network can lead to unpredictable performance, availability and data integrity.”

Conclusion

When implementing HCIS solutions, IT planners must reset their old practices and re-architect data center networks to ensure adequate performance and service quality for all applications.

 

(1) Gartner’s “Leverage Networking to Ensure Your Hyperconverged Integrated Systems Can Support Demanding Workloads”

(2) This article is inspired by a joint Gartner/Plexxi paper.

The financial industry contained an average of 52 open source vulnerabilities per application. London, we have a problem!

 

For many years, ADTmag has been reporting on open source security issues, and two studies within the past couple of weeks demonstrate there is a problem: last week, the site reported on a study conducted by German researchers that linked open source software vulnerabilities to developers copying source code from flawed online tutorials and pasting it into open source applications.

The financial industry contained an average of 52 open source vulnerabilities per application, while 60 percent of the applications contained high-risk vulnerabilities, Black Duck said. It added: “The retail and e-commerce industry had the highest proportion of applications with high-risk open source vulnerabilities, with 83 percent of audited applications containing high-risk vulnerabilities.”
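
Black Duck’s audit tooling is proprietary, but you can get a feel for this kind of dependency auditing with a short sketch against the public OSV vulnerability database (https://osv.dev), a present-day analogue we’re substituting here for illustration; the package and version queried are hypothetical pins.

```python
import json
import urllib.request

def known_vulns(name, version, ecosystem="PyPI"):
    """Query the public OSV database for known vulnerabilities
    affecting one open source package version."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

# Hypothetical example: audit a single old dependency pin.
for v in known_vulns("jinja2", "2.4.1"):
    print(v["id"], "-", v.get("summary", "(no summary)"))
```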


Our view: open source is a very valuable approach, and vulnerabilities will keep appearing, so you had better have a new strategy for protecting your information and data. It’s time to gain real visibility into the “intents” of attackers; that is the key.

This is why we’re embracing technologies such as http://www.empownetworks.com

source: https://adtmag.com/articles/2017/05/01/black-duck-audits.aspx

Will We Rebuild Two-Tier Architectures for the Cloud to Deal with Latency?

 

This article highlights an important question that will pop up soon. Maybe we’ll reinvent what came 30 years ago: a two-tier architecture with front-end and back-end servers, with the cloud as the back end?

IT is a wheel: from mainframe to mini, from mini to PCs with back-end servers (two-tier architectures), then distributed systems, the cloud… and now?


source: http://formtek.com/blog/edge-computing-cloud-latency-cant-support-real-time-iot-and-ai-apps/

Edge Computing: Cloud Latency Can’t Support Real-Time IoT and AI Apps

By Dick Weisinger

Is the cloud the ultimate solution? Many of the benefits of the cloud can be alluring. Security has long been a pain point, but it has been steadily improving and is increasingly less of an issue.

One remaining issue with the cloud that won’t go away anytime soon is latency. Cloud apps are slow compared to apps that run on local machines. Network latency, the time it takes data to travel from devices to the data center and back, is a problem for any application that needs immediate feedback.
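
A quick back-of-the-envelope calculation shows why distance alone dooms real-time apps in a faraway cloud. All figures below are rough assumptions (light in fiber covers roughly 200 km per millisecond), not measurements:

```python
# Back-of-the-envelope latency floor; all figures are illustrative assumptions.
C_FIBER_KM_PER_MS = 200          # light in fiber travels ~200 km per millisecond

def rtt_floor_ms(distance_km, processing_ms=0.0):
    """Minimum round trip imposed by propagation alone, plus any fixed
    processing delay; real networks add queuing and serialization on top."""
    return 2 * distance_km / C_FIBER_KM_PER_MS + processing_ms

# A control loop that must react in ~10 ms (a vehicle at highway speed moves
# ~0.3 m in that time) cannot tolerate a distant data center:
for distance_km in (1, 100, 1000, 5000):
    print(f"{distance_km:>5} km away -> >= {rtt_floor_ms(distance_km):.2f} ms round trip")
```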

Peter Levine, general partner of Andreessen Horowitz, told the Wall Street Journal that “I have this theory that cloud computing goes away in the not-too-distant future. I believe there is now a shift, a return to the edge where computing gets done much more at the edge than in the cloud. The Internet of Everything, specifically if you think about self-driving cars, is a great example of this. A self-driving car is a data center on wheels. And it has 200-plus central processing units. It’s got to do all its computation at the endpoint and only pass back to the cloud, if it’s connected at all, important bits of curated information.”

Deepu Talla, VP and General Manager of Mobile at Nvidia, said that “by 2020, there will be 1 billion cameras in the world doing public safety and streaming data. There’s not enough upstream bandwidth available to send all this to the cloud. Latency becomes an issue in robotics and self-driving cars, applications in which decisions have to be made with lightning speed. Privacy, of course, is easier to protect when data isn’t moving around. And availability of the cloud is an issue in many parts of the world where communications are limited. We will see AI transferring to the edge.”

Thomas Bittman, analyst at Gartner, wrote that “there’s a sharp left turn coming ahead, where we need to expand our thinking beyond centralization and cloud, and toward location and distributed processing for low-latency and real-time processing. Customer experience won’t simply be defined by a web site experience. The cloud will have its role, but the edge is coming, and it’s going to be big.”

Overlapping Multi-tenant Networks

Compute, storage and network are the three basic IaaS elements of cloud computing, administered by orchestrators like OpenStack, CloudStack and VMware, and delivered as an elastic service to public or private clients.

On top of this IaaS, your clients require a rich set of application services, be they local or delivered via third-party SaaS providers.

What we have seen

As cloud services expand, you see more and more overlapping multi-tenant networks, with the possibility of multiple orchestrators: multiple instances of, say, OpenStack, or combinations of different ones.

Fold into this the dynamic nature of the service, the compute platform and the application delivery, and you have a major network administration headache. The primary reason is that traditional IP address management platforms are silo- or orchestrator-based, so managing the whole IP address range becomes a time-consuming task.
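
A first step out of the silo problem is simply detecting where tenant prefixes collide across orchestrators. Here is a minimal sketch using Python’s standard ipaddress module; the tenant names and prefixes are invented for illustration.

```python
from ipaddress import ip_network
from itertools import combinations

# Hypothetical inventory pulled from several orchestrators; the owners
# and prefixes are invented for illustration.
allocations = [
    ("openstack-1/tenant-a", "10.0.0.0/16"),
    ("openstack-2/tenant-b", "10.0.128.0/17"),
    ("cloudstack/tenant-c",  "172.16.0.0/12"),
    ("vmware/tenant-d",      "10.1.0.0/16"),
]

def find_overlaps(allocs):
    """Return every pair of tenant prefixes that overlap."""
    nets = [(owner, ip_network(cidr)) for owner, cidr in allocs]
    return [
        (a_owner, a_net, b_owner, b_net)
        for (a_owner, a_net), (b_owner, b_net) in combinations(nets, 2)
        if a_net.overlaps(b_net)
    ]

for a_owner, a_net, b_owner, b_net in find_overlaps(allocations):
    print(f"overlap: {a_owner} {a_net} <-> {b_owner} {b_net}")
```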

Effects

In the last six months, AnotherTrail has seen significant cost implications for cloud service providers caused by:

  1. Overlapping multi-tenant networks
  2. Service expansion/acquisition
  3. Third-party SaaS service access
  4. Multiple orchestrators
  5. The need to accommodate virtual user-ID naming conventions
  6. Lack of an associated CMDB


When will we stop the nightmare of passwords?

Interesting article:

http://www.wired.co.uk/article/biggest-hacks-2016?utm_content=bufferdafb3&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer

The real question: we’re in 2016, soon 2017, and we still use passwords. The issue isn’t about using more passwords; the issue is that we all need to write those passwords down and store them in a notepad.

Who will trigger a real move to using only the mobile phone (a truly private object) as the way to authenticate?
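
For a feel of what phone-based authentication does under the hood, here is a minimal sketch of TOTP (RFC 6238), the mechanism behind most authenticator apps: the phone and the server share a secret and derive a short-lived code from the current time. The secret below is just a placeholder.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, period=30, digits=6):
    """RFC 6238 time-based one-time password: HMAC-SHA1 over the current
    30-second time step, dynamically truncated to a short decimal code."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // period)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Placeholder shared secret; in practice it is provisioned once, e.g. via a QR code.
print(totp("JBSWY3DPEHPK3PXP"))
```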

Our friends at http://www.nexims.com have a neat solution. Watch them!

Get rid of all your passwords!