Author Archive: Jos Keulers

By Jos Keulers

New Partnership with RNT Rausch

We are proud to announce that our new partner, RNT Rausch, has joined us on our journey to build on NVMe capabilities and offer customers solutions that are simple, scalable, cost-efficient, and integrate with any orchestration system on any server or any cloud. #NVMEoverTCPIP #S3onNVME #Storageonsteroids

RNT Rausch is a Germany-based technology pioneer with more than 20 years of experience in the high-tech server and storage industry. Their mission is to stay ahead of technology trends and rethink future-proof, hybrid server and storage designs that tackle business challenges and make SMBs, enterprises, data centres and service providers around the world fit for tomorrow’s technical revolution. RNT is making IT possible.

By Jos Keulers

Free Your Flash And Disaggregate

Would you like some CPU to go along with your SSDs? 

Ordering a combo meal from your favorite burger joint isn’t all that different from deploying a server with SSDs in the data center. Each server comes with CPUs, DRAM, and SSDs. 

However, with servers, your applications may not have an appetite for all of these other components. A more likely scenario is that at least one of these resources mostly sits idle. In deployments with multiple SSD-based applications, you are leaving money on the table—or leaving food on the plate, if we extend the combo meal metaphor—in the form of unused CPU, DRAM, or SSDs.

These underutilized CPU, DRAM, and SSD resources are difficult to repurpose and become stranded resources, resulting in lost capital and operating expenditures. These purchased (or financed) resources consume power and real estate and require cooling, yet they provide no useful benefit to the application. Reducing or eliminating stranded resources represents significant cost savings for your enterprise.

The major challenge has always been how to deploy CPU, DRAM, and flash resources in just the right quantities. Infrastructure architects employ different techniques to minimize the waste of these underutilized resources.

For instance, some architects use a large number of system configurations with each configuration matching a specific application. Deploying a large number of system configurations, however, comes with significant management and operational overhead, which doesn’t align with the operational efficiency of hyperscale data centers. 

Another technique is to share storage using distributed storage software. But this method can result in a performance penalty when compared with direct attached storage. Many enterprise storage solutions offer high performance, but their cost is prohibitive for scale-out infrastructure.

It is essential to understand why architects go to great lengths to share resources with neighboring servers. This technique is known as disaggregation. Let us specifically focus on SSD disaggregation since technologies to effectively disaggregate SSDs exist today.
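
To make the stranded-resource argument concrete, here is a minimal, illustrative Python sketch. The server configuration and application demands are made-up numbers, not measurements of any real deployment; it simply shows how a single fixed CPU/DRAM/SSD "combo meal" leaves a different resource idle for each workload, and how pooling the flash (for example over NVMe/TCP) lets capacity be provisioned to actual demand.

```python
# Illustrative sketch: why fixed CPU/DRAM/SSD ratios strand resources.
# All numbers (server config, application demands) are hypothetical examples.

SERVER = {"cpu_cores": 32, "dram_gb": 256, "ssd_tb": 32}  # one fixed "combo meal" config

APPS = [
    {"name": "cache",     "cpu_cores": 28, "dram_gb": 240, "ssd_tb": 4},   # CPU/DRAM heavy
    {"name": "object",    "cpu_cores": 8,  "dram_gb": 64,  "ssd_tb": 30},  # capacity heavy
    {"name": "analytics", "cpu_cores": 30, "dram_gb": 128, "ssd_tb": 10},
]

def stranded(server, app):
    """Resources bought with the server but left idle by this application."""
    return {k: server[k] - app[k] for k in server}

total = {k: 0 for k in SERVER}
for app in APPS:
    waste = stranded(SERVER, app)
    total = {k: total[k] + waste[k] for k in total}
    print(f"{app['name']:>10}: stranded {waste}")

print("one-server-per-app strands:", total)

# If the SSDs were disaggregated into a shared pool (e.g. over NVMe/TCP),
# each server could be sized for CPU/DRAM only and flash provisioned to
# actual demand, so the 'ssd_tb' waste above would shrink toward zero.
pooled_ssd_demand = sum(a["ssd_tb"] for a in APPS)
provisioned_ssd = SERVER["ssd_tb"] * len(APPS)
print(f"SSD utilization: {pooled_ssd_demand}/{provisioned_ssd} TB "
      f"({100 * pooled_ssd_demand / provisioned_ssd:.0f}%) before disaggregation")
```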

By Jos Keulers

Edge Computing Has Risks That Can Be Addressed with Computational Storage

A variety of edge computing implementations exist, and each requires careful alignment with network services and infrastructure. I&O leaders responsible for networking will optimize performance and costs, and reduce risks, by following the guidance in this research.

The topology directive of edge computing is not new. However, the demands of the plethora of latency-sensitive devices at remote locations are new. An edge computing approach is intended to reduce latency and minimize bandwidth demands by locating application intelligence, storage and compute closer to endpoints, rather than in a remote server or data center.

Edge computing represents an emerging topology-based computing model that enables and optimizes extreme decentralization. However, it is still synergistic with a centralized core, whether a traditional data center or a hyperscale cloud provider. It places intelligence, storage and other functionality close to the edge, where people and endpoints, such as sensors or monitors in an IoT deployment, produce, analyze or consume information. Gartner has identified five requirements that drive edge computing.

Below you will see our take on their “Five Imperatives Driving Computing toward the Edge” today. Fill out the form to read their unbiased view on these issues, and then reach out to us to learn how we can truly aid your deployments today!

Computational Storage Helps Address Many Issues at the Edge

How can you effectively utilize edge computing and solve the challenges of data, diversity, protection, and location? Read what analysts are saying about the changing infrastructure:

  • Latency is imperative – our Computational Storage products allow latency to be managed more effectively than any existing architecture, especially in edge-constrained environments.
  • Minimize the attack surface by ensuring that edge computing hardware, software, applications, data, and networking have security and self-protection built in.
  • Bandwidth is increasing, yet still clogged – with Computational Storage doing localized processing, the bandwidth needed to extract value from data is reduced significantly.
  • Invest in technologies that automate data management and governance at the edge as much as possible.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from NGD Systems.

By Jos Keulers

Top 2019 enterprise storage arrays win Product of the Year

Winners of the 2019 Products of the Year signal a shift from disk to flash storage arrays. Vast Data takes gold, NGD Systems wins silver and DataDirect Networks captures bronze.

By Jos Keulers

NGD Systems Raises $20 Million in Series C Funding

NGD Systems Raises $20 Million in Series C Funding to accelerate the deployment of the World’s First NVMe Computational Storage Drive.

The latest round includes new investments from MIG Capital and Western Digital Capital Global, Ltd, and enables artificial intelligence and edge computing within computational storage.

By Jos Keulers

Why CDNs Like Computational Storage

When most people think of content delivery networks (CDNs), they think about streaming huge amounts of content to millions of users, with companies like Akamai, Netflix, and Amazon Prime coming to mind. What most people don’t think about in the context of CDNs is computational storage – why would these guys need a technology as “exotic” as in-situ processing? Sure, they have a lot of content – Netflix has nearly 7K titles in its library, while Amazon Prime has almost 20K titles; but at 5GB per title, that is only 35TB for Netflix, and 100TB for Amazon Prime. These aren’t the petabyte sizes that one typically thinks of when discussing computational storage.

So why would computational storage be important to CDNs? Two phrases summarize it all – encryption/Digital Rights Management (DRM), and locality of service. For CDNs that serve up paid content, the user’s ability to access the content must be verified (this is the DRM part), and then the content must be encrypted with a key that is unique to that user’s equipment (computer, tablet, smartphone, set-top box, etc.). When combined with the need to position points of presence (PoPs) in multiple global locations, the cost of this infrastructure (if based on standard servers) can be significant.

Computational storage helps to significantly reduce these costs in a couple of ways. Our ability to search subscriber databases directly on the SSD eliminates the need for expensive database servers, significantly reducing the PoP footprint. Our ability to encrypt content on our computational storage devices also eliminates the servers that typically perform this task. When you consider that six of our 16TB U.2 SSDs could hold the entire Netflix library (with three of those SSDs for redundancy), you can see how this technology could be important to CDNs. If you would like more information on how computational storage can help the content delivery network industry, just contact us at nvmestorage.com.
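
As a rough illustration of those two in-situ tasks, here is a hypothetical Python sketch: an entitlement lookup against a subscriber table held on the drive, followed by a per-device key derivation. The table schema, secret, and HMAC-based derivation are assumptions made for the example; a production DRM pipeline would use its own license service and proper content encryption such as AES.

```python
# Hypothetical sketch of the two CDN tasks described above, as they might run
# on a computational storage device: (1) verify a subscriber's entitlement by
# searching a database stored on the same drive, and (2) derive a per-device
# content key. Names and key-derivation choices are illustrative only.
import hashlib
import hmac
import sqlite3

SERVICE_SECRET = b"example-service-secret"  # placeholder, not a real key

def open_subscriber_db(path=":memory:"):
    """Subscriber DB held on the computational SSD (in-memory here for demo)."""
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS entitlements (user_id TEXT, title_id TEXT)")
    db.execute("INSERT INTO entitlements VALUES ('alice', 'title-001')")
    return db

def is_entitled(db, user_id, title_id):
    """In-situ lookup: runs next to the data instead of on a database server."""
    row = db.execute(
        "SELECT 1 FROM entitlements WHERE user_id = ? AND title_id = ?",
        (user_id, title_id),
    ).fetchone()
    return row is not None

def per_device_key(user_id, device_id, title_id):
    """Derive a key unique to this user's device for this title."""
    msg = f"{user_id}|{device_id}|{title_id}".encode()
    return hmac.new(SERVICE_SECRET, msg, hashlib.sha256).hexdigest()

db = open_subscriber_db()
if is_entitled(db, "alice", "title-001"):
    key = per_device_key("alice", "tablet-42", "title-001")
    print("stream title-001 encrypted with key", key[:16], "...")
else:
    print("access denied")
```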

By Jos Keulers

Amsterdam is blocking new datacenters.

Too much energy consumption and space allocation! What can we do?

Datacenter capacity in and around Amsterdam grew by 20% in 2018, according to the Dutch Data Center Association (DDA), overtaking rival data center locations such as London, Paris and Frankfurt. But that growth comes with a problem: data centers claim too much space and consume too much energy. The Amsterdam Council has therefore decided to impose a temporary building stop on new data centers until new policy is in place to regulate growth.

At present, local governments have no control over new initiatives, and energy suppliers have a legal obligation to provide power. The problem is that the growth of datacenter capacity runs counter to the climate ambitions that governments have adopted in legislation. Regulation will therefore be mandated that enforces sustainable growth with green energy and the delivery of residual heat back to consumer households, in line with those climate ambitions.

For now, the building stop is for one year. The question is what we can do in the meantime, or are we going to sit, wait and do nothing?

If you cannot grow performance and capacity beyond the current floor space, the most sensible thing to do is to reclaim space within the existing datacenters. There are a few practical changes possible that have minor impact on operations but a huge impact on density and efficient use of the currently available floor space.

  1. Use lower-power, larger-capacity NVMe SSDs instead of spinning disks and high-performance, energy-slurping first-generation NVMe SSDs. Already available are 32TB standard NVMe SSDs at less than 12W. Equipping a 24-slot 2U server delivers 768TB of storage capacity in just 2U of rack space at less than ½ watt/TB (see the sketch after this list). NGD Systems is the front runner in delivering the largest-capacity NVMe SSDs at the lowest power consumption rates. No changes required; just install and benefit from low power and large capacities.
  2. Reduce the number of servers, CPUs and RAM, and reduce movement of data, by processing secondary compute tasks, like inference, encryption, authentication and compression, on the NVMe SSD itself. NGD Systems calls this Computational Storage. Simply explained: install an ARM quad-core CPU on every NVMe SSD, and standard Linux applications can run directly on the drive. A 24-slot 2U server can host 96 additional Linux cores that augment the existing server, creating an enormously efficient compute platform at very low power, replacing many unbalanced x86 servers. Change required: look at the application landscape, determine which applications are using too many resources, and migrate them off, one by one.
  3. Disaggregate storage from CPU. There is huge inefficiency in server farms: lots of idle CPU time and unbalanced storage-to-CPU ratios. Eliminating this imbalance is relatively simple and increases storage and CPU utilization and efficiency. Application servers mount exactly the storage volumes they need from a networked storage server over the existing network infrastructure, at the same low latencies as if the NVMe SSD were inside the server chassis. The people at Lightbits Labs have made it their mission to tackle the problem of storage inefficiencies in the datacenter. Run POCs to determine where the improvements are.
  4. The simplest method to reclaim space is to throw away what you are not using anymore, or to move it to where the rent is cheaper and space is widely available. If you know what data you have and what the value of that data is, actions to save, move or delete that data can be put into policies and automated. Komprise has the perfect toolset to analyze, qualify and move data to where it sits best, including to the waste bin. Run a simple pilot and check the cost savings.
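
Here is a quick sanity check of the watt-per-terabyte arithmetic in item 1, as a small Python sketch. The 24-slot, 32TB, 12W figures come from the text above; the first-generation comparison row is an assumed example, not a measured product.

```python
# Back-of-the-envelope check of the watt-per-terabyte claim in item 1.
# The 24-slot / 32TB / 12W figures come from the text; the comparison row
# is an assumed configuration for contrast.

def watts_per_tb(slots, tb_per_drive, watts_per_drive):
    capacity_tb = slots * tb_per_drive
    power_w = slots * watts_per_drive
    return capacity_tb, power_w, power_w / capacity_tb

configs = {
    "large-capacity NVMe (32TB @ 12W)": (24, 32, 12),
    "assumed 1st-gen NVMe (4TB @ 20W)": (24, 4, 20),
}

for name, cfg in configs.items():
    cap, power, wpt = watts_per_tb(*cfg)
    print(f"{name}: {cap}TB in 2U, {power}W total, {wpt:.2f} W/TB")
# 24 x 32TB at 12W -> 768TB and 288W, i.e. 0.375 W/TB, under the half-watt/TB claim.
```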

What is happening in the Amsterdam area today is something we will see more and more, and it will kick off many more initiatives in other areas to regulate data center growth and bring it in line with our climate ambitions. If we start banning polluting diesel cars from our inner cities and taxing them, why could we not have the same discussion about power-slurping old metal boxes full of SATA spinning rust inside data centers? I am pretty sure that regulators will encourage good behavior with permits and discourage bad behavior with taxes in the not-too-distant future.

Maybe it is time to start thinking about Watts/TB instead of $/GB.

By Jos Keulers

NGD Systems Becomes First to Demonstrate Azure IoT Edge using Computational Storage

The world leader in NVMe Computational Storage announced today at the Flash Memory Summit that it has embedded the Azure IoT Edge service directly within its Computational Storage solid-state drives (SSDs), making them the only platform to support the service directly within a storage device.

By Jos Keulers

Lower Network Threat Detection Cost with Computational Storage

Network threat detection systems require huge amounts of DRAM and CPU to analyse streams. By having computational NVMe SSDs pre-process the streams, clean streams can bypass the main CPU and DRAM and move directly to primary storage, while only the dirty streams are sent to DRAM and CPU for deeper inspection. The left-hand side of the figure shows the classical CPU/DRAM-bound architecture, and the right-hand side illustrates computational NVMe SSDs taking over the pre-processing compute tasks.

The clear benefits are:

  • Infinitely scalable analysis buffer using lower-cost NVMe SSDs (as opposed to more expensive DRAM)
  • In-situ processing allows incoming streams to be pre-processed and either sent directly to primary storage (if clean) or sent into main memory for further analysis (if dirty)
  • The architecture allows for system cost reductions by reducing the amount of DRAM needed for the analysis buffer and reducing the x86 CPU cycles required for analysis, which can reduce core count (most of these boxes use tens of cores, so there is a lot of money on the table here)

Computational Storage is a concept that has the power to deliver a huge business benefit: faster results at a lower cost per result. By having a fully functional quad-core 64-bit ARM processor on each of the NVMe SSDs in the server, the SSDs are not only used to store large amounts of data but can also process and analyze data right where it was stored in the first place. The main CPU, GPU and DRAM are used only for very demanding compute tasks, while secondary functions like search, indexing, pre-analytics and encryption are processed inside the NVMe SSD. All the SSDs work together as compute nodes in a distributed compute cluster inside the server chassis.
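
Below is a minimal Python sketch of the clean/dirty routing described above: a cheap screen runs where the stream lands on the computational SSD, clean traffic is written straight to primary storage, and only suspect traffic is escalated to the host CPU and DRAM. The classifier rule and the two sinks are placeholders for illustration, not NGD Systems' actual on-drive software.

```python
# Minimal sketch of the pre-processing split: a lightweight classifier runs on
# the drive's ARM cores, clean traffic goes straight to primary storage, and
# only suspect ("dirty") traffic is handed to the host CPU/DRAM for deep
# inspection. The markers and sinks are placeholders; a real system would plug
# in its own detection engine.

SUSPECT_MARKERS = (b"\x90\x90\x90\x90", b"' OR 1=1", b"/etc/passwd")

def looks_dirty(packet: bytes) -> bool:
    """Cheap on-drive screen; anything it cannot clear goes to the host."""
    return any(marker in packet for marker in SUSPECT_MARKERS)

def route(stream):
    clean_log, dirty_queue = [], []
    for packet in stream:
        if looks_dirty(packet):
            dirty_queue.append(packet)   # escalate to host CPU/DRAM
        else:
            clean_log.append(packet)     # write directly to primary storage
    return clean_log, dirty_queue

sample = [b"GET /index.html", b"SELECT name FROM users WHERE id=1", b"payload ' OR 1=1 --"]
clean, dirty = route(sample)
print(f"{len(clean)} packets stored directly, {len(dirty)} escalated for deep inspection")
```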

With Computational Storage, overall compute performance and storage capacity per node increase significantly, while requiring less equipment, less I/O, less power and less floor space.

The result: faster results at lower processing cost.

By Jos Keulers

A Paradigm Shift in Storage: Move Compute to Where the Data Is

Today’s cloud models are not designed for the vast volume, variety and velocity of data that will be generated by IoT devices. Two years ago I attended a webinar by Andreessen Horowitz senior VC partner Peter Levine, in which he predicted the end of the cloud as we know it. He compared today’s move away from centralized cloud compute toward decentralized edge compute with earlier shifts: from centralized mainframe computing in the ’60s and ’70s, to decentralized mini and client-server computing in the ’80s and ’90s, to centralized, virtualized IT and cloud compute at the beginning of the 21st century, and now to decentralized compute and storage again, driven by IoT.

To give some perspective on the data IoT can produce, a few examples: a commercial airplane generates around 10TB of data for every half hour of flight; an offshore oil rig produces up to 1TB a week; a self-driving vehicle produces 10GB of data per mile. And because these devices rely on real-time corrections based on the data that has been gathered, going back and forth to the cloud is not an option; the latency is simply too high. These billions of new connected devices also represent countless new types of data, using numerous industrial protocols, not all of which are IP. Before this data can be sent to the cloud for storage or analytics, it first needs to be converted to IP. And last but not least, government and industry regulations and privacy concerns may prohibit certain types of IoT data from being stored offsite. So the ideal place to analyze IoT data is as close as possible to the devices that produce and act on that data. This is called fog computing.

What is it? A fog server is basically an extension of the cloud. It stretches the cloud closer to the things that produce the data and that receive instructions to act on IoT data. The systems that gather data, run analytics and send instructions back are called fog nodes. The smaller you can make them, the closer they can be to the IoT devices themselves.

In-situ processing is designed to be at the heart of fog computing. In-situ processing means there is compute power on the drive itself, where the data is stored in the first place; the data never leaves the drive it was originally stored on. Once the data is stored, the drive can run analytics at the exact location where the data resides, without having to transport it off the drive. It can make a local judgement about which data to send back to the cloud for big-data analysis, perform the protocol conversion, and clean the disk by removing stored IoT data once it is no longer useful or valuable.
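
To make that concrete, here is a hypothetical sketch of such a fog-node loop in Python: summarize the IoT data where it is stored, send only the small result upstream over IP, and purge raw records once they have lost their value. The record format, alert threshold, retention window and uplink stub are all assumptions for illustration.

```python
# Hypothetical fog-node loop, following the in-situ steps described above:
# analyze the IoT data where it already sits, send only the distilled result
# upstream over IP, and purge raw records once they are no longer valuable.
import json
import statistics
import time

def summarize(records):
    """Local analytics on the drive: reduce raw samples to a small summary."""
    values = [r["value"] for r in records]
    return {
        "count": len(values),
        "mean": statistics.mean(values),
        "max": max(values),
        "alerts": [r for r in records if r["value"] > 90.0],  # assumed threshold
    }

def send_to_cloud(summary):
    """Stand-in for the IP uplink; only the summary travels, not the raw data."""
    print("uplink:", json.dumps(summary, default=str))

def purge_stale(records, max_age_s=3600):
    """Remove raw IoT data that has outlived its local usefulness."""
    now = time.time()
    return [r for r in records if now - r["ts"] <= max_age_s]

raw = [{"ts": time.time(), "value": v} for v in (71.2, 88.9, 93.4, 70.1)]
send_to_cloud(summarize(raw))      # a few hundred bytes instead of the raw stream
raw = purge_stale(raw)             # reclaim space on the drive
```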

Fog computing gives the cloud the extension it needs to handle the vast amounts of IoT data, and allows only the valuable IoT data assets to be transported over IP back to the cloud for analytics, learning, research and archiving. In-situ processing is the vital component here, since the IoT data does not have to travel, guaranteeing a safe journey in the fog with NVMestorage.com, powered by NGD Systems’ Catalina in-situ processing NVMe technology.