“Big Memory Computing Is an Overhaul of Memory Infrastructure and Consists of DRAM, Persistent Memory, and Memory Virtualization Software Working Together.”
Hi Charles, please tell us about your role and the team / technology you handle at MemVerge.
I am the CEO and a cofounder of MemVerge. So, my role stretches across the company and our technology, which is memory virtualization software.
How did your role evolve through the pandemic months? How did your previous experiences with technology management help you scale your efforts and meet unprecedented challenges?
“Necessity is the mother of invention” is a well-known proverb, and it definitely applied to the evolution of our business during COVID. None of us had previous experience with a pandemic, but if my previous experience in technology management taught me anything, it was to adapt quickly to change. No one could have predicted in 2019 that we would all work apart, hire different types of people, and close sales on a new class of memory infrastructure…all on Zoom calls. From past experience we knew that a strong bond among team members is critical to shared success. One of the things we did to overcome the constraints of the pandemic was Zoom “happy hours,” where we could have drinks together, play games, and talk about things other than work.
We have heard so much about Big Data! What is Big Memory technology? Could you tell us how Big Memory technology expands application needs?
Since DRAM was invented in 1969, server memory has been expensive, scarce, and volatile. Big Memory Computing is an overhaul of memory infrastructure and consists of DRAM, persistent memory, and memory virtualization software working together. Once DRAM and persistent memory are virtualized, a pool of software-defined memory is formed that offers a fantastic combination of lower cost, higher capacity, and higher availability.
The next-gen memory infrastructure is needed by a new generation of apps that use Big Data, but unlike previous apps that could take hours or days to analyze the data, they must deliver results in real-time. A few examples of apps that use “big and fast” data today are fraud detection and risk analysis in financial services, genomics in life sciences, recommendation engines in retail, and facial recognition in social media. In the next decade, we expect mainstream business apps to incorporate AI/ML which will drive larger data sets making the need for Big Memory pervasive. In fact, we believe that someday all apps will run in memory.
What is the future of CXL fabric in the AI ML era? How does MemVerge transform the overall data storage and security management practices?
CXL is a new server interconnect with an open architecture supporting CPUs, GPUs, and DPUs, as well as different types of memory in a switched CXL fabric. What stands out for the AI/ML era is that CXL fabrics enable peta-scale memory configurations. Before CXL fabrics, nanosecond memory performance could only scale inside a server to terabytes. CXL extends nanosecond performance to a few racks that can house petabytes of memory. This will allow massive data sets to move into memory, which will accelerate AI/ML jobs by orders of magnitude. With heterogeneous processors and memory, virtualization software such as Memory Machine from MemVerge will be needed to provision capacity and to manage the performance, security, and availability of this massive pool of memory.
Which industries are most likely to benefit from adopting your CXL services? Could you tell us about the AI and Machine Learning features included in your platform?
We expect Big Memory and CXL to benefit most AI/ML applications, which are known for large data sets and multi-stage pipelines. These apps load data and execute code at each stage. If data is loaded from disk in milliseconds or from SSD storage in microseconds, that’s 1,000,000x to 1,000x slower than loading data from memory in nanoseconds. Similarly, if all the data can’t fit into memory, the application is slowed by access to disk or SSD. One MemVerge customer with a multi-stage single-cell sequencing analytics pipeline cut its overall job time by 60% by snapshotting intermediate data to persistent memory and executing with all data in memory.
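The latency ratios quoted above can be checked with a quick back-of-the-envelope calculation. The figures below are the order-of-magnitude latencies from the interview (milliseconds, microseconds, nanoseconds), not measurements of any specific hardware:

```python
# Order-of-magnitude access latencies, in seconds (illustrative, not measured).
DISK_LATENCY_S = 1e-3  # spinning disk: ~milliseconds
SSD_LATENCY_S = 1e-6   # SSD: ~microseconds
DRAM_LATENCY_S = 1e-9  # DRAM: ~nanoseconds

def slowdown_vs_memory(latency_s: float) -> float:
    """How many times slower a medium is than DRAM."""
    return latency_s / DRAM_LATENCY_S

print(f"disk vs memory: {slowdown_vs_memory(DISK_LATENCY_S):,.0f}x")
print(f"ssd  vs memory: {slowdown_vs_memory(SSD_LATENCY_S):,.0f}x")
```

Dividing out the units reproduces the interview’s numbers: disk is about 1,000,000x slower than DRAM, and SSD about 1,000x slower, which is why a pipeline that keeps all of its data in memory can skip such large per-stage load costs.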
If we were to evaluate modern Big Memory technology maturity trends, which industries have been leading in the adoption of these capabilities? How do you help slower-moving industries keep pace with memory and storage upgrades?
The industries with massive data sets and applications that need to deliver instant results are leading the adoption of Big Memory. Those industries include the trading, banking, risk analysis, and fraud segments of the financial services industries; retail which thrives on recommendation engines; the media and entertainment industry that increasingly relies on animation and visual effects applications; and the life sciences industry racing to develop life-saving vaccines.
Tell us how hiring trends in the data storage industry would further evolve in the innovation sector? Which domains are you most excited about?
Ten years ago, the industry was faced with hiring to blend the best of disk-based storage system technology with new flash technology. Today, we’re faced with blending the best of all-flash technology with Big Memory, cloud-native, and AI/ML technology. Of course, we are most excited about innovation in the Big Memory sector, because we are only at the beginning of an epic migration, which we are leading.
Your advice for young technology professionals eagerly eyeing the IT networking and data management market:
Think Cloud and Think Memory. These will be the two most important forces in IT for the next decade.
Tag a person from the industry whose answers you would like to see here:
Pat Gelsinger, CEO of Intel
Thank you, Charles! That was fun and we hope to see you back on itechnologyseries.com soon.
[To participate in our interview series, please write to us at email@example.com]
Charles Fan is co-founder and CEO of MemVerge. Prior to MemVerge, Charles was the CTO of Cheetah Mobile leading its global technology teams, and an SVP/GM at VMware, founding the storage business unit that developed the Virtual SAN product. Charles also worked at EMC and was the founder of the EMC China R&D Center. Charles joined EMC via the acquisition of Rainfinity, where he was a co-founder and CTO.
Charles received his Ph.D. and M.S. in Electrical Engineering from the California Institute of Technology, and his B.E. in Electrical Engineering from the Cooper Union.
In 2017, Intel released a new Optane SSD product. Under the covers of this new SSD was 3D XPoint, a new persistent memory media. In the history of computing, “memory” and “storage” have always been two different concepts. Persistent memory promises to change that: it can be operated at memory speed while being persistent like storage. With the Optane SSD available, we knew that the real game changer, the persistent memory DIMM, was not far away.
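The idea of operating at memory speed while persisting like storage can be illustrated with an ordinary memory-mapped file. This is only a sketch of the programming model: real persistent memory is typically exposed through a DAX-mounted filesystem and libraries such as Intel’s PMDK, and the file name here is hypothetical. The point is that data is read and written with load/store-style access, yet survives the process:

```python
# Sketch of the persistent-memory programming model using a plain
# memory-mapped file as a stand-in for a pmem region (hypothetical path).
import mmap
import os
import struct

PATH = "counter.bin"

# Create and size the backing file once (8 bytes for one 64-bit counter).
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(b"\x00" * 8)

with open(PATH, "r+b") as f:
    with mmap.mmap(f.fileno(), 8) as mem:
        (count,) = struct.unpack_from("<Q", mem, 0)  # "load" from the region
        struct.pack_into("<Q", mem, 0, count + 1)    # "store" back in place
        mem.flush()  # make the update durable on the backing medium

print(count + 1)  # the counter keeps growing across separate runs
```

Each run increments a counter that lives in the mapped region rather than in a database or file-write path, which is the essence of the memory/storage convergence described above.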
We decided, right at that moment, to start MemVerge. With every new hardware substrate, a new software stack will need to be developed to allow the applications to take full advantage of the new hardware. In this case, that solution is Big Memory Software. At MemVerge, our mission is to open the door to Big Memory Computing via the Big Memory Software we develop.