Hitachi Vantara claims Hitachi iQ is the most complete AI stack
Hitachi Vantara says its approach to storage and AI offers the most comprehensive solutions, based on its industrial heritage and on RAG-like functionality it claims others don’t have
Among storage players’ claims about how well-suited their products are to AI workloads, Hitachi Vantara has a unique backstory to support its arguments. Namely, the Japanese array maker is part of a giant manufacturing conglomerate that makes everything from nuclear power stations and high-speed trains to air conditioners and household appliances, and handles its data using Hitachi Vantara products.
Also key to the narrative is that the company offers a converged infrastructure portfolio – Hitachi iQ – that combines Nvidia GPUs and Nvidia AI Enterprise software with Hitachi Vantara’s VSP One storage arrays, Hammerspace file storage and data orchestration, Hitachi Vantara server products, plus Cisco networking equipment.
“Our group uses Nvidia’s Omniverse digital twin ecosystem, which provides training data for AI that allows for the development and extension of robotic capacity in manufacturing,” said Jason Hardy, CTO for AI at Hitachi Vantara.
Converged infrastructure for AI
Meanwhile, Hitachi Vantara’s AI converged product family, Hitachi iQ, is a complete converged infrastructure that scales from one to 16 Supermicro servers, each with eight Nvidia GPUs for AI processing in Nvidia’s HGX configuration.
Then there are multiple Hitachi HA G3 servers that share the object storage contents of the VSP One array nodes. Some of these servers run the Nvidia AI Enterprise software layer in Kubernetes containers. Others run Hammerspace storage software, which allows parallelised access between GPUs and storage.
Finally, Cisco Nexus switches connect the whole thing. As for the role of the VSP One array – the flagship of the Hitachi Vantara array family – it is connected to the Hammerspace servers and provides object storage for the bulk of the data, which those servers then serve in file mode.
iQ Time Machine: VSP One gives LLMs a memory
“Basing the whole thing on our VSP array offers some benefits,” said Hardy. “Among them is our new Hitachi iQ Time Machine functionality, which allows previous versions of documents and data that have since been updated to be submitted to an LLM.”
Hardy’s point here is that in other systems such documents will have been updated, so past versions are lost to LLMs that interrogate the dataset. The RAG-like function rests on the retention of historic data in object storage on the VSP One array, and iQ Studio – the chatbot Hitachi Vantara supplies with the Hitachi iQ infrastructure – exposes this via a timeline in the interface.
For example, if a member of the finance team wants to ask the AI about an event, they can hover over the date and see details from a document ingested at the time. In this way, customers can access data from different time periods via an LLM.
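Hitachi has not published the Time Machine API, but the idea described here – retaining every ingested version of a document in object storage and answering queries "as of" a chosen date – can be sketched in a few lines of Python. Everything below (the `VersionedStore` class, document IDs and sample contents) is hypothetical illustration, not Hitachi Vantara code.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DocVersion:
    ingested: date  # when this version was ingested
    text: str       # document content at that time

class VersionedStore:
    """Hypothetical store that keeps every ingested version of each
    document, like an object store with versioning enabled."""

    def __init__(self):
        self.versions = {}  # doc_id -> list of DocVersion, oldest first

    def ingest(self, doc_id, ingested, text):
        self.versions.setdefault(doc_id, []).append(DocVersion(ingested, text))
        self.versions[doc_id].sort(key=lambda v: v.ingested)

    def as_of(self, doc_id, when):
        """Return the newest version ingested on or before `when`,
        or None if the document did not yet exist."""
        older = [v for v in self.versions.get(doc_id, []) if v.ingested <= when]
        return older[-1] if older else None

store = VersionedStore()
store.ingest("q3-forecast", date(2024, 7, 1), "Q3 revenue forecast: $10M")
store.ingest("q3-forecast", date(2024, 10, 1), "Q3 revenue actual: $8M")

# A RAG prompt built "as of" mid-August retrieves the July forecast,
# not the October revision that later overwrote it.
aug_view = store.as_of("q3-forecast", date(2024, 8, 15))
print(aug_view.text)  # prints "Q3 revenue forecast: $10M"
```

A conventional RAG pipeline would return only the latest version; the point of the timeline interface is that the date hovered over becomes the `when` argument of the lookup.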
Data storage is a critical component of AI projects because it must deal successfully with three constraints: the array must communicate very rapidly with the GPUs; for RAG, data must be in a format compatible with the Nvidia software modules used to build AI applications; and, finally, storage must help enterprises prepare and test the data they submit to AI.
With Hitachi iQ, which goes way beyond just storage functionality, Hitachi aims to tackle these three challenges at the same time.
Read more about AI and storage
- Storage technology explained: AI and data storage. In this guide, we examine the data storage needs of artificial intelligence, the demands it places on data storage, the suitability of cloud and object storage for AI, and key AI storage products.
- Interview: Nvidia on AI workloads and their impacts on data storage. We talk to Charlie Boyle of Nvidia about data challenges in artificial intelligence, key practical tips for AI projects, and demands on storage of training, inferencing, RAG and checkpointing.
Originally published at ECT News