Our Products

MDC400

MDC400 is a 4U form-factor disaggregated hyperconverged system combining hybrid compute, hybrid storage, hybrid networking, and acceleration. It is built on the MDC Super Computer Fabric, a low-latency (140 ns) switch fabric with 3.2 terabits of switching capacity.

MDC1000

MDC1000 is a 10U form-factor disaggregated hyperconverged system combining hybrid compute, hybrid storage, hybrid networking, and acceleration. It is built on the MDC Super Computer Fabric, a low-latency (140 ns) switch fabric with 9.6 terabits of switching capacity.

MDC400



MDC400 is a disaggregated hyperconverged system combining hybrid compute, hybrid storage, hybrid networking, and acceleration. It is built on the MDC Super Computer Fabric, a low-latency (140 ns) switch fabric with 3.2 terabits of switching capacity, and delivers near-InfiniBand performance at the price of an Ethernet switch. The system has universal slots that can accommodate server, storage, GPU, FPGA, and DSP cartridges.

MDC400 Components

Chassis: Supports 12 server cartridges, sharing power, cooling, network uplinks, switches, and data centre acceleration modules across them. Its 4U form factor allows 10 chassis per rack.
Server Cartridges: Used as nodes for web serving, hosted desktops, e-commerce servers, video transcoding, Hadoop, OpenStack, and low-end HPC clusters.
Network Switches and Uplinks: Provide a built-in switching capacity of 3.2 terabits. Eight 10/40 Gbps uplinks interface with enterprise networks, and the MDC400 switch fabric eliminates the need for a top-of-rack (TOR) switch. Nodes are interconnected at 32/64 Gbps with 140 ns latency. The chassis can accommodate two additional switch cards, each supporting up to 1.6 terabits of switching capacity.
Storage Cartridges: Up to 12 storage cartridges, each holding four 2.5" SATA or SAS hard drives or, optionally, two 3.5" enterprise SATA or SAS drives. One NVMe storage card can also be accommodated.
GPU Cartridges: Up to 12 GPU cartridges, which can be shared across multiple server nodes. MDC400 is the first device in the industry to support this feature.
FPGA Cartridges: Up to 12 FPGA cartridges, which can be shared across multiple server nodes. MDC400 is the first device in the industry to support this feature.
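As a quick sanity check of the density and bandwidth figures above, the arithmetic can be sketched in a few lines. This is only an illustration; it assumes a standard 42U rack, which the text does not state:

```python
# Sanity check of the MDC400 chassis and uplink figures quoted above.
# Assumption (not stated in the text): a standard 42U rack.
RACK_UNITS = 42
CHASSIS_HEIGHT_U = 4            # MDC400 is a 4U chassis

chassis_per_rack = RACK_UNITS // CHASSIS_HEIGHT_U
print(chassis_per_rack)         # 10, matching "10 chassis per rack"

# Eight uplinks at up to 40 Gbps each bound the external bandwidth.
max_uplink_gbps = 8 * 40
print(max_uplink_gbps)          # 320 Gbps per chassis to the enterprise network
```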

MDC400 Configuration

  • Server-only configuration – 24 Intel Xeon D processor nodes (D-1527 / D-1537 / D-1548) with 96 TB of storage.
  • Storage-only configuration – up to 256 TB of storage; interfaces with any standard storage protocol (FC, iSCSI, PCIe, SAS, InfiniBand, etc.)
  • Optional TCP/IP session hardware offload support
  • Optional SSL hardware offload support
  • Virtualized GPU support – Virtual GPU can be shared across server nodes
  • Virtualized FPGA support – FPGA engines can be shared across server nodes

Target Applications

  • e-Commerce servers
  • Storage box
  • VDI applications
  • Hadoop Clusters
  • OpenStack (cloud-in-a-box)
  • Mini HPC clusters
  • Network Function Virtualization (NFV)
  • Artificial Intelligence & Machine Learning

MDC1000



MDC1000 is a disaggregated hyperconverged system combining hybrid compute, hybrid storage, hybrid networking, and acceleration. It is built on the MDC Super Computer Fabric, a low-latency (140 ns) switch fabric with 9.6 terabits of switching capacity, and delivers near-InfiniBand performance at the price of an Ethernet switch. The system has universal slots that can accommodate server, storage, GPU, FPGA, and DSP cartridges.

MDC1000 Components

Chassis: Supports 36 server cartridges, sharing power, cooling, network uplinks, switches, and data centre acceleration modules across them. Its 10U form factor allows 4 chassis per rack.
Server Cartridges: Used for web serving, hosted desktops, e-commerce servers, video transcoding, Hadoop, OpenStack, and mini HPC clusters.
Network Switches and Uplinks: Provide a built-in switching capacity of 9.6 terabits. Twenty-four 10/40 Gbps uplinks interface with enterprise networks, and the MDC1000 switch fabric eliminates the need for a top-of-rack (TOR) switch. Nodes are interconnected at 32/64 Gbps. The chassis accommodates six switch cards, each supporting up to 1.6 terabits of switching capacity; they can be configured in 3+3 redundancy mode.
Storage Cartridges: Up to 36 storage cartridges, each holding four 2.5" enterprise SATA or SAS hard drives or, optionally, two 3.5" enterprise SATA or SAS drives. PCIe storage cards can also be accommodated.
GPU Cartridges: Up to 36 GPU cartridges, which can be shared across multiple server nodes. MDC1000 is the first device in the industry to support this feature.
FPGA Cartridges: Up to 36 FPGA cartridges, which can be shared across multiple server nodes. MDC1000 is the first device in the industry to support this feature.
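The MDC1000 figures above can be sanity-checked the same way. This sketch assumes a standard 42U rack (not stated in the text) and expresses switching capacity in Gbps so the arithmetic stays exact:

```python
# Sanity check of the MDC1000 density and switching figures quoted above.
# Assumption (not stated in the text): a standard 42U rack.
RACK_UNITS = 42
CHASSIS_HEIGHT_U = 10               # MDC1000 is a 10U chassis

chassis_per_rack = RACK_UNITS // CHASSIS_HEIGHT_U
print(chassis_per_rack)             # 4, matching "4 chassis per rack"

# Six switch cards at 1.6 Tb/s (1600 Gbps) each are numerically
# consistent with the 9.6-terabit headline capacity.
total_switching_gbps = 6 * 1600
print(total_switching_gbps)         # 9600 Gbps

# Twenty-four uplinks at up to 40 Gbps each bound the external bandwidth.
max_uplink_gbps = 24 * 40
print(max_uplink_gbps)              # 960 Gbps per chassis
```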

MDC1000 Configuration

  • Server-only configuration – 72 Intel Xeon D processor nodes with 288 TB of storage.
  • Storage-only configuration – up to 768 TB of storage; interfaces with any standard storage protocol (FC, iSCSI, PCIe, SAS, InfiniBand, etc.)
  • Integrated L2 & L3 switch support
  • TCP/IP session hardware offload support
  • SSL hardware offload support
  • Virtualized GPU support – Virtual GPU can be shared across server nodes
  • Virtualized FPGA support – FPGA engines can be shared across server nodes

Target Applications

  • e-Commerce servers
  • Storage box
  • VDI applications
  • Hadoop Clusters
  • OpenStack (cloud-in-a-box)
  • Mini HPC clusters