Oracle announces AI-powered programming assistant beta

Oracle has announced the beta release of Oracle Code Assist, an AI-powered programming assistant optimized for Java and designed to enhance application development on Oracle Cloud Infrastructure (OCI). The beta version introduces new capabilities aimed at accelerating the development of new Java applications and at updating legacy Java applications to improve their performance and security. In addition to Java optimizations, Oracle plans to offer support for NetSuite SuiteScript within the next year, assisting developers in building NetSuite extensions and customizations using NetSuite’s native scripting language.

The programming assistant will be deployable as a plugin for JetBrains’ IntelliJ IDEA integrated development environment (IDE), offering suggestions for various modern programming languages. Oracle also unveiled new features for its OCI Kubernetes Engine (OKE) to simplify the deployment and management of AI workloads and cloud-native applications on OCI. These new features include support for Ubuntu Linux images, enhanced container security, logging analytics for OKE workloads, and cluster node health checks.
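
One of those additions, cluster node health checks, maps onto the standard node conditions that any Kubernetes cluster already exposes. As a generic illustration (a minimal sketch using the official Kubernetes Python client, not an OKE-specific API), the following reports each worker node’s Ready condition and kubelet version:

```python
# Generic sketch: inspect worker-node health on any Kubernetes cluster,
# including an OKE cluster, via the official Kubernetes Python client.
# Illustrative only; this is not Oracle's OKE health-check feature.
from kubernetes import client, config

def report_node_health() -> None:
    config.load_kube_config()  # uses the current kubeconfig context
    v1 = client.CoreV1Api()
    for node in v1.list_node().items:
        name = node.metadata.name
        kubelet = node.status.node_info.kubelet_version
        # The "Ready" condition reports whether the kubelet is healthy
        # and able to accept pods.
        ready = next(
            (c.status for c in node.status.conditions if c.type == "Ready"),
            "Unknown",
        )
        print(f"{name}: Ready={ready}, kubelet={kubelet}")

if __name__ == "__main__":
    report_node_health()
```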

The enhancements aim to speed up the identification and remediation of security problems while keeping worker nodes in cluster environments healthy and up to date. These advancements are part of Oracle’s broader strategy to support developers and businesses in modernizing their applications and infrastructure through powerful, AI-driven tools and robust cloud solutions. Enterprises are seeking increasingly powerful computing capabilities to support their AI workloads and accelerate data processing.

The efficiency gained from advanced AI infrastructure can lead to better returns on investments in AI training, improved user experiences, and more sophisticated AI inference. At the Oracle CloudWorld conference today, Oracle announced the launch of the first zettascale OCI Supercluster. This new offering utilizes the latest-generation NVIDIA GPUs to help enterprises train and deploy next-generation AI models.

OCI Superclusters provide flexibility, allowing customers to choose from a variety of NVIDIA GPUs and deploy them on premises, in public clouds, or in sovereign clouds. Set for availability in the first half of next year, the Blackwell-based systems can scale up to 131,072 Blackwell GPUs with RoCEv2 networking, delivering an astounding 2.4 zettaflops of peak AI compute power. Oracle also previewed liquid-cooled bare-metal instances designed to support large-scale AI training and real-time inference of trillion-parameter models.

OCI will offer bare-metal instances with NVLink and NVLink Switch, scaling to 65,536 H200 GPUs, providing customers with high-performance infrastructure for real-time inference at scale and accelerated training workloads. Midrange AI workloads will benefit from the general availability of GPU-accelerated instances, including Oracle’s edge offerings, which provide scalable AI capabilities in remote and disconnected locations. Several companies are already leveraging NVIDIA-powered OCI Superclusters to drive AI innovation.

For instance, AI startup Reka is using these clusters to develop advanced multimodal AI models for enterprise agents. Dani Yogatama, cofounder and CEO of Reka, highlighted the power and scalability provided by NVIDIA GPU-accelerated infrastructure. NVIDIA has received the 2024 Oracle Technology Solution Partner Award in Innovation for its comprehensive approach to advancing AI technologies.

Oracle Autonomous Database is incorporating NVIDIA GPU support to accelerate data processing workloads.

Demonstrations at Oracle CloudWorld showcased how NVIDIA GPUs can enhance various components of generative AI pipelines.

Examples included accelerating bulk vector embeddings, reducing the time needed to build efficient vector search indexes, and boosting generative AI performance for text generation and translation. NVIDIA and Oracle are working together to deliver AI infrastructure that meets data residency requirements for governments and enterprises globally. Brazil-based startup WideLabs, for instance, has trained and deployed Amazônia IA, one of the first large language models for Brazilian Portuguese, using OCI’s Brazilian data centers to ensure data sovereignty.
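
The first two of those demonstrated stages, bulk embedding and vector search, are straightforward to sketch. The snippet below is a rough illustration under stated assumptions (random vectors stand in for a real embedding model, and brute-force search stands in for a production index), not Oracle’s or NVIDIA’s implementation; it simply shows why these stages map well onto GPUs:

```python
# Minimal sketch of two generative-AI pipeline stages mentioned above:
# bulk vector embedding and vector similarity search. Random vectors stand
# in for a real embedding model; this is not Oracle's or NVIDIA's code.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in for bulk embeddings of a document corpus.
num_docs, dim = 100_000, 1024
doc_embeddings = torch.randn(num_docs, dim, device=device)
doc_embeddings = torch.nn.functional.normalize(doc_embeddings, dim=1)

def search(query_embeddings: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Return indices of the k most similar documents for each query."""
    queries = torch.nn.functional.normalize(query_embeddings.to(device), dim=1)
    # On normalized vectors, cosine similarity reduces to a matrix multiply,
    # exactly the kind of operation GPUs accelerate.
    scores = queries @ doc_embeddings.T  # (num_queries, num_docs)
    return scores.topk(k, dim=1).indices

queries = torch.randn(8, dim)  # stand-in query embeddings
print(search(queries))
```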

Nomura Research Institute in Japan is enhancing its financial AI platform with NVIDIA GPUs, adhering to strict financial regulations and data sovereignty requirements. Communication and collaboration company Zoom is utilizing NVIDIA GPUs in OCI’s Saudi Arabian data centers to comply with local data regulations. Additionally, geospatial modeling company RSS-Hydro is using NVIDIA technology to simulate flood impacts in Japan’s Kumamoto region, illustrating the broad applications of these advanced infrastructures.

Enterprises can accelerate task automation on OCI by deploying NVIDIA software and leveraging OCI’s scalable cloud solutions. These solutions enable enterprises to quickly adopt generative AI and build efficient workflows for complex tasks like code generation and route optimization. Oracle has also announced new types of OCI clusters for AI training.

These clusters will feature Nvidia’s upcoming hardware and will deliver up to 2.4 ZettaFLOPS of AI performance. Oracle’s new supercomputer clusters can be configured with Nvidia’s Hopper or Blackwell GPUs and various networking gear, including ultra-low latency RoCEv2 with ConnectX-7 and ConnectX-8 SuperNICs or Nvidia’s Quantum-2 InfiniBand-based networks. The clusters will include a range of configurations:
– OCI Superclusters equipped with Hopper-based GPUs, supporting up to 16,384 GPUs and offering a peak performance of 65 FP8/INT8 exaFLOPS with a network throughput of 13 petabits per second (Pb/s).
– Hopper-powered OCI Superclusters, launching later this year, scaling up to 65,536 GPUs and delivering up to 260 FP8/INT8 exaFLOPS with 52 Pb/s of network throughput.
– Blackwell-based OCI Superclusters scaling up to 131,072 GPUs, offering peak performance of up to 2.4 FP4/INT8 zettaFLOPS.

These upcoming supercomputing clusters from OCI will far surpass the capabilities of current leading systems.
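
As a quick consistency check on the quoted figures (arithmetic on the numbers above, not data published by Oracle), dividing each cluster’s peak throughput by its GPU count yields per-GPU figures roughly in line with the published sparse FP8 peak for Hopper and the sparse FP4 peak for Blackwell:

```python
# Back-of-the-envelope check (not from Oracle): per-GPU peak implied by the
# quoted cluster totals, expressed in petaFLOPS.
configs = {
    "Hopper, 16,384 GPUs": (65e18, 16_384),        # 65 FP8/INT8 exaFLOPS
    "Hopper, 65,536 GPUs": (260e18, 65_536),       # 260 FP8/INT8 exaFLOPS
    "Blackwell, 131,072 GPUs": (2.4e21, 131_072),  # 2.4 FP4/INT8 zettaFLOPS
}

for name, (total_flops, gpus) in configs.items():
    per_gpu_pflops = total_flops / gpus / 1e15
    print(f"{name}: ~{per_gpu_pflops:.1f} PFLOPS per GPU")

# Prints roughly 4.0, 4.0 and 18.3 PFLOPS per GPU, consistent with published
# sparse FP8 (Hopper) and sparse FP4 (Blackwell) peak figures.
```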

The top-tier B200-based OCI Superclusters will feature over three times more GPUs than the Frontier supercomputer, which utilizes 37,888 AMD Instinct MI250X GPUs, and six times more than other hyperscalers, according to Oracle. “We have one of the broadest AI infrastructure offerings and are supporting customers that are running some of the most demanding AI workloads in the cloud,” said Mahesh Thiagarajan, executive vice president, Oracle Cloud Infrastructure. “With Oracle’s distributed cloud, customers have the flexibility to deploy cloud and AI services wherever they choose while preserving the highest levels of data and AI sovereignty.”

Several companies are already benefiting from this advanced infrastructure.

WideLabs and Zoom are leveraging OCI’s high-performance AI infrastructure to accelerate their AI development while maintaining sovereignty controls. “As businesses, researchers and nations race to innovate using AI, access to powerful computing clusters and AI software is critical,” said Ian Buck, vice president of Hyperscale and High-Performance Computing at Nvidia. “Nvidia’s full-stack AI computing platform on Oracle’s broadly distributed cloud will deliver AI compute capabilities at unprecedented scale to advance AI efforts globally and help organizations everywhere accelerate research, development and deployment.”

The upcoming OCI Superclusters will use Nvidia’s GB200 NVL72 liquid-cooled cabinets with 72 GPUs that communicate with each other at an aggregate bandwidth of 129.6 terabytes per second (TB/s) in a single NVLink domain.
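
The same arithmetic applied to the NVLink figure (again not a number from the announcement itself) works out to 1.8 TB/s of NVLink bandwidth per GPU, which matches the per-GPU bandwidth NVIDIA quotes for fifth-generation NVLink:

```python
# Arithmetic on the quoted figure: per-GPU share of the 129.6 TB/s aggregate
# NVLink bandwidth in a 72-GPU NVL72 domain.
aggregate_tb_per_s = 129.6
gpus_per_domain = 72
print(f"{aggregate_tb_per_s / gpus_per_domain:.1f} TB/s per GPU")  # -> 1.8 TB/s
```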

Oracle stated that Nvidia’s Blackwell GPUs will be available in the first half of 2025, although it is still unclear when OCI will offer fully loaded Blackwell-powered clusters.