
Six Smarter Scheduling Techniques for Optimizing EDA Productivity

Semiconductor firms rely on software tools for all phases of the chip design process, from system-level design to logic simulation and physical layout. Given the enormous investment in tools, design talent, and infrastructure, even minor improvements in server farm efficiency can significantly impact the bottom line. As a result, verification engineers and IT managers are constantly looking for new sources of competitive advantage.

Workload management plays a crucial role in helping design teams share limited resources, boost simulation throughput, and maximize productivity. In this paper, we discuss six valuable techniques to help improve design center productivity.

All Related White Papers

EDA in the Cloud: Containerization, Migration, and System Telemetry

Cloud computing is becoming an increasingly good choice for EDA, but a data-led transformation is needed to take advantage of the flexibility that public compute resources offer and to optimize both performance and cost. Cloud allows an organization to realize a clear return on investment that supports innovation and rapid prototyping, provided those advantages are fully exploited. A well-planned hybrid cloud strategy should reduce costs both on-premises and in the cloud. The ability to dynamically tune compute and storage resources based on business and application needs is only available in the cloud, and only if the right telemetry and data pipelines are in place to inform infrastructure decisions. In this paper, we discuss how to put that plan in place and ensure that key business objectives are met as workflows are adapted and migrated to a new compute environment.
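To make the telemetry point concrete, the sketch below shows one way queue metrics might drive a cloud sizing decision. It is only an illustration under assumed metrics and thresholds; the QueueTelemetry fields, the 30-minute wait threshold, and the recommend_cloud_nodes helper are hypothetical and are not drawn from the paper or any Altair product.

    # Minimal sketch of a telemetry-driven sizing rule for a hybrid cloud queue.
    # Metrics, thresholds, and the scaling rule are hypothetical placeholders
    # used only to illustrate letting data drive infrastructure decisions.

    from dataclasses import dataclass

    @dataclass
    class QueueTelemetry:
        pending_jobs: int        # jobs waiting in the queue
        avg_wait_minutes: float  # average queue wait over the sample window
        node_utilization: float  # fraction of provisioned cores in use (0.0-1.0)

    def recommend_cloud_nodes(t: QueueTelemetry, current_nodes: int,
                              cores_per_node: int = 32) -> int:
        """Return a recommended cloud node count based on simple thresholds."""
        if t.avg_wait_minutes > 30 and t.node_utilization > 0.9:
            # Demand exceeds capacity: burst out in rough proportion to the backlog.
            extra = max(1, t.pending_jobs // cores_per_node)
            return current_nodes + extra
        if t.node_utilization < 0.5 and current_nodes > 0:
            # Paying for idle capacity: shrink the cloud footprint.
            return current_nodes - 1
        return current_nodes

    # Example: a long queue on a saturated pool suggests adding burst capacity.
    print(recommend_cloud_nodes(QueueTelemetry(256, 45.0, 0.95), current_nodes=4))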

Accelerating Cloud-based Genomics Pipelines Through I/O Profiling for Analysis of More Than 3,000 Whole Genome Pairs on AWS

This paper presents an overview of the work by the Wellcome Sanger Institute to make one of their cancer pipelines portable and to tune it for cloud deployment using the Altair Breeze™ and Altair Mistral™ I/O profiling tools. With the insight from the tools, we were able to tune the cloud configuration to boost speed by 20% while reducing cost by 10%.

Profiling OpenFOAM With Altair I/O Analytics Tools on Oracle Cloud Infrastructure

The software tool OpenFOAM is used extensively in high-performance computing (HPC) to create simulations but is known to have challenging I/O patterns. To uncover the reasons why, we profiled OpenFOAM with the Altair Breeze™ and Altair Mistral™ I/O profiling tools on the Oracle bare metal cloud. The results detailed in this white paper provide a clear picture of why OpenFOAM performs slowly at times and highlight key areas for improvement.

Short-running Jobs Can Help Optimize Your Resource Utilization

Semiconductor companies typically run jobs by queuing them up, then using a job scheduler to dispatch them onto available cores in server farms while pulling EDA tool licenses from a license server. Organizations face two primary goals that are somewhat at odds with each other: first, maximizing the utilization of server farms and software licenses, and second, running jobs with minimal latency so users aren’t delayed. The easiest way to satisfy both high utilization and low latency is to maximize the number of short-duration jobs. Just as it is easier to fill a bucket with sand than with large rocks, short jobs give the scheduler greater flexibility in which jobs to run and when. Short jobs do not block or occupy a resource for long periods and are therefore unlikely to impede the progress of higher-priority jobs arriving in the queue. Altair Accelerator™ is an agile, fully featured scheduler optimized for today’s EDA workloads. The most important difference between Accelerator and other popular schedulers is its event-driven architecture, which allows it to schedule a new job immediately when compute resources and software licenses become available.
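To illustrate the sand-versus-rocks idea, here is a minimal backfill-style sketch showing how short jobs can soak up idle cores without delaying a reserved large job. The Job fields, durations, and policy are simplified assumptions for illustration; this is not Accelerator's scheduling algorithm.

    # Simplified sketch of the "sand vs. rocks" effect: short jobs backfill idle
    # cores without delaying a reserved large job. Job durations, core counts,
    # and the policy are illustrative only.

    from collections import namedtuple

    Job = namedtuple("Job", "name cores minutes")

    def backfill(queue, free_cores, minutes_until_big_job):
        """Dispatch queued jobs that fit in the idle cores and finish before the
        reserved start time of the next large job."""
        started = []
        for job in list(queue):
            if job.cores <= free_cores and job.minutes <= minutes_until_big_job:
                free_cores -= job.cores
                queue.remove(job)
                started.append(job.name)
        return started, free_cores

    queue = [Job("sim_a", 2, 10), Job("sim_b", 4, 15), Job("regress_big", 64, 240)]
    # 8 cores sit idle for 30 minutes until a reserved high-priority job starts:
    print(backfill(queue, free_cores=8, minutes_until_big_job=30))
    # -> (['sim_a', 'sim_b'], 2): the short jobs run now; regress_big stays queued.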

Planning Cloud Strategy for HPC and High-throughput Applications

When adopting flexible compute, half the challenge is in selecting a storage solution that suits the application. Whether you are tuning for performance, throughput, cost, or some combination of the three, it’s important to balance the compute nodes with the right access to data so that you are not paying for underutilized resources.
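As a rough illustration of that balance, the back-of-the-envelope sketch below estimates how effective node utilization falls as more compute is added against a fixed amount of shared storage bandwidth. All inputs (data volume per job, compute time, storage bandwidth) are made-up assumptions, not measurements from the paper.

    # Back-of-the-envelope sketch of balancing compute against storage bandwidth.
    # All figures are made-up inputs to illustrate the trade-off, not measurements.

    def effective_utilization(nodes, read_gb_per_job, compute_minutes_per_job,
                              storage_gbps):
        """Fraction of time nodes spend computing rather than waiting on I/O,
        assuming every node streams its input through shared storage."""
        io_minutes = (nodes * read_gb_per_job * 8 / storage_gbps) / 60
        return compute_minutes_per_job / (compute_minutes_per_job + io_minutes)

    # Doubling the node count without faster storage lowers per-node utilization:
    for n in (32, 64, 128):
        u = effective_utilization(n, read_gb_per_job=50,
                                  compute_minutes_per_job=40, storage_gbps=100)
        print(f"{n:4d} nodes -> {u:.0%} effective utilization")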

Meeting Job Scheduling Challenges for Organizations of All Sizes

Every semiconductor design group uses some sort of job scheduler, whether it is selected by a corporate IT department or by the group itself. At a high level, the function of a job scheduler is simple: be aware of what is in the queue and what hardware and software license resources are available, and make good decisions about which jobs to schedule when. In practice there are several subtle issues that are just as important, including job mix, prioritization, and ease of support. While every company is unique, the issues surrounding job scheduling are often related to company size or, more accurately, the size of the compute farm and design teams. Altair Accelerator™ is a high-throughput, enterprise-grade job scheduler designed to meet the complex demands of semiconductor design, EDA, and high-performance computing (HPC). It’s a highly adaptable solution capable of managing compute infrastructures from small, dedicated server farms to complex, distributed environments.
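The core decision described above can be sketched in a few lines: each scheduling cycle, pick the highest-priority queued job whose core and license requirements can both be satisfied right now. The field names, the pick_next_job helper, and the policy are illustrative assumptions only, not Accelerator's internals.

    # Minimal sketch of a scheduler's per-cycle decision: choose the highest-
    # priority queued job for which both cores and licenses are available.

    def pick_next_job(queue, free_cores, free_licenses):
        """queue: list of job dicts sorted by descending priority;
        free_licenses: dict mapping license feature -> available count."""
        for job in queue:
            has_cores = job["cores"] <= free_cores
            has_licenses = all(free_licenses.get(feat, 0) >= n
                               for feat, n in job["licenses"].items())
            if has_cores and has_licenses:
                return job
        return None  # nothing runnable; wait for resources to free up

    queue = [
        {"name": "layout_big", "cores": 16, "licenses": {"place_route": 1}},
        {"name": "sim_small",  "cores": 2,  "licenses": {"simulator": 1}},
    ]
    print(pick_next_job(queue, free_cores=8, free_licenses={"simulator": 3}))
    # -> the small simulation runs; the layout job waits for cores and a license.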

Top 10 Issues Facing License Managers

An organization’s software asset management (SAM) team wears many hats. Team members are responsible for setting up and maintaining license servers, installing licenses, ensuring compliance, assisting users, monitoring license availability, and generating usage reports. At many companies they also perform requirements analysis, remix software pools, manage contracts, and negotiate new contracts with vendors. This paper explores the top 10 issues license administrators face and how each can be tackled using Altair Monitor™.

Managing TCO in HPC Hybrid Cloud Environments

Enabled by improvements in security, new instance types, and fast interconnects, high-performance computing (HPC) users are increasingly shifting workloads to the cloud. With cloud usage increasing, however, managing and containing costs is a growing concern. As organizations become more reliant on cloud, they are also concerned with staying portable and flexible, and avoiding lock-in to a single cloud ecosystem or provider. In this paper, we make a case for HPC hybrid cloud and explain how operators can manage total cost of ownership (TCO) more effectively. We present a simple TCO model that can help users estimate the cost of hybrid cloud deployments and describe various Altair solutions that can help organizations quickly and cost-effectively implement private and hybrid multi-cloud HPC environments. Using our TCO model, we illustrate how Altair cloud automation and spend management solutions such as Altair® Control™ and Altair® NavOps® can boost productivity and reduce cloud-related expenses while delivering a compelling return on investment (ROI).
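As a flavor of what such an estimate looks like, the sketch below splits annual core-hour demand between an amortized on-premises pool and per-hour cloud burst capacity. It is a generic illustration with placeholder prices and utilization figures, not the TCO model presented in the paper.

    # Sketch of a simple hybrid cloud TCO estimate. Hardware and cloud rates,
    # utilization, and demand are placeholders, not figures from the paper.

    def hybrid_tco(core_hours_needed,
                   onprem_cores, onprem_cost_per_core_hour,
                   cloud_cost_per_core_hour, onprem_utilization=0.8):
        """Serve demand from the on-premises pool first (paid for whether busy
        or not, as amortized hardware and operations), then burst the remainder
        to the cloud at a per-core-hour rate; return total annual cost."""
        onprem_capacity = onprem_cores * 8760 * onprem_utilization
        onprem_hours = min(core_hours_needed, onprem_capacity)
        burst_hours = core_hours_needed - onprem_hours
        return (onprem_capacity * onprem_cost_per_core_hour
                + burst_hours * cloud_cost_per_core_hour)

    # Example: 12M core-hours/year against a 1,000-core on-premises pool.
    print(f"${hybrid_tco(12_000_000, 1000, 0.03, 0.05):,.0f} per year")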

The Importance of Software License Server Monitoring

In electronic design automation (EDA) and other areas of computer-aided design (CAD) that rely on expensive software tools, centralization of license servers has become the norm. High software asset utilization is achieved by sharing a common pool of licenses with a large group of engineers, regardless of their locations. While this model provides many advantages over maintaining license servers at multiple locations, it also carries risks. License availability becomes even more important once all the software licenses serving multiple sites are centralized. A license server failure of any kind has the potential to impact the entire company, because license servers provide licenses to mission-critical applications, and downtime may cost millions in schedule slippage. Tools that monitor the server, CPU load, file system, license daemons, and individual license features become essential. Altair Monitor™ is a robust, enterprise-grade license monitoring solution that delivers a complete set of monitoring functions.
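A minimal example of the kind of check such tools automate is a simple availability probe against the license daemon's TCP port, sketched below. The host, port, and alerting hook are placeholders, and this is not how Altair Monitor is implemented.

    # Minimal sketch of a license server availability check: verify the license
    # daemon's TCP port answers and alert when it does not. Host, port, and the
    # alert action are placeholders for illustration only.

    import socket
    import time

    def port_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def watch(host: str, port: int, interval_s: int = 60):
        while True:
            if not port_is_open(host, port):
                # In practice this would page the SAM team or open a ticket.
                print(f"ALERT: license daemon on {host}:{port} is unreachable")
            time.sleep(interval_s)

    # Example (placeholder host and port for a FLEXlm-style daemon):
    # watch("licserver01.example.com", 27000)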

A Cost-benefit Look at Open-source vs. Commercial HPC Workload Managers

High-performance computing (HPC) fuels scientific discovery and innovation across multiple industries. The combination of large datasets, advanced simulation techniques, and machine learning helps organizations generate insights that would not be possible without modern HPC infrastructure. Given HPC’s outsized role in driving business results, selecting the right management software is critical. This is especially true in commercial organizations where time is money. In this paper, we discuss the pros and cons of open-source software in HPC and make the case for commercial workload management. While open-source workload managers are fine in some situations, they can present disadvantages in production environments.

I/O Profiling to Improve DL_POLY for Molecular Dynamics Simulation

We worked with the team at the UK's STFC Hartree Centre to assess and improve the performance of a number of commonly used HPC applications. The team used Altair Breeze™ to profile and improve DL_POLY, a general-purpose classical molecular dynamics (MD) simulation package developed at STFC's Daresbury Laboratory. This paper presents the initial findings and the performance improvements that have been submitted to the DL_POLY development repository. By examining the I/O patterns with Breeze, the Hartree Centre was able to reduce simulation run time by at least 8% with a relatively small investment of time.

I/O Profiling to Improve Storage Performance at Diamond Light Source on an Altair Grid Engine Cluster

Diamond Light Source is the UK’s national synchrotron, or particle accelerator. It works like a giant microscope, harnessing the power of electrons to produce bright light that scientists can use to study anything from fossils to jet engines to viruses and vaccines. Because Diamond Light Source handles a wider variety of workloads than many facilities, the performance of both in-house and third-party applications is vital. The team, which employs Altair® Grid Engine® for workload management, used Altair Mistral™ to identify straightforward changes that improve performance and cut runtime.
