
GCP HPC DevOps Engineer

  • Remote
  • Wrocław, Dolnośląskie, Poland
  • PLN 21,700 - PLN 28,500 per month
  • DevOps

Job description

Hello, let’s meet!

We are Xebia – a place where experts grow. For nearly two decades now, we’ve been developing digital solutions for clients from many industries and places across the globe. Among the brands we’ve worked with are UPS, McLaren, Aviva, Deloitte, and many, many more.

We’re passionate about Cloud-based solutions. So much so that we partner with three of the largest Cloud providers in the business – Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). We even became the first AWS Premier Consulting Partner in Poland.

Formerly we were known as PGS Software. In 2021, we joined Xebia Group – a family of interlinked companies driven by the desire to make a difference in the world of technology.

Xebia stands for innovation, talented team members, and technological excellence. Xebia means worldwide recognition and thought leadership. This regularly provides us with the opportunity to work on global, innovative projects.

Our mission can be captured in one word: Authority. We want to be recognized as the authority in our field of expertise.

What makes us stand out? It’s the little details, like our attitude, dedication to knowledge, and the belief in people’s potential – emphasizing every team member’s development. Obviously, these things are not easy to present on paper – so make sure to visit us and see it with your own eyes!

Now, we’ve talked a lot about ourselves – but we’d love to hear more about you.

You will be:

  • leading the migration of on-premises SLURM-based HPC (High-Performance Computing) clusters to Google Cloud Platform,

  • designing, implementing, and managing scalable and secure HPC infrastructure solutions on GCP,

  • optimizing SLURM configurations and workflows to ensure efficient use of cloud resources,

  • managing and optimizing HPC environments, focusing on workload scheduling, job efficiency, and scaling SLURM clusters,

  • automating cluster deployment, configuration, and maintenance tasks using scripting languages (Python, Bash) and automation tools (Ansible, Terraform) – see the first sketch after this list,

  • integrating HPC software stacks using tools like Spack for dependency management and easy installation of HPC libraries and applications,

  • deploying, managing, and troubleshooting applications using MPI, OpenMP, and other parallel computing frameworks on GCP instances – see the second sketch after this list,

  • collaborating with engineering, support teams, and stakeholders to ensure smooth migration and ongoing operation of HPC workloads,

  • providing expert-level support for performance tuning, job scheduling, and cluster resource optimization,

  • staying current with emerging HPC technologies and GCP services to continually improve HPC cluster performance and cost efficiency.
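
The first sketch, in Python, illustrates the kind of scripting-based cluster automation described above: it submits a SLURM batch job and polls its state until the job leaves the queue. It assumes the standard SLURM CLI (sbatch, squeue) is on the PATH; the batch script name and polling interval are illustrative placeholders, not details of this role.

    #!/usr/bin/env python3
    """Sketch: submit a SLURM batch job and poll it until it leaves the queue."""
    import re
    import subprocess
    import time

    def submit(batch_script: str) -> str:
        """Run sbatch and return the job id parsed from its stdout."""
        out = subprocess.run(
            ["sbatch", batch_script], check=True, capture_output=True, text=True
        ).stdout
        match = re.search(r"Submitted batch job (\d+)", out)  # sbatch's usual reply
        if not match:
            raise RuntimeError(f"unexpected sbatch output: {out!r}")
        return match.group(1)

    def wait_until_done(job_id: str, poll_seconds: int = 30) -> None:
        """Poll squeue until the job is no longer listed (done, failed, or cancelled)."""
        while True:
            state = subprocess.run(
                ["squeue", "-h", "-j", job_id, "-o", "%T"],
                capture_output=True, text=True,
            ).stdout.strip()
            if not state:  # empty output: the job has left the queue
                return
            print(f"job {job_id}: {state}")
            time.sleep(poll_seconds)

    if __name__ == "__main__":
        job_id = submit("train.sbatch")  # hypothetical batch script
        wait_until_done(job_id)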
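
The second sketch illustrates the parallel computing side using mpi4py. It assumes an MPI runtime and mpi4py are available (for example installed via Spack) and is a toy example, not production code.

    # hello_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD   # communicator spanning every launched rank
    rank = comm.Get_rank()  # this process's id within the communicator
    size = comm.Get_size()  # total number of ranks the scheduler launched

    # Each rank contributes its id; rank 0 receives the sum.
    total = comm.reduce(rank, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"{size} ranks checked in; sum of rank ids = {total}")

Under SLURM this would typically be launched with something like srun -n 4 python hello_mpi.py; the exact launcher flags depend on how the cluster’s MPI stack is built.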

Job requirements

Your profile:

  • 5+ years of experience with HPC (High-Performance Computing) environments, including the SLURM workload manager, MPI, and other HPC-related software,

  • extensive hands-on experience managing Linux-based systems, including performance tuning and troubleshooting in an HPC context,

  • proven experience migrating and managing SLURM clusters in cloud environments, preferably GCP,

  • proficiency with automation tools such as Ansible and Terraform for cluster deployment and management,

  • experience with Spack for managing and deploying HPC software stacks,

  • strong scripting skills in Python, Bash, or similar languages for automating cluster operations,

  • in-depth knowledge of GCP services relevant to HPC, such as Compute Engine (GCE), Cloud Storage, and VPC networking – see the sketch after this list,

  • strong problem-solving skills with a focus on optimizing HPC workloads and resource utilization.
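
To make the Compute Engine point concrete, here is a minimal sketch using the google-cloud-compute Python client to provision a single node. The project id, zone, machine type, and boot image are illustrative placeholders; in practice, provisioning like this would usually sit behind the Terraform/Ansible tooling named earlier.

    from google.cloud import compute_v1

    def create_hpc_node(project: str, zone: str, name: str) -> None:
        """Provision one compute-optimized node, e.g. for a SLURM partition."""
        client = compute_v1.InstancesClient()

        boot_disk = compute_v1.AttachedDisk(
            boot=True,
            auto_delete=True,
            initialize_params=compute_v1.AttachedDiskInitializeParams(
                source_image="projects/debian-cloud/global/images/family/debian-12",
                disk_size_gb=50,
            ),
        )
        nic = compute_v1.NetworkInterface(network="global/networks/default")

        instance = compute_v1.Instance(
            name=name,
            machine_type=f"zones/{zone}/machineTypes/c2-standard-60",  # compute-optimized
            disks=[boot_disk],
            network_interfaces=[nic],
        )

        # insert() returns an extended operation; result() blocks until it completes.
        client.insert(project=project, zone=zone, instance_resource=instance).result()

    if __name__ == "__main__":
        create_hpc_node("my-project", "europe-west4-b", "slurm-compute-001")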

You must be based in the EU and hold a valid permit to work in the EU.

Nice to have:

  • Google Cloud Professional DevOps Engineer or similar GCP certifications,

  • familiarity with GCP’s HPC-specific offerings, such as Preemptible VMs, HPC VM images, and other cost-optimization strategies,

  • experience with performance profiling and debugging tools for HPC applications,

  • advanced knowledge of HPC data management strategies, including parallel file systems and data transfer tools,

  • understanding of container technologies (e.g., Singularity, Docker) specifically within HPC contexts,

  • experience with Spark or other big data tools in an HPC environment.

Recruitment Process

CV review – HR call – Technical interview – Client interview – Decision
