Job Details

University of Utah
  • Position Number: 6979896
  • Location: Salt Lake City, UT
  • Position Type: Computing - Database Administration


Data Engineer III

Job Summary

Join the Utah Data Coordinating Center (DCC) as a Data Engineer, where your work will directly enable innovative clinical research at the University of Utah and across national partners. You'll lead the design of scalable data systems, define and enforce architecture standards, and work alongside software developers, data analysts, and research teams to ensure our platforms evolve with the needs of scientific discovery. This is a growth-focused role ideal for someone who thrives in a collaborative, mission-driven environment. The Utah DCC supports large-scale health data infrastructure that underpins national emergency response, clinical registries, and federal research initiatives.

Establish project teams and provide overall direction for technical projects from initiation through delivery. Manage project requirements, estimates, and budgets. Formulate project scope and delivery strategies and establish milestones and schedules. Maintain and report project status and monitor the progress of all team members. Gather required data from end users to evaluate objectives, goals, and scope and to create technical specifications. Serve as a liaison between technical and non-technical departments to ensure that all targets and requirements are met. Keep leadership informed of key issues that may affect project completion, budget, or other results.

As a Data Engineer, your responsibilities will include:

1. Design, develop, and maintain database architecture following industry best practices
Design and implement scalable, secure, and high-performing database solutions aligned with industry standards and architectural best practices. This includes data modeling (conceptual, logical, and physical), schema design, indexing strategies, performance tuning, backup and recovery planning, and ensuring data integrity and consistency. Establish governance standards, naming conventions, version control processes, and documentation to support maintainability, reliability, and long-term scalability across environments.
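As an illustrative sketch only (not part of the posting), the schema design and indexing work described above might look like the following, using Python's built-in sqlite3 module; the table names and columns are hypothetical:

```python
import sqlite3

# Hypothetical example: a small clinical-registry schema with an index
# chosen to support a common lookup pattern (all observations for a patient).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (
    patient_id  INTEGER PRIMARY KEY,
    enrolled_on TEXT NOT NULL           -- ISO-8601 date
);
CREATE TABLE observation (
    observation_id INTEGER PRIMARY KEY,
    patient_id     INTEGER NOT NULL REFERENCES patient(patient_id),
    code           TEXT NOT NULL,       -- e.g. a lab/vital code
    value          REAL,
    recorded_at    TEXT NOT NULL
);
-- Indexing strategy: cover the frequent "observations by patient" query.
CREATE INDEX idx_observation_patient ON observation (patient_id, recorded_at);
""")
conn.execute("INSERT INTO patient VALUES (1, '2025-01-15')")
conn.execute("INSERT INTO observation VALUES (10, 1, '8867-4', 72.0, '2025-02-01T09:00')")
rows = conn.execute(
    "SELECT code, value FROM observation WHERE patient_id = ? ORDER BY recorded_at",
    (1,),
).fetchall()
print(rows)  # [('8867-4', 72.0)]
```

The composite index mirrors the query's filter and sort columns, which is the kind of indexing-strategy decision this responsibility refers to.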

2. Build, optimize, and maintain scalable data pipelines
Design, develop, and orchestrate reliable, high-performance data pipelines from initial data ingestion through final delivery. This includes data pipeline development, orchestration, transformation logic, and supporting data models optimized for analytics and operational workloads.
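To make the ingestion-to-delivery flow concrete, here is a minimal, purely illustrative pipeline sketch using only the Python standard library; the CSV data and function names are invented for the example:

```python
import csv
import io

# Hypothetical raw feed: site enrollment counts, one row incomplete.
RAW = """site,enrolled
UT-01,12
UT-02,
UT-03,7
"""

def extract(text):
    """Ingest: parse raw CSV into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: enforce types and drop rows that fail validation."""
    out = []
    for row in rows:
        if not row["enrolled"]:          # data-quality rule: value required
            continue
        out.append({"site": row["site"], "enrolled": int(row["enrolled"])})
    return out

def load(records):
    """Deliver: in production this would write to a warehouse;
    here we just compute a summary for the analytics consumer."""
    return sum(r["enrolled"] for r in records)

records = transform(extract(RAW))
total = load(records)
print(total)  # 19
```

Real pipelines add orchestration, retries, and monitoring around these same extract/transform/load stages.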

3. Develop and optimize data processing and automation code
Design, implement, and maintain robust code for data extraction, transformation, integration, and analysis using appropriate languages and frameworks. Optimize performance, ensure data accuracy, and uphold high standards for code quality, reliability, and maintainability in alignment with software and data engineering best practices.

4. Drive continuous improvement and innovation in cloud data technologies (AWS-focused)
Stay current with emerging data engineering technologies, industry trends, and evolving AWS services to continuously enhance platform capabilities and architectural standards. Evaluate and adopt appropriate AWS services (e.g., S3, Glue, Lambda, Redshift, RDS, EMR, Step Functions, Lake Formation) to improve scalability, performance, cost efficiency, and reliability. Balance innovation with operational excellence by maintaining and optimizing existing services, enforcing best practices, and ensuring stable, secure, and high-performing production environments.

5. Collaborate with business partners to develop scalable data solutions
Partner with internal teams and external stakeholders to design and deliver innovative data solutions that support evolving business needs. This includes developing and exposing data through APIs, building and maintaining multi-dimensional cubes and semantic models, enabling secure data sharing, and creating reusable data services. Translate business requirements into scalable technical solutions that align with enterprise architecture standards, governance policies, and performance expectations.

6. Implement and maintain CI/CD and version control best practices
Design, implement, and support robust CI/CD pipelines to automate build, test, deployment, and release processes for data pipelines, database objects, and cloud infrastructure. Enforce effective version control practices using Git-based workflows, including branching strategies, pull requests, code reviews, and release management. Promote automated testing, infrastructure as code (IaC), and deployment standards to ensure consistency, traceability, reliability, and rapid, low-risk delivery across environments.

7. Develop and support data pipelines for business intelligence and analytics
Design, build, and maintain reliable, scalable data pipelines that deliver curated, analytics-ready datasets to support Business Intelligence and reporting needs.
Implement transformation logic, data validation checks, and orchestration workflows to ensure accuracy, consistency, and timely data availability. Proactively monitor pipeline performance, troubleshoot data issues, and optimize data flows to support dashboards, KPI tracking, ad hoc analysis, and enterprise reporting requirements.
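The "data validation checks" mentioned above could be sketched as follows; this is a simplified illustration with hypothetical field names, not the DCC's actual rules:

```python
# Hypothetical data-quality checks a BI pipeline might run before
# publishing a dataset: null checks, uniqueness, and range validation.
def run_checks(rows):
    failures = []
    seen_ids = set()
    for i, row in enumerate(rows):
        if row.get("patient_id") is None:
            failures.append((i, "null patient_id"))
        elif row["patient_id"] in seen_ids:
            failures.append((i, "duplicate patient_id"))
        else:
            seen_ids.add(row["patient_id"])
        age = row.get("age")
        if age is not None and not (0 <= age <= 120):
            failures.append((i, "age out of range"))
    return failures

rows = [
    {"patient_id": 1, "age": 34},
    {"patient_id": 1, "age": 40},     # duplicate id
    {"patient_id": None, "age": 150}, # null id and out-of-range age
]
print(run_checks(rows))
```

A pipeline would typically fail or quarantine a batch when such checks return failures, which is what "ensure accuracy, consistency, and timely data availability" entails in practice.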

8. Support and implement data security and compliance requirements
Partner with operations and security teams to implement and maintain data security controls, access policies, encryption standards, and compliance requirements to safeguard sensitive and regulated data.

9. Monitor, troubleshoot, and enhance pipeline performance
Continuously monitor data workflows, resolve data processing issues, identify bottlenecks, and enhance performance across ETL/ELT processes, pipelines, and data integrations.

10. Gather requirements and document data workflows
Collaborate with business stakeholders to collect requirements for data pipelines, integrations, and reporting needs. Document data processes, transformation logic, workflow designs, and operational procedures for cross-team visibility and long-term maintainability.

11. Operate effectively both independently and within cross-functional teams
Demonstrate the ability to manage priorities, drive initiatives, and deliver high-quality solutions independently while also contributing collaboratively within cross-functional teams. Engage proactively with engineering, BI, security, operations, and business stakeholders to align on requirements, resolve issues, and deliver integrated data solutions. Communicate clearly, share knowledge, and support team objectives to ensure successful project outcomes and continuous improvement.

The Utah DCC offers a career ladder for Data Engineers and provides growth and professional development opportunities.

To learn more about the Utah DCC visit http://uofuhealth.org/UtahDCC

Learn more about the great benefits of working for University of Utah: benefits.utah.edu



Responsibilities
Data Engineer, III

Design, build, implement, and maintain data processing pipelines for the extraction, transformation, and loading (ETL) of data from a variety of data sources. Develop robust and scalable solutions that transform data into a useful format for analysis, enhance data flow, and enable end users to consume and analyze data faster and more easily. Write complex SQL queries to support analytics needs. Evaluate and recommend tools and technologies for data infrastructure and processing. Collaborate with engineers, data scientists, data analysts, product teams, and other stakeholders to translate business requirements into technical specifications and coded data pipelines. Work with tools, languages, data processing frameworks, and databases such as R, Python, SQL, MongoDB, Redis, Hadoop, Spark, Hive, Scala, Bigtable, Cassandra, Presto, and Storm. Work with structured and unstructured data from a variety of data stores, such as data lakes, relational database management systems, and/or data warehouses. Considered highly skilled and proficient in the discipline. Conducts complex, important work under minimal supervision and with wide latitude for independent judgment.
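As a small illustration of the "write complex SQL queries to support analytics needs" duty (hypothetical table and data, run here against an in-memory SQLite database from Python):

```python
import sqlite3

# Hypothetical analytics query: per-site enrollment counts,
# keeping only sites that meet a minimum threshold.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE enrollment (site TEXT, patient_id INTEGER)")
conn.executemany(
    "INSERT INTO enrollment VALUES (?, ?)",
    [("UT-01", 1), ("UT-01", 2), ("UT-02", 3), ("UT-01", 4)],
)
rows = conn.execute("""
    SELECT site, COUNT(*) AS n
    FROM enrollment
    GROUP BY site
    HAVING COUNT(*) >= 2
    ORDER BY n DESC
""").fetchall()
print(rows)  # [('UT-01', 3)]
```

Production work in this role would involve the same aggregation and filtering patterns at far larger scale, on platforms such as SQL Server, Redshift, or Spark SQL.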

Requires a bachelor's (or equivalency) + 6 years or a master's (or equivalency) + 4 years of directly related work experience.

This is a Career-Level position in the General Professional track.
Job Code: P34033
Grade: P21
Expected Pay Range: $64,122 to $124,278

Minimum Qualifications
EQUIVALENCY STATEMENT: 1 year of higher education can be substituted for 1 year of directly related work experience (Example: bachelor's degree = 4 years of directly related work experience).
Department may hire an employee at one of the following job levels:

Data Engineer, III: Requires a bachelor's (or equivalency) + 6 years or a master's (or equivalency) + 4 years of directly related work experience.



Preferences
Experience with cloud data services (AWS preferred: Glue, S3, EC2; bonus for Lambda, Athena, EMR). Familiarity with building and maintaining data pipelines and integrations in cloud environments.

Strong experience with Microsoft SQL Server and T-SQL. Proficiency in writing, optimizing, and troubleshooting complex queries, stored procedures, and database objects.

Development experience in Python for data engineering. Hands-on experience using Python libraries such as Pandas, PySpark, or Boto3 for data processing, automation, or integrations.

Experience with version control and CI/CD tools (Git, GitHub/GitLab, Jenkins, etc.). Ability to build and maintain automated deployment workflows for data pipelines.

Ability to read or understand Java is a plus. Helpful for working with legacy connectors, middleware, or JVM-based big data tools.

Experience with data visualization/reporting tools (Power BI, Tableau, SSRS). Ability to support analytics teams by preparing data structures suitable for reporting and dashboarding.

Understanding of data warehouse principles (star/snowflake schemas). Knowledge of how to structure data for analytics and reporting, even if the primary focus is pipeline engineering.

Working knowledge of database management, data integration patterns, and ETL/ELT frameworks. Comfortable working with relational, cloud, and distributed data platforms.

Strong analytical and problem-solving skills. Ability to diagnose data issues, performance bottlenecks, and pipeline failures.

Strong communication skills. Capable of explaining data concepts and pipeline logic to developers, analysts, and non-technical stakeholders.

Experience working in Agile environments. Proven ability to meet deadlines, prioritize tasks, and deliver high-quality solutions in iterative development cycles.

Applicants will be screened according to preferences.

Special Instructions


Requisition Number: PRN44413B
Full Time or Part Time? Full Time
Work Schedule Summary: Full-time, 40 hours per week, Monday through Friday from 8:00 am to 5:00 pm. This position offers a flexible, mostly remote work schedule for candidates who reside in the state of Utah. While most duties can be performed remotely, the employee must be available to attend essential meetings and events on campus as needed. A hybrid telework schedule is available for this position, dependent on operational needs and management approval. The arrangement will be established in partnership with the manager and is subject to ongoing departmental needs.
Department: 02228 - Data Coordinating Center
Location: Campus
Pay Rate Range: 81,983 to 124,278
Close Date: 6/5/2026
Open Until Filled:

To apply, visit https://utah.peopleadmin.com/postings/197372







Copyright 2025 Jobelephant.com Inc. All rights reserved.
