We need your help. We’re looking for a self-motivated and detail-oriented data engineer to join our DataOps team. This opportunity will enable you to contribute to the operation, support, and enhancement of our mission-critical data operations platform and the development of data pipelines. Our data operations platform supports high-volume, high-velocity data ingestion and curation for our existing, and rapidly expanding, health plan client base.
What you’ll do
As a data engineer, you will be responsible for the execution and management of inbound client and internal service-based data pipelines. This encompasses the development, operation, and management of our client data hubs, including data intake, data quality assessment, and data curation and enrichment processes. Our client data hubs consist of various health plan data sources that support Decision Point services, including our AI/ML platform, analytics platform, and OPUS application. If this interests you, read on.
- Design and develop scalable data integration (ETL/ELT) processes (including ingestion, cleansing, curation, unification, etc.)
- Automate the processing of inbound customer data feeds
- Design and develop tools to support data profiling and data quality methodologies
- Work with our data science team to assist with data preparation, enrichment, and feature engineering for AI/ML
- Engage with our software engineering team to ensure data points meet application specifications
- Provide periodic support to the customer success team
Skills & Experience
- BS/MS in Computer Science or Engineering, or equivalent experience
- 3+ years of experience with ETL/ELT and data pipeline principles
- 3+ years of SQL experience; Microsoft SQL Server or PostgreSQL preferred
- Knowledge of data manipulation methodologies
- Excellent verbal and written communication
- Strong data profiling skills; ability to discover and highlight unique patterns and trends within data to identify and solve complex problems
- Keen understanding of EDW and other database design principles
- Comfortable working with very large data sets and VLDB (very large database) environments
- Experience with CI/CD and version control tools; Git preferred
- Understanding of data science and machine learning concepts preferred
- Experience working within a hybrid cloud environment; AWS experience is a plus
- Familiarity with data visualization tools such as Tableau or QuickSight is a plus
- Familiarity with healthcare data is a plus
- Some familiarity with statistical software tools and libraries such as R and scikit-learn is a plus
A little more about Decision Point
We are a rapidly growing company in the healthcare market. This year, we’ll nearly double in size. Our products and support services advise health insurance and provider organizations on how to best target and engage their members so that members make better health decisions. The result? Our clients can identify their highest-risk members before they get sick and connect those members with the health and social support services they need to manage their conditions.
Our innovative approach to member and provider engagement strategy and orchestration leverages cutting-edge machine learning techniques to better inform clients on how to take action. Guided by our team of industry-recognized healthcare and technology experts, with backgrounds from NASA, Microsoft, E&Y, health plans, and other great organizations, we are putting healthcare data to good use.