
Job Details

GenAI Data Automation Engineer

2026-03-04 | equiliem | All cities, AK
Description:

100% remote (quarterly travel to Gaithersburg, MD for team activities)
Ability to obtain Public Trust clearance required

26-01995

Job Summary
The GenAI Data Automation Engineer designs and implements AI-driven automation solutions across AWS and Azure hybrid cloud environments. This role is responsible for building scalable data pipelines, integrating enterprise systems, and leveraging Generative AI frameworks to support mission-critical analytics, reporting, and customer engagement platforms. The position requires strong expertise in data engineering, cloud services, and LLM-based automation, with a focus on performance, security, and compliance in regulated environments.

Job Responsibilities

  • Design, develop, and maintain data pipelines using AWS services including S3, RDS/SQL Server, Glue, Lambda, EMR, DynamoDB, and Step Functions.
  • Build ETL/ELT processes to move data across systems, including DynamoDB to SQL Server and AWS to Azure SQL integrations.
  • Integrate contact center and CRM data into enterprise data platforms for analytics and operational reporting.
  • Engineer and enhance real-time and batch ingestion pipelines using Apache Spark, Flume, and Kafka, delivering data to Solr and OpenSearch platforms.
  • Leverage Generative AI frameworks such as Amazon Bedrock, Amazon Q, Azure OpenAI, Hugging Face, and LangChain to:
    • Automate vector generation and embedding from unstructured data.
    • Implement automated data quality checks, metadata tagging, and lineage tracking.
    • Enhance ETL processes with LLM-assisted transformations and anomaly detection.
    • Build conversational business intelligence interfaces for natural language querying of structured and indexed data.
    • Develop AI-enabled copilots for pipeline monitoring and automated troubleshooting.
  • Optimize SQL Server performance through stored procedures, indexing strategies, query tuning, and execution plan analysis.
  • Implement CI/CD pipelines using GitHub, Jenkins, or Azure DevOps to support data and AI solution deployment.
  • Ensure security and compliance using IAM, encryption, VPC configurations, role-based access controls, and firewall policies.
  • Participate in Agile DevOps processes and deliver sprint-based enhancements to data and AI solutions.
Job Requirements
  • Bachelor's degree in Computer Science or a related field.
  • Minimum of 2 years of experience in data engineering and automation.
  • Hands-on experience with LLM and Generative AI frameworks, including Amazon Bedrock, Azure OpenAI, or open-source platforms.
  • Proficiency in SQL, SSIS, Python, Spark, Bash, PowerShell, and AWS/Azure CLI tools.
  • Experience with AWS services such as S3, RDS/SQL Server, Glue, Lambda, EMR, and DynamoDB.
  • Familiarity with Apache Flume, Kafka, and Solr for large-scale ingestion and search.
  • Experience integrating REST APIs into data pipelines and workflows.
  • Knowledge of SDLC and CI/CD tools such as JIRA, GitHub, Azure DevOps, and Jenkins.
  • Strong troubleshooting and performance optimization skills in SQL, Spark, or related technologies.
  • Experience operationalizing Generative AI pipelines, including deployment, monitoring, retraining, and lifecycle management.
  • Strong written and verbal communication skills.
  • US citizenship and the ability to obtain a Public Trust clearance.

