Hollywood, Florida, United States Hybrid / WFH Options
INSPYR Solutions
in a hybrid role (60% administration, 40% development/support) to help us scale our data and DataOps infrastructure. You'll work with cutting-edge technologies like Databricks, Apache Spark, Delta Lake, and AWS CloudOps and Cloud Security, while supporting mission-critical data pipelines and integrations. If you're a hands-on engineer with strong Python skills, deep … knack for solving complex data challenges, we want to hear from you. Key Responsibilities: Design, develop, and maintain scalable ETL pipelines and integration frameworks. Administer and optimize Databricks and Apache Spark environments for data engineering workloads. Build and manage data workflows using AWS services such as Lambda, Glue, Redshift, SageMaker, and S3. Support and troubleshoot DataOps pipelines, ensuring … years of experience in integration framework development with a strong emphasis on Databricks, AWS, and ETL. Required Technical Skills: Strong programming skills in Python and PySpark. Expertise in Databricks, Apache Spark, and Delta Lake. Proficiency in AWS CloudOps and Cloud Security, including configuration, deployment, and monitoring. Strong SQL skills and hands-on experience with Amazon Redshift. Experience with ETL …
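The ETL responsibilities described above follow the classic extract-transform-load shape. A minimal, framework-free sketch of that pattern (the function names and record fields are hypothetical; in the role itself, extract and load would target S3, Delta Lake, or Redshift rather than in-memory lists):

```python
# Minimal, framework-free sketch of the extract-transform-load pattern.
# All names (extract, transform, load) and record fields are hypothetical;
# in practice the endpoints would be S3, Delta Lake, or Redshift.
from typing import Iterable, Iterator


def extract() -> Iterator[dict]:
    # Stand-in for reading raw records from S3 or a source table.
    yield from [
        {"id": 1, "amount": "10.50", "currency": "usd"},
        {"id": 2, "amount": "3.00", "currency": "USD"},
    ]


def transform(rows: Iterable[dict]) -> Iterator[dict]:
    # Normalize types and casing so downstream joins behave predictably.
    for row in rows:
        yield {
            "id": row["id"],
            "amount": float(row["amount"]),
            "currency": row["currency"].upper(),
        }


def load(rows: Iterable[dict]) -> list:
    # Stand-in for a write to Redshift or a Delta table.
    return list(rows)


result = load(transform(extract()))
print(result[0])  # {'id': 1, 'amount': 10.5, 'currency': 'USD'}
```

Because each stage is a plain function over iterables, the same shape carries over to PySpark DataFrames or Glue jobs, where each stage becomes a DataFrame transformation instead.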
City of London, London, United Kingdom Hybrid / WFH Options
Tenth Revolution Group
Be Doing You'll be a key contributor to the development of a next-generation data platform, with responsibilities including: Designing and implementing scalable data pipelines using Python and Apache Spark. Building and orchestrating workflows using AWS services such as Glue, Lambda, S3, and EMR Serverless. Applying best practices in software engineering: CI/CD, version control, automated … testing, and modular design. Supporting the development of a lakehouse architecture using Apache Iceberg. Collaborating with product and business teams to deliver data-driven solutions. Embedding observability and quality checks into data workflows. Participating in code reviews, pair programming, and architectural discussions. Gaining domain knowledge in financial data and sharing insights with the team. What They're Looking For … for experience with type hints, linters, and testing frameworks like pytest) Solid understanding of data engineering fundamentals: ETL/ELT, schema evolution, batch processing. Experience or strong interest in Apache Spark for distributed data processing. Familiarity with AWS data tools (e.g., S3, Glue, Lambda, EMR). Strong communication skills and a collaborative mindset. Comfortable working in Agile environments and …
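This posting's emphasis on type hints and pytest-style testing pairs naturally with its mention of schema evolution. A small illustrative sketch of a typed parser that tolerates a column added in a later schema version (the `Trade` fields are invented for the example, not taken from the role):

```python
# Sketch of a typed record parser that tolerates schema evolution (an optional
# column added in a later schema version). The Trade fields are invented for
# the example, not taken from the role.
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Trade:
    trade_id: str
    notional: float
    desk: Optional[str] = None  # column that did not exist in early data


def parse_trade(raw: dict) -> Trade:
    # .get() lets records written before the 'desk' column existed still parse.
    return Trade(
        trade_id=str(raw["trade_id"]),
        notional=float(raw["notional"]),
        desk=raw.get("desk"),
    )


# pytest-style check (runs under pytest, or as a plain script via the call below):
def test_parse_handles_old_and_new_schema():
    old = parse_trade({"trade_id": "T1", "notional": "100"})
    new = parse_trade({"trade_id": "T2", "notional": "250.5", "desk": "rates"})
    assert old.desk is None and old.notional == 100.0
    assert new.desk == "rates"


test_parse_handles_old_and_new_schema()
```

The same defaulted-field idea is what table formats like Apache Iceberg formalize: new optional columns read as null for old data files.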
experience in a leadership or technical lead role, with official line management responsibility. Strong experience with modern data stack technologies, including Python, Snowflake, AWS (S3, EC2, Terraform), Airflow, dbt, Apache Spark, Apache Iceberg, and Postgres. Skilled in balancing technical excellence with business priorities in a fast-paced environment. Strong communication and stakeholder management skills, able to translate …
Chantilly, Virginia, United States Hybrid / WFH Options
The DarkStar Group
rather huge and includes Python (Pandas, numpy, scipy, scikit-learn, standard libraries, etc.), Python packages that wrap Machine Learning (packages for NLP, Object Detection, etc.), Linux, AWS/C2S, Apache NiFi, Spark, PySpark, Hadoop, Kafka, Elasticsearch, Solr, Kibana, Neo4j, MariaDB, Postgres, Docker, Puppet, and many others. Work on this program takes place in Chantilly, VA, McLean, VA and …/standards. Develop and deliver documentation for each project, including ETL mappings, code use guide, code location, and access instructions. Design and optimize data pipelines using tools such as Spark, Apache Iceberg, Trino, OpenSearch, EMR cloud services, NiFi, and Kubernetes containers. Ensure the pedigree and provenance of the data is maintained such that the access to data is … years' experience with: Data lifecycle engineering. Development and maintenance of extract, transform, and load (ETL) tools and services. Cloud and on-prem data storage and processing solutions. Python, SQL, Spark, and other data engineering programming. COTS and open-source data engineering tools such as Elasticsearch and NiFi. Processing data within the Agile lifecycle. Desired Skills (Optional): Experience using AI …
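The requirement that "the pedigree and provenance of the data is maintained" can be illustrated with a minimal wrapper that stamps each record with its source, pipeline stage, content hash, and timestamp. All names here are hypothetical; a real pipeline would attach these as NiFi flowfile attributes or extra columns in a Spark DataFrame:

```python
# Sketch of stamping each record with provenance metadata (source, stage,
# content hash, timestamp) so pedigree survives each pipeline stage.
# Names are hypothetical; a real pipeline would carry these as NiFi flowfile
# attributes or DataFrame columns.
import hashlib
import json
from datetime import datetime, timezone


def with_provenance(record: dict, source: str, stage: str) -> dict:
    # Canonical JSON (sorted keys) so the same content always hashes the same.
    payload = json.dumps(record, sort_keys=True).encode()
    return {
        "data": record,
        "provenance": {
            "source": source,
            "stage": stage,
            "content_sha256": hashlib.sha256(payload).hexdigest(),
            "processed_at": datetime.now(timezone.utc).isoformat(),
        },
    }


rec = with_provenance({"sensor": "a1", "value": 42},
                      source="kafka://telemetry", stage="normalize")
print(rec["provenance"]["source"])  # kafka://telemetry
```

The content hash lets a downstream consumer verify a record was not altered between stages, which is the auditability property the listing is asking for.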
Reston, Virginia, United States Hybrid / WFH Options
ICF
engineering, data security practices, data platforms, and analytics. 3+ years Databricks Platform Expertise (SME-level proficiency), including: Databricks, Delta Lake, and Delta Sharing. Deep experience with distributed computing using Apache Spark. Knowledge of Spark runtime internals and optimization. Ability to design and deploy performant end-to-end data architectures. 4+ years of ETL pipeline development building robust … U.S. for three (3) full years out of the last five (5) years. Technologies you'll use: Databricks on Azure for data engineering and ML pipeline support. SQL, Python, Spark, Tableau. Git, Jira, CI/CD tools (e.g., Jenkins, CodeBuild). Confluence, SharePoint. Mural, Miro, or other collaboration/whiteboarding tools. What we'd like you to have … or similar. Machine Learning and Analytical Skills, including: MLOps (working knowledge of ML deployment and operations). Data Science Methodologies (statistical analysis, modeling, and interpretation). Big Data Technologies (experience beyond Spark with distributed systems). Experience with deployment pipelines, including Git-based version control, CI/CD pipelines, and DevOps practices using Terraform for IaC. Emergency management domain knowledge a …
Chantilly, Virginia, United States Hybrid / WFH Options
Rackner
software engineering (backend, API, or full-stack). Proficient in Python, Java, or C#. Experienced with REST APIs (FastAPI, AWS Lambda) and OpenAPI specifications. Skilled in data pipeline orchestration (dbt, Apache Airflow, Apache Spark, Iceberg). Knowledgeable in federal compliance frameworks (NIST 800-53, HIPAA, FISMA High). Preferred/Bonus: Prior work with DHA, VA, or federal healthcare IT …
Washington, Washington DC, United States Hybrid / WFH Options
Equiliem
data solutions. Design and implement data models and pipelines for relational, dimensional, data lakehouse, data warehouse, and data mart environments. Utilize Azure services including Azure Data Factory, Synapse Pipelines, Apache Spark notebooks, Python, and SQL to build and optimize pipelines. Redevelop existing SSIS ETL processes using Azure Data Factory and Synapse Pipelines. Prepare and manage data for analytics …
Reston, Virginia, United States Hybrid / WFH Options
ICF
governance best practices. Demonstrated experience showing strong critical thinking and problem-solving skills, paired with a desire to take initiative. Experience working with big data processing frameworks such as Apache Spark, and streaming platforms like Kafka or AWS Kinesis. AWS certification (Data Analytics, Developer, or Solutions Architect) is a plus. Experience with event-driven architectures and real-time …
Coventry, West Midlands, United Kingdom Hybrid / WFH Options
Coventry Building Society
Experience with tools like AWS (S3, Glue, Redshift, SageMaker) or other cloud platforms. Familiarity with Docker, Terraform, GitHub Actions, and Vault for managing secrets. Experience in coding SQL, Python, Spark, or Scala to work with data. Experience with databases used in Data Warehousing, Data Lakes, and Lakehouse setups. You know how to work with both structured and unstructured data.
Manchester, Lancashire, United Kingdom Hybrid / WFH Options
CHEP UK Ltd
plus work experience; BS & 5+ years of work experience; MS & 4+ years of work experience. Proficient with machine learning and statistics. Proficient with Python, deep learning frameworks, computer vision, and Spark. Have produced production-level algorithms. Proficient in researching, developing, and synthesizing new algorithms and techniques. Excellent communication skills. Desirable Qualifications: Master's or PhD-level degree; 5+ years of work …
on experience with cloud data platforms such as Snowflake, Redshift, Athena, or BigQuery, including optimization techniques and custom parsers/transpilers. Practical knowledge of distributed and analytical engines (e.g., Apache Spark, Trino, PostgreSQL, DuckDB), with skills in query engines, performance tuning, and integration in local and production environments. Experience building developer tooling such as CLI tools, SDKs, and …
platform. Candidate Profile: Proven experience as a Data Engineer, with strong expertise in designing and managing large-scale data systems. Hands-on proficiency with modern data technologies such as Spark, Kafka, Airflow, or dbt. Strong SQL skills and experience with cloud platforms (Azure preferred). Solid programming background in Python, Scala, or Java. Knowledge of data warehousing solutions (e.g. …
Sheffield, South Yorkshire, England, United Kingdom Hybrid / WFH Options
Vivedia Ltd
/ELT pipelines, data modeling, and data warehousing. Experience with cloud platforms (AWS, Azure, GCP) and tools like Snowflake, Databricks, or BigQuery. Familiarity with streaming technologies (Kafka, Spark Streaming, Flink) is a plus. Tools & Frameworks: Airflow, dbt, Prefect, CI/CD pipelines, Terraform. Mindset: Curious, data-obsessed, and driven to create meaningful business impact. Soft Skills: Excellent …
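The streaming technologies this listing names (Kafka, Spark Streaming, Flink) all revolve around windowed aggregation over event streams. A toy tumbling-window count in plain Python shows the core idea; the event shape and the 60-second window are illustrative assumptions:

```python
# Toy tumbling-window count: the core idea behind Kafka/Flink-style windowed
# aggregation, reduced to the standard library. Event shape and the 60-second
# window are illustrative assumptions.
from collections import defaultdict


def tumbling_window_counts(events, window_seconds=60):
    """events: iterable of (epoch_seconds, key) -> {(window_start, key): count}"""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_seconds)  # floor to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)


events = [(0, "click"), (30, "click"), (65, "click"), (70, "view")]
print(tumbling_window_counts(events))
# {(0, 'click'): 2, (60, 'click'): 1, (60, 'view'): 1}
```

Real engines add what this sketch omits: out-of-order events, watermarks, and fault-tolerant state, which is where frameworks like Flink or Spark Structured Streaming earn their keep.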
Washington, Washington DC, United States Hybrid / WFH Options
Neuma Consulting LLC
related ML libraries. Experience with Docker, Kubernetes, and cloud AI platforms such as AWS Bedrock, SageMaker, Azure ML, or GCP Vertex AI. Working knowledge of data tools such as Spark, Pandas, SQL/NoSQL databases. Security: TS clearance required. Nice to Have: Experience with LangChain, hybrid retrieval orchestration frameworks, or custom AI agent architectures. Experience implementing custom authentication/…
Arlington, Virginia, United States Hybrid / WFH Options
Amazon
Science, Engineering, related field, or equivalent experience. 3+ years of experience with data warehouse architecture, ETL/ELT tools, data engineering, and large-scale data manipulation using technologies like Spark, EMR, Hive, Kafka, and Redshift. Experience with relational databases, SQL, and performance tuning, as well as software engineering best practices for the development lifecycle, including coding standards, reviews, source …
San Antonio, Texas, United States Hybrid / WFH Options
Wyetech, LLC
Agile software development methodologies and use of standard software development tool suites. Desired Technical Skills: Security+ certification is highly desired. Experience with big data technologies like Hadoop, Accumulo, Ceph, Spark, NiFi, Kafka, PostgreSQL, Elasticsearch, Hive, Drill, Impala, Trino, Presto, etc. Experience with containers, EKS, Diode, CI/CD, and Terraform is a plus. Work could possibly require some on …
City of London, London, United Kingdom Hybrid / WFH Options
Experis
Excellent problem-solving skills and ability to work independently in a fast-paced environment. Desirable: Experience with NLP, computer vision, or time-series forecasting. Familiarity with distributed computing frameworks (Spark, Ray). Experience with MLOps and model governance practices. Previous contract experience in a similar ML engineering role. Contract Details: Duration: 6–12 months (extension possible). Location: London (Hybrid …
and delivering end-to-end AI/ML projects. Nice to Have: Exposure to LLMs (Large Language Models), generative AI, or transformer architectures. Experience with data engineering tools (Spark, Airflow, Snowflake). Prior experience in fintech, healthtech, or similar domains is a plus.