Wakefield, Yorkshire, United Kingdom Hybrid / WFH Options
Flippa.com
CI/CD automation, rigorous code reviews, documentation as communication. Preferred Qualifications: Familiarity with data manipulation and experience with Python libraries such as Flask, FastAPI, Pandas, PySpark, and PyTorch, to name a few. Proficiency in statistics and/or machine learning libraries such as NumPy, Matplotlib, seaborn, scikit-learn, etc. Experience in building …
storage, data pipelines to ingest and transform data, and querying & reporting of analytical data. You've worked with technologies such as Python, Spark, SQL, PySpark, Power BI, etc. You're a problem-solver, pragmatically exploring options and finding effective solutions. An understanding of how to design and build well-structured …
Proven experience of ETL/ELT, including Lakehouse, pipeline design, and batch/stream processing. Strong working knowledge of programming languages, including Python, SQL, PowerShell, PySpark, and Spark SQL. Good working knowledge of data warehouse and data mart architectures. Good experience in Data Governance, including Unity Catalog, Metadata Management, and Data Lineage …
code development practices. Knowledge of Apache Spark and similar frameworks for streaming data. Experience with some of the following Python libraries: NumPy, Pandas, PySpark, Dask, Apache Airflow, Luigi, SQLAlchemy, Great Expectations, petl, Boto3, Matplotlib, dbutils, koalas, OpenPyXL, XlsxWriter. Experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK …
and support architectural decisions as a recognised Databricks expert. Essential Skills & Experience: Demonstrable expertise with Databricks and Apache Spark in production environments. Proficiency in PySpark, SQL, and working within one or more cloud platforms (Azure, AWS, or GCP). In-depth understanding of Lakehouse concepts, medallion architecture, and modern …
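The medallion architecture referenced above refines data through successive bronze (raw), silver (cleaned), and gold (business-level) layers. The sketch below is illustrative only, using pandas DataFrames in place of Spark/Delta tables and entirely hypothetical column names; a production Databricks pipeline would do the same steps in PySpark against Delta Lake tables.

```python
import pandas as pd

# Bronze layer: raw ingested records kept as-is (hypothetical schema,
# note the duplicate order and the missing amount).
bronze = pd.DataFrame({
    "order_id": [1, 2, 2, 3],
    "amount": ["10.5", "20.0", "20.0", None],
    "region": ["uk", "us", "us", "uk"],
})

# Silver layer: conformed data - deduplicate, drop bad rows, cast types.
silver = (
    bronze.drop_duplicates(subset="order_id")
          .dropna(subset=["amount"])
          .assign(amount=lambda df: df["amount"].astype(float))
)

# Gold layer: reporting-ready aggregate (revenue per region).
gold = silver.groupby("region", as_index=False)["amount"].sum()
```

In Spark the layer boundaries would typically be persisted Delta tables (e.g. written with `df.write.format("delta")`), so each layer can be queried and audited independently.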
of experience in Data Engineering, with a focus on cloud platforms (Azure, AWS, GCP). You have a proven track record working with Databricks (PySpark, SQL, Delta Lake, Unity Catalog). You have extensive experience in ETL/ELT development and data pipeline orchestration (Databricks Workflows, DLT, Airflow, ADF …
Azure or AWS) Extensive experience in DBA, schema design & dimensional modelling, and SQL optimisation. Programming experience in Python or other languages. Working proficiency with PySpark (Databricks platform preferred). Excellent written and verbal communication skills, with the ability to collaborate effectively with cross-functional teams. Understanding of good engineering practices …
experience in a Data Engineering role: Passion for data and industry best practices in a dynamic environment. Proficiency in technologies such as Spark/PySpark, Azure Data services, Python or Scala, SQL, testing frameworks, open table formats, CI/CD workflows, and cloud infrastructure management. Excellent communication, analytical, and …
platforms for your clients. Work with us to use big data for good. Qualifications You Have: 3+ years of experience using Python, SQL, and PySpark 3+ years of experience utilizing Databricks or Apache Spark Experience designing and maintaining Data Lakes or Data Lakehouses Experience with big data tools such …
Delta Lake/Databricks), PL/SQL, Java/J2EE, React, CI/CD pipelines, and release management. Strong experience in Python, Scala/PySpark, and Perl/scripting. Experience as a Data Engineer for Cloud Data Lake activities, especially in high-volume data processing frameworks and ETL development using distributed …
Leeds, Yorkshire, United Kingdom Hybrid / WFH Options
Low Carbon Contracts Company
in a highly numerate subject is essential. Minimum 2 years' experience in Python development, including scientific computing and data science libraries (NumPy, pandas, SciPy, PySpark). Solid understanding of object-oriented software engineering design principles for usability, maintainability and extensibility. Experience working with Git in a version-controlled environment. Good …
Greater Bristol Area, United Kingdom Hybrid / WFH Options
ADLIB Recruitment | B Corp™
experience as a Senior Data Engineer, with some experience mentoring others Excellent Python and SQL skills, with hands-on experience building pipelines in Spark (PySpark preferred) Experience with cloud platforms (AWS/Azure) Solid understanding of data architecture, modelling, and ETL/ELT pipelines Experience using tools like Databricks …
Coalville, Leicestershire, East Midlands, United Kingdom Hybrid / WFH Options
Ibstock PLC
Knowledge, Skills and Experience: Essential: Strong expertise in Databricks and Apache Spark for data engineering and analytics. Proficient in SQL and Python/PySpark for data transformation and analysis. Experience in data lakehouse development and Delta Lake optimisation. Experience with ETL/ELT processes for integrating diverse data …
to Octopus offices across Europe and the US. Our Data Stack: SQL-based pipelines built with dbt on Databricks. Analysis via Python Jupyter notebooks. PySpark in Databricks workflows for heavy lifting. Streamlit and Python for dashboarding. Airflow DAGs with Python for ETL running on Kubernetes and Docker. Django for …
a highly numerate subject is essential. At least 2 years of Python development experience, including scientific computing and data science libraries (NumPy, pandas, SciPy, PySpark). Strong understanding of object-oriented design principles for usability and maintainability. Experience with Git in a version-controlled environment. Knowledge of parallel computing techniques …
Herndon, Virginia, United States Hybrid / WFH Options
The DarkStar Group
scikit-learn, standard libraries, etc.), Python packages that wrap Machine Learning (packages for NLP, Object Detection, etc.), Linux, AWS/C2S, Apache NiFi, Spark, PySpark, Hadoop, Kafka, Elasticsearch, Solr, Kibana, Neo4j, MariaDB, Postgres, Docker, Puppet, and many others. Work on this program takes place in Chantilly, VA, McLean, VA …
added flexibility for diverse migration and integration projects. Prior experience with tools such as MuleSoft, Boomi, Informatica, Talend, SSIS, or custom scripting languages (Python, PySpark, SQL) for data extraction and transformation. Prior experience with data warehousing and data modelling (star schema or snowflake schema). Skilled in security frameworks …
Newcastle upon Tyne, Tyne & Wear Hybrid / WFH Options
Client Server
AAB grades at A-level. You have commercial Data Engineering experience working with technologies such as SQL, Apache Spark and Python, including PySpark and Pandas. You have a good understanding of modern data engineering best practices. Ideally you will also have experience with Azure and Databricks …
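The PySpark-and-Pandas combination above usually comes down to the same shape of work: read, group, aggregate. A minimal Pandas sketch of that pattern is shown below, with entirely hypothetical column names; the equivalent PySpark call is noted in a comment.

```python
import pandas as pd

# Hypothetical raw events - in PySpark this might come from
# spark.read.parquet(...) rather than an in-memory DataFrame.
events = pd.DataFrame({
    "user": ["a", "a", "b"],
    "clicks": [3, 2, 7],
})

# Total clicks per user. The PySpark equivalent would be roughly:
#   events.groupBy("user").agg(F.sum("clicks").alias("clicks"))
totals = events.groupby("user", as_index=False)["clicks"].sum()
```

The Pandas version materialises everything in memory, which is why listings like these pair it with Spark: the same logic scales out when expressed against a Spark DataFrame.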
Falls Church, Virginia, United States Hybrid / WFH Options
Epsilon Inc
IAT Level II Certification may be required (GSEC, GICSP, CND, CySA+, Security+ CE, SSCP or CCNA-Security). Advanced proficiency in Python, SQL, PySpark, MLOps, computer vision, and NLP; Databricks, AWS, Data Connections Cluster Training, and tools such as GitLab, with proven success in architecting complex AI/ML …
Edinburgh, Scotland, United Kingdom Hybrid / WFH Options
Peaple Talent
Strong experience designing and delivering data solutions in Databricks. Proficient with SQL and Python. Experience using Big Data technologies such as Apache Spark or PySpark. Great communication skills, engaging effectively with senior stakeholders. Nice to haves: Azure/AWS Data Engineering certifications; Databricks certifications. What's in it for …