• Strong experience designing and delivering data solutions in Databricks
• Proficient with SQL and Python
• Experience using Big Data technologies such as Apache Spark or PySpark
• Great communication skills, engaging effectively with senior stakeholders
Nice to haves:
• Azure/AWS Data Engineering certifications
• Databricks certifications
What's in it for …
London, South East England, United Kingdom Hybrid / WFH Options
Peaple Talent
… the work you deliver. Furthermore, you have experience in:
• working with AWS
• developing applications in a Kubernetes environment
• developing batch jobs in Apache Spark (PySpark or Scala) and scheduling them in an Airflow environment (a sketch of this pattern follows after this list)
• developing streaming applications for Apache Kafka in Python or Scala
• working with CI/CD …
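For illustration only, a minimal sketch of the batch-plus-orchestration pattern this listing describes: a PySpark cleaning job wrapped in an Airflow DAG. The DAG id, S3 paths, and column names are hypothetical, and the `schedule` argument assumes Airflow 2.4+.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def clean_events():
    # Import inside the task so the Airflow scheduler itself does not need PySpark.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("clean_events").getOrCreate()
    df = spark.read.parquet("s3://example-bucket/raw/events/")  # hypothetical path
    (
        df.dropDuplicates(["event_id"])                # hypothetical key column
        .withColumn("event_date", F.to_date("event_ts"))
        .write.mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-bucket/clean/events/")
    )
    spark.stop()


with DAG(
    dag_id="clean_events_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="clean_events", python_callable=clean_events)
```

In practice a job like this would more often be submitted via `SparkSubmitOperator` or a managed-cluster operator rather than run in-process, but the overall shape is the same.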
… Engineering. Develop customer relationships and build internal partnerships with account executives and teams. Prior experience with coding in a core programming language (e.g., Python, PySpark, or SQL) and willingness to learn a base level of Spark. Proficient with Big Data Analytics technologies, including hands-on expertise with complex proofs …
… propositions. Develop customer relationships and build internal partnerships with account executives and teams. Prior experience with coding in a core programming language (e.g., Python, PySpark, or SQL) and willingness to learn a base level of Spark. Hands-on expertise with complex proofs-of-concept and public cloud platforms (AWS …
…/Experience: The ideal candidate will have the following:
• Proven experience leading data engineering projects, especially cloud migration initiatives.
• Hands-on experience with Python, PySpark, SQL, and Scala.
• Deep knowledge of modern data architecture, data modeling, and ELT/ETL practices.
• Strong grasp of data security, governance, and compliance …
… a bachelor's or university degree, or the equivalent through experience; a postgraduate degree is an asset.
• Strong programming skills, particularly in languages such as Python (PySpark, Pandas) and SQL;
• You have knowledge of big data principles (such as distributed processing);
• You have an understanding of data management principles, data architecture design …
… environment with one or more modern programming languages and database querying languages
• Proficiency in coding in one or more languages such as Java, Python, or PySpark
• Experience in cloud implementation with AWS Data Services: Glue ETL (or EMR), S3, Glue Catalog, Athena, Lambda, Step Functions, EventBridge, ECS, Data De… (a Glue job sketch follows below)
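As a rough illustration of the Glue side of that AWS stack, here is a hypothetical Glue ETL job skeleton in PySpark; the catalog database, table, column, and bucket names are invented for the example.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job boilerplate: resolve the job name and set up contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table registered in the Glue Data Catalog (hypothetical names).
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="orders"
)

# Convert to a Spark DataFrame for ordinary PySpark transformations.
df = dyf.toDF().filter("order_status = 'COMPLETE'")  # hypothetical column

# Write curated output back to S3 (hypothetical bucket).
df.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

job.commit()
```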
… and ensuring more reliable reporting and analytics with data marts. Expertise in data orchestration and automation tools such as Apache Airflow, Python, and PySpark, supporting end-to-end ETL workflows. Experience in deployment activities. …
… continuous improvement
Core skills/experience:
• 5+ years in data engineering, analytics, and cloud platforms
• Strong in Palantir Foundry, Snowflake, AWS
• Skilled in Python (PySpark), SQL, R, REST APIs, ETL
• Experience with Spark, Kafka, and distributed systems
• Knowledge of AI/ML, NLP/LLMs, predictive modeling, statistics
• Background …
… working with Databricks in a production environment.
• Strong background in ETL/ELT pipeline design and implementation within Databricks.
• Proficiency in SQL, Python, and PySpark for data processing and analysis.
• Experience with streaming technologies such as Kafka for real-time data processing (a minimal sketch follows after this list).
• Experience in data migration and integration for …
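A minimal sketch, assuming a Databricks-style environment, of the Kafka-to-Delta streaming pattern this listing points at, using PySpark Structured Streaming. The broker address, topic name, and paths are placeholders, and the `kafka` source requires the spark-sql-kafka connector on the classpath.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka_ingest").getOrCreate()

# Subscribe to a Kafka topic (placeholder broker and topic).
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast to strings for downstream parsing.
parsed = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

# Land the stream in a Delta table, with a checkpoint for recovery.
query = (
    parsed.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/orders")  # placeholder
    .outputMode("append")
    .start("/tmp/delta/orders")  # placeholder
)
query.awaitTermination()
```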
• Strong knowledge of relational databases and experience working with flat files.
• Familiarity with SQL-like query languages for data manipulation and analysis.
• Experience with PySpark or Java Spark (preferred) for distributed data processing.
• Strong problem-solving skills and the ability to optimize processes for efficiency and scalability.
• Excellent communication skills …
City of London, London, United Kingdom Hybrid / WFH Options
Ikhoi Recruitment
… the Canary Wharf office, 3 days WFH, pension, life assurance, season ticket loan and lots more. You must have experience with Python/PySpark, Azure, and managing a small team of mid-level/senior Data Engineers. As part of the Technology Hub within the client, the Data Engineer …
… of experience in data engineering, focusing on ETL & integration
• 3+ years leading data engineering teams, with Agile & DevOps expertise
• Proficiency in Python, SQL, or PySpark
• Experience with the Azure Data stack & Databricks
• Strong leadership, problem-solving, and strategic thinking skills
• Excellent communication & teamwork abilities
If you're ready to make …
… and mitigate risks, issues, or control weaknesses in your daily work.
What we're looking for:
• Strong experience with dbt
• Proficiency in SQL, Python, PySpark, or other relevant data processing languages; familiarity with cloud platforms like AWS, GCP, or Azure is desirable
• Excellent problem-solving skills and attention to …
… positive influence on Mesh-AI, our customers, and your team.
Nice to have:
• NLP, LLMs, GenAI, time series forecasting, image recognition, or deep learning.
• PySpark, OpenCV, spaCy, or DVC.
• Exposure to MLOps.
Why Mesh-AI:
• Fast-growing start-up organisation with huge opportunity for career growth.
• Highly competitive salary …
Bristol, Gloucestershire, United Kingdom Hybrid / WFH Options
Snap Analytics
… cloud-based architectures on AWS, Azure, or GCP
• A strong background in Databricks for building enterprise data warehouses, with strong knowledge of SQL and PySpark
• A deep understanding of data modelling and advanced pipeline development
• Knowledge and understanding of best-practice CI/CD approaches utilising Git
Strategic Vision …
… libraries; a Gen-AI skillset is good to have.
• Good knowledge of and working experience with Unix commands, shell scripts, etc.
• Good to have experience with PySpark, Hadoop, and Hive as well
• Expertise in software engineering principles such as design patterns, code design, testing, and documentation
• Writing effective and scalable code …
… a modern tech stack including SQL, Python, Airflow, Kubernetes, and various other cutting-edge technologies. You'll work with tools like dbt on Databricks, PySpark, Streamlit, and Django (a small dbt-on-Databricks sketch follows below), ensuring robust data infrastructure that powers business-critical operations. What makes this role particularly exciting is the combination of technical depth …
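To make "dbt on Databricks" concrete, here is a hypothetical dbt Python model (dbt supports Python models on Databricks, where `dbt.ref()` returns a PySpark DataFrame); the upstream model name `stg_orders` and its columns are assumptions.

```python
# models/orders_by_customer.py - a hypothetical dbt Python model.
import pyspark.sql.functions as F


def model(dbt, session):
    # Materialize the result as a table in the warehouse.
    dbt.config(materialized="table")

    orders = dbt.ref("stg_orders")  # assumed upstream staging model

    # Aggregate order counts and total value per customer (assumed columns).
    return orders.groupBy("customer_id").agg(
        F.count("*").alias("order_count"),
        F.sum("order_value").alias("total_value"),
    )
```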
… experience (at minimum) working with modern relational databases and/or distributed Big Data computing platforms and their query interfaces, such as SQL, Spark, PySpark, and Hive. Academic experience (at minimum) using visualization techniques for presenting data and analysis as dashboards, in tools such as R/Shiny, ggplot …
… influence.
• A drive to learn new technologies and techniques.
• Experience in or aptitude for research, and openness to learning new technologies.
• Experience with Azure, Spark (PySpark), and Kubeflow (desirable).
We pay competitive salaries based on candidates' experience. Along with this, you will be entitled to an award-winning …
… years of experience in Python
• 2+ years of experience with AWS
• 3+ years of experience with shell scripting
• 2+ years of experience with Spark (PySpark, Scala)
At this time, Capital One will not sponsor a new applicant for employment authorization, or offer any immigration-related support for this position. …
… languages, e.g. Python, R, Scala, etc. (Python preferred).
• Proficiency in database technologies (e.g. SQL, ETL, NoSQL, DW) and Big Data technologies (e.g. PySpark, Hive).
• Experienced working with structured and unstructured data, e.g. text, PDFs, JPEGs, call recordings, video, etc.
• Knowledge of machine learning modelling techniques and …