Databricks. Solid understanding of ETL processes, data modeling, and data warehousing. Familiarity with SQL and relational databases. Knowledge of big data technologies, such as Spark, Hadoop, or Kafka, is a plus. Strong problem-solving skills and the ability to work in a collaborative team environment. Excellent verbal and written …
e.g., Refinitiv, Bloomberg). Data Platforms: Warehouses: Snowflake, Google BigQuery, or Amazon Redshift. Analytics: Tableau, Power BI, or Looker for client reporting. Big Data: Apache Spark or Hadoop for large-scale processing. AI/ML: TensorFlow or Databricks for predictive analytics. Integration Technologies: API Management: Apigee, AWS API …
and contribute to code reviews and best practices Skills & Experience Strong expertise in Python and SQL for data engineering Hands-on experience with Databricks, Spark, Delta Lake, Delta Live Tables Experience in batch and real-time data processing Proficiency with cloud platforms (AWS, Azure, Databricks) Solid understanding of data …
platform management roles, with 5+ years in leadership positions. Expertise in modern data platforms (e.g., Azure, AWS, Google Cloud) and big data technologies (e.g., Spark, Kafka, Hadoop). Strong knowledge of data governance frameworks, regulatory compliance (e.g., GDPR, CCPA), and data security best practices. Proven experience in enterprise-level …
London, South East England, United Kingdom | Hybrid / WFH Options
Careerwise
Qualifications: Master's or Ph.D. degree in Computer Science, Data Science, Statistics, Mathematics, Engineering, or related fields. Proven experience in Databricks and its ecosystem (Spark, Delta Lake, MLflow, etc.). Strong proficiency in Python and R for data analysis, machine learning, and data visualization. In-depth knowledge of cloud … BigQuery, Redshift, Data Lakes). Expertise in SQL for querying large datasets and optimizing performance. Experience working with big data technologies such as Hadoop, Apache Spark, and other distributed computing frameworks. Solid understanding of machine learning algorithms, data preprocessing, model tuning, and evaluation. Experience in working with LLM …
practices to improve data engineering processes. Experience Required: Developing data processing pipelines in Python and SQL for Databricks including many of the following technologies: Spark, Delta, Delta Live Tables, PyTest, Great Expectations (or similar) and Jobs. Developing data pipelines for batch and stream processing and analytics. Building data pipelines …
London, South East England, United Kingdom | Hybrid / WFH Options
DATAHEAD
ensure high availability and accessibility. Experience & Skills: Strong experience in data engineering. At least some commercial hands-on experience with Azure data services (e.g., Apache Spark, Azure Data Factory, Synapse Analytics). Proven experience in leading and managing a team of data engineers. Proficiency in programming languages such …
London, South East England, United Kingdom | Hybrid / WFH Options
Aventis Solutions
services experience is desired but not essential. API development (FastAPI, Flask) Tech stack: Azure, Python, Databricks, Azure DevOps, ChatGPT, Groq, Cursor AI, JavaScript, SQL, Apache Spark, Kafka, Airflow, Azure ML, Docker, Kubernetes and many more. Role Overview: We are looking for someone who is as comfortable developing AI …
independently Experience in working with data visualization tools Experience in GCP tools – Cloud Function, Dataflow, Dataproc and BigQuery Experience in data processing frameworks – Beam, Spark, Hive, Flink GCP data engineering certification is a merit Have hands-on experience in analytical tools such as Power BI or similar visualization tools Exhibit …
ll Bring 5+ years in data/analytics engineering, including 2+ years in a leadership or mentoring role. Strong hands-on expertise in Databricks, Spark, Python, PySpark, and Delta Live Tables. Experience designing and delivering scalable data pipelines and streaming data processing (e.g., Kafka, AWS Kinesis, or Azure …
Glue, Athena, Redshift, Kinesis, Step Functions, and Lake Formation. Strong programming skills in Python and PySpark for data processing and automation. Extensive SQL experience (Spark SQL, MySQL, Presto SQL) and familiarity with NoSQL databases (DynamoDB, MongoDB, etc.). Proficiency in Infrastructure-as-Code (Terraform, CloudFormation) for automating AWS data …
London, South East England, United Kingdom | Hybrid / WFH Options
Chapter 2
issues. Ability to work in a fast-paced, high-growth environment with a product-oriented mindset. Bonus: Experience with big data tools (Spark, Kafka) and feature stores. Why Join Us? Work on cutting-edge AI and ML infrastructure supporting generative AI products. Be part of a …
error handling, code optimization). Proficiency in SQL – comfortable designing databases, writing complex queries, and handling performance tuning. Experience with Databricks (or a comparable Spark environment) – ability to build data pipelines, schedule jobs, and create dashboards/notebooks. Experience with Azure services (Data Factory, Synapse, or similar) and knowledge …
Staines, Middlesex, United Kingdom | Hybrid / WFH Options
Industrial and Financial Systems
Argo, Dagster or similar. Skilled with data ingestion tools like Airbyte, Fivetran, etc. for diverse data sources. Expert in large-scale data processing with Spark or Dask. Strong in Python, Scala, C# or Java, cloud SDKs and APIs. AI/ML expertise for pipeline efficiency, familiar with TensorFlow, PyTorch …
experience in data engineering, with a strong understanding of modern data technologies (e.g., cloud platforms like AWS, Azure, GCP, and data tools such as Apache Spark, Kafka, dbt, etc.). Proven track record of leading and managing data engineering teams in a consultancy or similar environment. Strong expertise …
London, South East England, United Kingdom | Hybrid / WFH Options
Randstad Digital UK
MSSQL, PostgreSQL, MySQL, NoSQL Cloud: AWS (preferred), with working knowledge of cloud-based data solutions Nice to Have: Experience with graph databases, Hadoop/Spark, or enterprise data lake environments What You’ll Bring Strong foundation in computer science principles (data structures, algorithms, etc.) Experience building enterprise-grade pipelines …
London, South East England, United Kingdom | Hybrid / WFH Options
Peaple Talent
Azure or AWS Strong experience designing and delivering data solutions in Databricks Proficient with SQL and Python Experience using Big Data technologies such as Apache Spark or PySpark Great communication skills, effectively participating with Senior Stakeholders Nice to haves: Azure/AWS Data Engineering certifications Databricks certifications What …
Stroud, South East England, United Kingdom | Hybrid / WFH Options
Data Engineer
excellence and be a person that actively looks for continual improvement opportunities. Knowledge and skills Experience as a Data Engineer or Analyst Databricks/Apache Spark SQL/Python Bitbucket/GitHub. Advantageous dbt AWS Azure DevOps Terraform Atlassian (Jira, Confluence) About Us What's in it for …
HBase, Elasticsearch). Build, operate, maintain, and support cloud infrastructure and data services. Automate and optimize data engineering pipelines. Utilize big data technologies (Databricks, Spark). Develop custom security applications, APIs, AI/ML models, and advanced analytic technologies. Experience with threat detection in Azure Sentinel, Databricks, MPP Databases …
related field. 5+ years of experience in data engineering and data quality. Strong proficiency in Python/Java, SQL, and data processing frameworks including Apache Spark. Knowledge of machine learning and its data requirements. Attention to detail and a strong commitment to data integrity. Excellent problem-solving skills and …
to support business insights, analytics, and other data-driven initiatives. Job Specification (Technical Skills): Cloud Platforms: Expert-level proficiency in Azure (Data Factory, Databricks, Spark, SQL Database, DevOps/Git, Data Lake, Delta Lake, Power BI), with working knowledge of Azure WebApp and Networking. Conceptual understanding of Azure AI …
hoc analytics, data visualisation, and BI tools (Superset, Redash, Metabase) Experience with workflow orchestration tools (Airflow, Prefect) Experience writing data processing pipelines & ETL (Python, Apache Spark) Excellent communication skills and ability to work collaboratively in a team environment Experience with web scraping Perks & Benefits Competitive salary package (including …
cross-functional teams, and play a key role in optimising their data infrastructure. Requirements: Strong experience in Python, SQL, and big data technologies (Hadoop, Spark, NoSQL) Hands-on experience with cloud platforms (AWS, GCP, Azure) Proficiency in data processing frameworks like PySpark A problem-solver who thrives in a …
London, South East England, United Kingdom | Hybrid / WFH Options
Intellect Group
such as Bloomberg, Refinitiv, or Open Banking. Experience with cloud platforms (AWS, GCP, or Azure) for model deployment. Understanding of big data technologies like Spark or Hadoop. Knowledge of algorithmic trading, credit risk modelling, or payment fraud detection. Benefits 💰 Competitive Salary & Bonus: £35,000 - £45,000 plus performance …
London (Hounslow), South East England, United Kingdom
eTeam
including OAuth, JWT, and data encryption. • Fluent in English with strong communication and collaboration skills. Preferred Qualifications: • Experience with big data processing frameworks like Apache Flink or Spark. • Familiarity with machine learning models and AI-driven analytics. • Understanding of front-end and mobile app interactions with backend services. • Expertise …