automation, ETL processes, and AI-driven data architecture design. Technical Skills: Proficiency in Python and SQL, with experience in data processing frameworks such as Apache Spark or Hadoop. Strong understanding of relational and NoSQL databases, as well as cloud-based data storage solutions (e.g., AWS S3, Google Cloud …
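As a hedged illustration of the Spark-plus-cloud-storage combination this listing asks for, here is a minimal PySpark sketch that reads a CSV from S3 and aggregates it with Spark SQL. The bucket, path, and column names are hypothetical placeholders, not anything from the posting.

```python
from pyspark.sql import SparkSession

# Minimal sketch: read raw CSV from S3 and aggregate with Spark SQL.
# Bucket, path, and column names are illustrative placeholders.
spark = SparkSession.builder.appName("s3-etl-sketch").getOrCreate()

orders = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3a://example-bucket/raw/orders/")  # assumes the S3A connector is configured
)

orders.createOrReplaceTempView("orders")
daily = spark.sql(
    "SELECT order_date, COUNT(*) AS n_orders, SUM(amount) AS revenue "
    "FROM orders GROUP BY order_date"
)
daily.show()
```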
Snowflake. Understanding of cloud platform infrastructure and its impact on data architecture. Data Technology Skills: A solid understanding of big data technologies such as Apache Spark, and knowledge of Hadoop ecosystems. Knowledge of programming languages such as Python, R, or Java is beneficial. Exposure to ETL/ELT …
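For the Snowflake/ELT exposure mentioned above, a minimal sketch using the snowflake-connector-python package might look like the following; the account, credentials, stage, and table names are all placeholders, and the target table is assumed to hold a single VARIANT column named v.

```python
import snowflake.connector

# ELT sketch: load staged JSON into Snowflake, then transform in-warehouse.
# All connection parameters and object names are hypothetical.
conn = snowflake.connector.connect(
    account="xy12345",
    user="etl_user",
    password="...",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)
try:
    cur = conn.cursor()
    # Assumes raw_events is a single-VARIANT-column table (column: v).
    cur.execute(
        "COPY INTO raw_events FROM @landing_stage/events/ "
        "FILE_FORMAT = (TYPE = 'JSON')"
    )
    # The "T" of ELT happens inside the warehouse.
    cur.execute(
        "CREATE OR REPLACE TABLE events AS "
        "SELECT v:id::string AS id, v:ts::timestamp AS ts FROM raw_events"
    )
finally:
    conn.close()
```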
Coalville, Leicestershire, East Midlands, United Kingdom Hybrid / WFH Options
Ibstock PLC
and BI solutions. Ensure data accuracy, integrity, and consistency across the data platform. Knowledge, Skills and Experience: Essential: Strong expertise in Databricks and Apache Spark for data engineering and analytics. Proficient in SQL and Python/PySpark for data transformation and analysis. Experience in data lakehouse development …
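To make the Databricks/lakehouse requirement concrete, here is a minimal sketch of a bronze-to-silver PySpark transformation that publishes a partitioned Delta table; the table and column names are invented for illustration, and the silver schema is assumed to exist.

```python
from pyspark.sql import SparkSession, functions as F

# Lakehouse sketch: clean a bronze table with PySpark, publish as Delta.
# Table and column names are illustrative placeholders.
spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

bronze = spark.read.table("bronze.sensor_readings")

silver = (
    bronze
    .dropDuplicates(["device_id", "event_ts"])
    .withColumn("event_date", F.to_date("event_ts"))
    .filter(F.col("reading").isNotNull())
)

(
    silver.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("silver.sensor_readings")  # assumes the silver schema exists
)
```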
East London, London, United Kingdom Hybrid / WFH Options
Asset Resourcing
programming languages such as Python or Java. Understanding of data warehousing concepts and data modeling techniques. Experience working with big data technologies (e.g., Hadoop, Spark) is an advantage. Excellent problem-solving and analytical skills. Strong communication and collaboration skills. Responsibilities: Design, build and maintain efficient and scalable data pipelines … Benefits: Enhanced leave - 38 days inclusive of 8 UK Public …
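For the data warehousing and data modeling techniques mentioned above, a dimensional (star-schema) query is the classic example. The sketch below joins a hypothetical fact table to two dimensions via Spark SQL; every table and column name is a placeholder.

```python
from pyspark.sql import SparkSession

# Dimensional-modelling sketch: fact table joined to dimensions for reporting.
# All table and column names are hypothetical.
spark = SparkSession.builder.getOrCreate()

spark.sql("""
    SELECT d.calendar_month,
           p.category,
           SUM(f.quantity * f.unit_price) AS revenue
    FROM fact_sales f
    JOIN dim_date d    ON f.date_key = d.date_key
    JOIN dim_product p ON f.product_key = p.product_key
    GROUP BY d.calendar_month, p.category
""").show()
```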
C++) and experience with DevOps practices (CI/CD). Familiarity with containerization (Docker, Kubernetes), RESTful APIs, microservices architecture, and big data technologies (Hadoop, Spark, Flink). Knowledge of NoSQL databases (MongoDB, Cassandra, DynamoDB), message queueing systems (Kafka, RabbitMQ), and version control systems (Git). Preferred Skills: Experience with …
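To illustrate the message-queueing skill this listing names, here is a minimal produce/consume round trip using the kafka-python library; the broker address and topic name are placeholders for a local development setup.

```python
import json
from kafka import KafkaProducer, KafkaConsumer

# Message-queueing sketch with kafka-python. Broker and topic are placeholders.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("orders", {"order_id": 42, "status": "created"})
producer.flush()

consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print(message.value)  # {'order_id': 42, 'status': 'created'}
    break                 # stop after one message in this demo
```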
AWS (S3, Glue, Redshift, SageMaker) or other cloud platforms. Familiarity with Docker, Terraform, GitHub Actions, and Vault for managing secrets. Proficiency in SQL, Python, Spark, or Scala to work with data. Experience with databases used in Data Warehousing, Data Lakes, and Lakehouse setups, including both structured and unstructured data.
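As a sketch of the AWS Glue automation this stack implies, the snippet below triggers a Glue job and polls its state with boto3; the job name and region are hypothetical, and credentials are assumed to come from the environment.

```python
import boto3

# Sketch: trigger an AWS Glue job and check its state with boto3.
# Job name and region are placeholders; credentials come from the environment.
glue = boto3.client("glue", region_name="eu-west-2")

run = glue.start_job_run(JobName="nightly-orders-etl")
status = glue.get_job_run(JobName="nightly-orders-etl", RunId=run["JobRunId"])
print(status["JobRun"]["JobRunState"])  # e.g. RUNNING, SUCCEEDED, FAILED
```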
unstructured datasets. Engineering best practices and standards. Experience with data warehouse software (e.g. Snowflake, Google BigQuery, Amazon Redshift). Experience with data tools: Hadoop, Spark, Kafka, etc. Code versioning (GitHub integration and automation). Experience with scripting languages such as Python or R. Working knowledge of message queuing and … stream processing. Experience with Apache Spark or similar technologies. Experience with Agile and Scrum methodologies. Familiarity with dbt and Airflow is an advantage. Experience working in a start-up or scale-up environment. Experience working in the fields of financial technology, traditional financial services, or blockchain/cryptocurrency.
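For the Airflow familiarity mentioned above, a minimal two-task daily DAG (Airflow 2.x) might look like this; the DAG id, task logic, and schedule are placeholders.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

# Orchestration sketch: a two-step daily pipeline. Names and logic are placeholders.
def extract():
    print("pull data from the source system")

def load():
    print("write data to the warehouse")

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```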
Newcastle Upon Tyne, Tyne and Wear, North East, United Kingdom Hybrid / WFH Options
Client Server
Data Engineer (Python Spark SQL) *Newcastle Onsite* to £70k Do you have a first-class education combined with Data Engineering skills? You could be progressing your career at a start-up Investment Management firm that has secure backing, an established Hedge Fund client as a partner and massive growth … scientific discipline, backed by minimum AAB grades at A-level. You have commercial Data Engineering experience working with technologies such as SQL, Apache Spark and Python, including PySpark and Pandas. You have a good understanding of modern data engineering best practices. Ideally you will also have … will earn a competitive salary (to £70k) plus significant bonus and benefits package. Apply now to find out more about this Data Engineer (Python Spark SQL) opportunity. At Client Server we believe in a diverse workplace that allows people to play to their strengths and continually learn. We're …
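The PySpark-plus-Pandas combination this role names usually means doing the heavy aggregation in Spark and pulling only the small result into Pandas. A minimal sketch, with invented paths and column names:

```python
from pyspark.sql import SparkSession, functions as F

# Sketch of the PySpark/Pandas split: heavy lifting in Spark,
# small final result in Pandas. Paths and columns are placeholders.
spark = SparkSession.builder.getOrCreate()

trades = spark.read.parquet("/data/trades")  # large dataset stays in Spark
summary = (
    trades.groupBy("symbol")
    .agg(F.sum("notional").alias("total_notional"))
)

pdf = summary.toPandas()  # small aggregate -> Pandas for local analysis
print(pdf.sort_values("total_notional", ascending=False).head())
```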
Modelling: Designing dimensional, relational, and document data models. Document data lineage and recommend improvements for data ownership and stewardship. Qualifications: Programming: Python, SQL, Scala, Java. Big Data: Apache Spark, Hadoop, Databricks, Snowflake, etc. Cloud: AWS (Glue, Redshift), Azure (Synapse, Data Factory …
Experience as a Data Engineer for Cloud Data Lake activities, especially in high-volume data processing and ETL development using distributed computing frameworks like Apache Spark, Hadoop and Hive. Experience optimizing database performance, scalability, data security, and compliance. Experience with event-based, micro-batch, and batched high-volume, high …
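A typical high-volume batch ETL step of the kind described above reads a day of raw data, derives a partition column, and writes partitioned Parquet. A minimal sketch, with hypothetical HDFS paths and columns:

```python
from pyspark.sql import SparkSession, functions as F

# High-volume batch ETL sketch: read raw events, derive a partition
# column, write partitioned Parquet. Paths and columns are placeholders.
spark = SparkSession.builder.appName("batch-etl").getOrCreate()

events = spark.read.json("hdfs:///raw/events/2024-01-01/")

(
    events
    .withColumn("dt", F.to_date("event_ts"))
    .repartition("dt")          # co-locate rows for each output partition
    .write
    .mode("append")
    .partitionBy("dt")
    .parquet("hdfs:///curated/events/")
)
```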
London, South East England, United Kingdom Hybrid / WFH Options
Aventis Solutions
services experience is desired but not essential. API development (FastAPI, Flask). Tech stack: Azure, Python, Databricks, Azure DevOps, ChatGPT, Groq, Cursor AI, JavaScript, SQL, Apache Spark, Kafka, Airflow, Azure ML, Docker, Kubernetes and many more. Role Overview: We are looking for someone who is as comfortable developing AI …
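For the FastAPI development this listing names, here is a minimal endpoint sketch; the route, model, and stubbed logic are hypothetical, standing in for whatever AI service the role actually involves.

```python
from fastapi import FastAPI
from pydantic import BaseModel

# Minimal FastAPI sketch of a model-serving style endpoint.
# Route and payload are hypothetical; the model call is stubbed.
app = FastAPI()

class Prompt(BaseModel):
    text: str

@app.post("/generate")
def generate(prompt: Prompt) -> dict:
    # A real service would call the deployed model here.
    return {"completion": f"echo: {prompt.text}"}

# Run locally with: uvicorn main:app --reload
```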
independently. Experience in working with data visualization tools. Experience in GCP tools – Cloud Functions, Dataflow, Dataproc and BigQuery. Experience in data processing frameworks – Beam, Spark, Hive, Flink. GCP data engineering certification is a merit. Hands-on experience in analytical tools such as Power BI or similar visualization tools. Exhibit …
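Of the processing frameworks listed, Apache Beam is the one that runs on Dataflow. A minimal word-count-style Beam pipeline, with placeholder GCS paths (it runs locally by default; pointing it at Dataflow is a matter of pipeline options):

```python
import apache_beam as beam

# Beam sketch: runs on the local runner by default; the same pipeline
# runs on Dataflow with different pipeline options. Paths are placeholders.
with beam.Pipeline() as p:
    (
        p
        | "Read"  >> beam.io.ReadFromText("gs://example-bucket/input/*.txt")
        | "Split" >> beam.FlatMap(lambda line: line.split())
        | "Pair"  >> beam.Map(lambda word: (word, 1))
        | "Count" >> beam.CombinePerKey(sum)
        | "Write" >> beam.io.WriteToText("gs://example-bucket/output/counts")
    )
```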
You'll bring 5+ years in data/analytics engineering, including 2+ years in a leadership or mentoring role. Strong hands-on expertise in Databricks, Spark, Python, PySpark, and Delta Live Tables. Experience designing and delivering scalable data pipelines and streaming data processing (e.g., Kafka, AWS Kinesis, or Azure …
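The streaming side of this role typically means Spark Structured Streaming reading from Kafka into a Delta table. A minimal sketch, assuming the Spark-Kafka connector is on the classpath; broker, topic, and paths are placeholders.

```python
from pyspark.sql import SparkSession, functions as F

# Streaming sketch: Kafka -> Spark Structured Streaming -> Delta.
# Assumes the spark-sql-kafka connector is available; names are placeholders.
spark = SparkSession.builder.getOrCreate()

stream = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clickstream")
    .load()
)

parsed = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

query = (
    parsed.writeStream
    .format("delta")
    .option("checkpointLocation", "/chk/clickstream")
    .start("/tables/clickstream")
)
```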
Scala). Experience with one or more scripting languages (e.g., Python, KornShell). PREFERRED QUALIFICATIONS: Experience with big data technologies such as Hadoop, Hive, Spark, EMR. Experience with ETL tools such as Informatica, ODI, SSIS, BODI, or DataStage. Our inclusive culture empowers Amazonians to deliver the best results for …
or similar role. Proficiency with Databricks and its ecosystem. Strong programming skills in Python, R, or Scala. Experience with big data technologies such as Apache Spark and Databricks. Knowledge of SQL and experience with relational databases. Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud). Strong analytical and …
Product or Domain Expertise: Blend of technical expertise with 5+ years of experience, analytical problem-solving, and collaboration with cross-functional teams. Azure DevOps. Apache Spark, Python. Strong SQL proficiency. Data modeling understanding. ETL processes, Azure Data Factory. Azure Databricks knowledge. Familiarity with data warehousing. Big data technologies …
Richmond, North Yorkshire, Yorkshire, United Kingdom
Datix Limited
knowledge of programming languages, specifically Python and SQL. Expertise in data management, data architecture, and data visualization techniques. Experience with data processing frameworks like Apache Spark, Hadoop, or Flink. Strong understanding of database systems (SQL and NoSQL) and data warehousing technologies. Familiarity with cloud computing platforms (AWS, Azure …
data, analytics, and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow. Benefits: At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of …
technologies like Docker and Kubernetes. Ideally, some familiarity with data workflow management tools such as Airflow, as well as big data technologies such as Apache Spark/Ignite or other caching and analytics technologies. A working knowledge of FX markets and financial instruments would be beneficial. What we …
Glue, Athena, Redshift, Kinesis, Step Functions, and Lake Formation. Strong programming skills in Python and PySpark for data processing and automation. Extensive SQL experience (Spark-SQL, MySQL, Presto SQL) and familiarity with NoSQL databases (DynamoDB, MongoDB, etc.). Proficiency in Infrastructure-as-Code (Terraform, CloudFormation) for automating AWS data …
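To ground the Athena part of this AWS stack, here is a minimal boto3 sketch that submits a SQL query and checks its state; the database, table, bucket, and region are all placeholders.

```python
import boto3

# Sketch: run a SQL query through Athena and direct results to S3.
# Database, table, bucket, and region are hypothetical placeholders.
athena = boto3.client("athena", region_name="eu-west-1")

run = athena.start_query_execution(
    QueryString="SELECT dt, COUNT(*) AS n FROM analytics.events GROUP BY dt",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
state = athena.get_query_execution(QueryExecutionId=run["QueryExecutionId"])
print(state["QueryExecution"]["Status"]["State"])  # QUEUED / RUNNING / SUCCEEDED
```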