languages, e.g. Python, R, Scala (Python preferred). Strong proficiency in database technologies, e.g. SQL, ETL, NoSQL, DW, and big data technologies, e.g. PySpark, Hive. Experience working with structured and unstructured data, e.g. text, PDFs, JPEGs, call recordings, video. Knowledge of machine learning modelling techniques and …
Oriented & communicative. Master's degree with at least 5 years of experience in data modeling (including Kimball methodology); strong knowledge of SQL, Python, and PySpark; experience with modern data platforms and BI tools (Cognos, Power BI, Tableau); ability to think abstractly and translate concepts into models and code; a …
Warwickshire, West Midlands, United Kingdom Hybrid / WFH Options
Hays
with strong technical expertise and a passion for solving complex business problems. You'll bring: strong experience with SQL, SQL Server DB, Python, and PySpark; proficiency in Azure Data Factory, Databricks (a must), and Cloudsmith; a background in data warehousing and data engineering; solid project management capabilities; outstanding communication …
SSMS; data warehousing, ETL processes, and best-practice data management; Azure cloud technologies (Synapse, Databricks, Data Factory, Power BI); Microsoft Fabric (not essential); Python/PySpark; proven ability to work in hybrid data environments; experience in finance or working with finance teams; ability to manage and lead on- and offshore …
GitHub Actions specifically. Must have: version control systems — proficiency in tools like Git and an understanding of branching, merging, and pull requests. Data engineer with Python, PySpark, GitLab, and Azure Databricks, with an affinity for data engineering. CI/CD pipelines: hands-on experience with tools like Jenkins and GitHub Actions; automating builds …
coding languages, e.g. Python, R, Scala (Python preferred). Proficiency in database technologies, e.g. SQL, ETL, NoSQL, DW, and big data technologies, e.g. PySpark, Hive. Experience working with structured and unstructured data, e.g. text, PDFs, JPEGs, call recordings, video. Knowledge of machine learning modelling techniques …
BI tools (e.g. Looker, Looker Data Studio, Tableau, AWS QuickSight); experience with e-commerce and digital marketing metrics; expertise in using scalable analytics techniques (e.g. PySpark, Dask, Modin, Koalas) and MPP databases (e.g. BigQuery, Redshift, Snowflake) for complex data assembly and transformation. What's in it for you? Work with …
London, England, United Kingdom Hybrid / WFH Options
Young's Employment Services
understanding of data definition, observability, and data quality best practices. Proficiency in development languages suitable for intermediate-level data engineers, such as Python, PySpark, and SQL. Exposure to data science concepts and techniques is highly desirable. Strong problem-solving skills and attention to detail. Excellent communication and …
requirements. Preferred skills and experience: Databricks; Azure Data Factory; data lakehouse and medallion architecture; Microsoft Azure; T-SQL development (MS SQL Server 2005 onwards); Python, PySpark. Experience of the following systems would also be advantageous: Azure DevOps; MDS; Kimball dimensional modelling methodology; Power BI; Unity Catalog; Microsoft Fabric. Experience of …
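The medallion (bronze/silver/gold) layering mentioned above can be sketched without any lakehouse tooling: raw records land unchanged in bronze, silver validates and deduplicates them, and gold aggregates them into a reporting-ready shape. This is a minimal plain-Python illustration of that flow — the layer contents and field names are invented for the example; a real implementation would use PySpark and Delta tables:

```python
# Minimal medallion-architecture sketch: bronze -> silver -> gold,
# in plain Python (a real lakehouse would use PySpark/Delta tables).
# All records and field names here are illustrative.

bronze = [  # raw, as-ingested records (duplicates, nulls and all)
    {"order_id": 1, "amount": "10.50", "country": "UK"},
    {"order_id": 1, "amount": "10.50", "country": "UK"},   # duplicate
    {"order_id": 2, "amount": None,    "country": "UK"},   # bad amount
    {"order_id": 3, "amount": "7.25",  "country": "DE"},
]

# Silver: validate, type-cast, and deduplicate on the business key.
seen, silver = set(), []
for rec in bronze:
    if rec["amount"] is None or rec["order_id"] in seen:
        continue
    seen.add(rec["order_id"])
    silver.append({**rec, "amount": float(rec["amount"])})

# Gold: aggregate the cleaned rows to a reporting-ready shape.
gold = {}
for rec in silver:
    gold[rec["country"]] = gold.get(rec["country"], 0.0) + rec["amount"]

print(gold)  # {'UK': 10.5, 'DE': 7.25}
```

Each layer only ever reads from the one before it, which is the property that makes medallion pipelines easy to reprocess: rebuilding gold never requires re-ingesting bronze.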
London, England, United Kingdom Hybrid / WFH Options
Lumenalta
have delivered business-critical software to large enterprises. You are comfortable manipulating large data sets and handling raw SQL. Experience using technologies such as PySpark, AWS, and Databricks is essential, as is experience creating ETL pipelines from scratch. E-commerce and financial services industry experience preferred. English fluency, verbal and …
Leeds, England, United Kingdom Hybrid / WFH Options
Skills for Care
external data engineers and data scientists, who may not be familiar with the datasets, to accelerate development. Our technology stack consists of: Python and PySpark; AWS Glue jobs assembled into Step Functions; PyDeequ for data quality testing; Amazon Athena for querying data. Hosted on AWS, using S3, Glue, Step …
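The data-quality step in a stack like this runs checks such as completeness and uniqueness against each dataset (that is what PyDeequ does over a Spark DataFrame). Stripped of Spark, the underlying metrics can be sketched in plain Python — the function names and sample records below are illustrative, not PyDeequ's API:

```python
# Spark-free sketch of the kind of data-quality metrics PyDeequ computes
# (completeness, uniqueness). Illustrative only, not PyDeequ's API.
from collections import Counter

def completeness(rows, column):
    """Fraction of rows where `column` is not None."""
    if not rows:
        return 0.0
    return sum(1 for r in rows if r.get(column) is not None) / len(rows)

def uniqueness(rows, column):
    """Fraction of non-null values in `column` that occur exactly once."""
    values = [r.get(column) for r in rows if r.get(column) is not None]
    if not values:
        return 0.0
    counts = Counter(values)
    return sum(1 for v in values if counts[v] == 1) / len(values)

records = [  # hypothetical sample data
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": None},
    {"id": 3, "email": "a@example.com"},
    {"id": 4, "email": "b@example.com"},
]

print(completeness(records, "email"))  # 3 of 4 rows non-null -> 0.75
print(uniqueness(records, "id"))       # every id occurs once -> 1.0
```

In a pipeline, metrics like these are asserted against thresholds (e.g. completeness of a key column must equal 1.0) so a Step Functions stage can fail fast before bad data reaches downstream consumers.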
Airflow/Argo). Prior experience writing documentation for senior stakeholders; the ability to accurately abstract and summarize technical information is critical. Python programming skills: PySpark, pandas, Jupyter notebooks (3+ years in a professional environment). Prior experience working with Git in a professional environment. Ability to work independently in a …
to optimize relevant workflow components for advanced artificial intelligence applications. The Contractor shall lead work to optimize cloud-based computing technologies, such as leveraging PySpark, distributed computation, and model training/inference, and integrate solutions into relevant delivery mechanisms or partner systems. The Contractor shall build tools and scripts …
and reliable operations. Required: 6-10 years of experience in software development with a focus on production-grade code. Proficiency in Java, Python, and PySpark; experience with C++ is a plus. Deep expertise in Azure services, including Azure Storage, and familiarity with AWS S3. Strong understanding of data security …
Cloud Technologies, AWS Data Lake. Experienced in a variety of database technologies (Postgres, Oracle, Snowflake). Awareness of scripting languages such as Python and PySpark and their uses in the context of data transfer and manipulation. Ability to assimilate technical and business information and to help translate these into practical …
Bournemouth, England, United Kingdom Hybrid / WFH Options
LV=
the financial services sector. • Proficient in data profiling, data cleansing, and data validation techniques. • Hands-on experience with SQL and working knowledge of Python, PySpark, or data wrangling tools. • Familiarity with modern data quality platforms (e.g. Purview, Informatica, Talend, Collibra DQ). • Ability to analyse and interpret data to …
and DBA); Azure Databricks; Azure Data Factory; Microsoft Fabric; Power BI, DAX; Azure Data Lake. Supporting: Azure ML; Azure AI Services; Azure infrastructure; Python, PySpark; Microsoft Purview. Principles: relational and dimensional modelling; data warehouse theory; data platform architecture models such as lakehouse; data science; master data management; data governance …