SQL Server and Azure. Experience creating data pipelines with Azure Data Factory. Databricks experience would be beneficial. Experience working with Python/Spark/PySpark. This is just a brief overview of the role. For the full information, simply apply to the role with your CV, and I will …
Leeds, England, United Kingdom Hybrid / WFH Options
Fruition IT
technical stakeholders. Experience Required: Extensive experience in training, deploying and maintaining machine learning models. Data warehousing and ETL tools. Python and the surrounding ML ecosystem: PySpark, Snowflake, scikit-learn, TensorFlow, PyTorch, etc. Infrastructure as Code: Terraform, Ansible. Stakeholder management, both technical and non-technical. The Offer: Base Salary …
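For context on the skill set this ad describes, a minimal sketch of the train/evaluate/persist cycle for a machine learning model using scikit-learn; the dataset, model choice and file name are illustrative assumptions, not details from the advert:

    # Hypothetical example: train, evaluate and persist a scikit-learn model.
    import joblib
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Illustrative dataset; a real role would use warehouse/ETL output instead.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    print(f"holdout accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")

    # Persist the fitted model so a separate serving/maintenance job can reload it.
    joblib.dump(model, "model.joblib")  # hypothetical artifact name
    reloaded = joblib.load("model.joblib")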
Leeds, England, United Kingdom Hybrid / WFH Options
Mastek
OBIEE, Workato and PL/SQL. Design and build data solutions on Azure, leveraging Databricks, Data Factory, and other Azure services. Utilize Python and PySpark for data transformation, analysis, and real-time streaming. Collaborate with cross-functional teams to gather requirements, design solutions, and deliver insights. Implement and maintain … Technologies: Databricks, Data Factory: expertise in data engineering and orchestration. DevOps, Storage Explorer, Data Studio: competence in deployment, storage management, and development tools. Python, PySpark: advanced coding skills, including real-time data streaming through Auto Loader. Development Tools: VS Code, Jira, Confluence, Bitbucket. Service Management: experience with ServiceNow. API Integration …
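A brief sketch of the kind of real-time ingestion this ad names, Databricks Auto Loader (the "cloudFiles" streaming source) in PySpark; the landing path, schema location and target table are assumptions for illustration, and the cloudFiles source itself is only available on Databricks:

    # Hypothetical Auto Loader ingestion: incrementally pick up new files
    # from a landing zone and append them to a Delta table.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

    stream = (
        spark.readStream
        .format("cloudFiles")                 # Auto Loader source (Databricks-only)
        .option("cloudFiles.format", "json")  # raw file format in the landing zone
        .option("cloudFiles.schemaLocation", "/mnt/landing/_schema")  # schema tracking
        .load("/mnt/landing/events/")         # hypothetical landing path
    )

    (
        stream.writeStream
        .option("checkpointLocation", "/mnt/bronze/_checkpoints/events")
        .trigger(availableNow=True)           # process all new files, then stop
        .toTable("bronze.events")             # hypothetical Delta target table
    )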
Lead Data Engineer: We need some strong Lead Data Engineer profiles… They need good experience with Python, SQL, ADF and, preferably, Azure Databricks. Job description: Building new data pipelines and optimizing data flows using the Azure cloud stack. Building …
Leeds, West Yorkshire, Yorkshire and the Humber Hybrid / WFH Options
DWP
scalable, automated ETL pipelines in an AWS cloud environment using AWS S3 cloud object storage. Strong coding skills using Hive SQL, Spark SQL, Python, PySpark and Bash. Experience of working with a wide variety of structured and unstructured data. You and your role: As a Data Engineer at DWP … you'll be executing code across a full tech stack, including Azure, Databricks, PySpark and Pandas, helping the department move towards a cloud computing environment, working with huge data sets as part of our DataWorks platform, a system that provides Universal Credit data to our Data Science team …
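As an illustration of the S3-based PySpark/Spark SQL ETL this ad calls for, a minimal sketch; the bucket names, column names and the deduplication rule are hypothetical, and reading s3a:// paths assumes the hadoop-aws connector is on the classpath:

    # Hypothetical extract-transform-load job over S3 object storage.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("s3-etl").getOrCreate()

    # Extract: read raw CSV files landed in an S3 bucket (name is illustrative).
    raw = spark.read.option("header", "true").csv("s3a://my-landing-bucket/claims/")

    # Transform: use Spark SQL to keep only the latest record per claim
    # (an assumed dedup rule, for illustration).
    raw.createOrReplaceTempView("claims_raw")
    latest = spark.sql("""
        SELECT * FROM (
            SELECT *,
                   ROW_NUMBER() OVER (PARTITION BY claim_id
                                      ORDER BY updated_at DESC) AS rn
            FROM claims_raw
        ) WHERE rn = 1
    """).drop("rn")

    # Load: write partitioned Parquet back to S3 for downstream analysis.
    (latest.withColumn("load_date", F.current_date())
        .write.mode("overwrite").partitionBy("load_date")
        .parquet("s3a://my-curated-bucket/claims/"))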