Senior Data Engineer - SQL Azure/Databricks/Data Factory/Data Lake. An opportunity has arisen through organic growth for an experienced Senior Data Engineer to join a prestigious company renowned for its commitment to excellence in the realm …
understand consumers. Hands-on data engineering/development experience, preferably in a cloud/big data environment. Skilled in at least one of Python, PySpark, SQL or similar. Experience in guiding or managing roles in insight or data functions, delivering data projects and insight to inspire action and drive …
modern NLP methods required. Specifically: transformer models (e.g. BERT), LLMs, RAG and fine-tuning, the OpenAI stack, LangChain, etc. Experience with big data technologies a plus: PySpark, H2O.ai, cloud AI platforms, Kubernetes. Must be able to translate business requirements into analytical problems. Must have a proven ability to merge and transform disparate …
hierarchical algorithms). Expert in SQL and Python, dbt, and standard data science libraries and frameworks such as pandas, NumPy, scikit-learn, TensorFlow/PyTorch, PySpark, etc. Experience with Infrastructure-as-Code and Continuous Integration & Deployment patterns. Experience with PostgreSQL, Google Cloud Platform, and BigQuery. Excellent problem-solving skills and the …
understand the benefits, pros and cons of various technical options. Required skills: strong experience within a Data Engineering role; excellent understanding of Databricks and PySpark; strong knowledge of Azure cloud services; excellent understanding of SQL; good exposure to Azure Data Lake technologies such as ADF, HDFS and Synapse; good …
Mart. Utilize vector databases, Cosmos DB, Redis, and Elasticsearch for efficient data storage and retrieval. Demonstrate proficiency in Python, Spark, Databricks, PySpark, SQL, and ML algorithms. Implement machine learning models and algorithms using PySpark, scikit-learn, and other relevant tools. Manage Azure DevOps, CI/… environments, Azure Data Lake, Azure Data Factory, microservices architecture. Experience with vector databases, Cosmos DB, Redis, Elasticsearch. Strong programming skills in Python, Spark, Databricks, PySpark, SQL, ML algorithms, Gen AI. Knowledge of Azure DevOps, CI/CD pipelines, GitHub, Kubernetes (AKS). Experience with MLOps tools such …
London, UK. Do you strive for engineering excellence, and are you ready to lead a modern tech transformation? Do you have experience with Azure, PySpark, and cloud platforms? We have the perfect role for you: Data Engineer - Tech Lead. Careers at TCS: it means more. TCS is a purpose-led transformation … design, development, and maintenance of Azure-based data pipelines and analytical solutions using Databricks, Synapse, and other relevant services. Perform data transformations: leverage SQL, PySpark, and Python to perform data transformations, aggregations, and analysis on large datasets. Architect data storage solutions using Azure SQL Database …
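The transformation duties above (SQL/PySpark aggregations over large datasets) can be illustrated with a minimal sketch. SQLite stands in here for the warehouse engine, and the `events` table and its columns are invented for illustration; on Databricks the same GROUP BY shape would run via Spark SQL or PySpark.

```python
import sqlite3

# Toy stand-in for a warehouse aggregation: group event amounts by region.
# In production this query shape would run as Spark SQL / PySpark; SQLite
# is used here only so the example is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("uk", 10.0), ("uk", 5.0), ("de", 7.5)],
)
rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM events "
    "GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # → [('de', 7.5), ('uk', 15.0)]
conn.close()
```

The equivalent PySpark call would be `df.groupBy("region").agg(F.sum("amount"))`; only the engine changes, not the aggregation logic.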
Principal Data Engineer – National salary circa £80,000; London salary circa £100,000. Do you like working with the latest technology, and are you interested in enhancing your tech abilities? We have an exciting opportunity for an Engineering Manager with significant …
Overview: Are you passionate about advancing your career in Data Engineering? This role offers an exceptional opportunity for professional growth, collaborating with some of the industry’s brightest minds. As a Data Engineer at Hexegic, your primary responsibility will be …
London, England, United Kingdom Hybrid / WFH Options
E.ON Next
segmentation, and AI, we want to hear from you! At E.ON Next, you'll have the opportunity to leverage your skills in Databricks and PySpark to tackle operational and customer experience challenges, driving impactful solutions in a dynamic and collaborative environment. Join us in revolutionising the energy sector … stakeholders and fostering strong business partnerships. Leveraging predictive modelling, segmentation techniques, and advanced AI algorithms to unlock valuable insights. Demonstrating proficiency in Databricks and PySpark to streamline data processing and analysis. A taste of what you’ll be doing:
● Consultative leadership: build a strategic understanding of the business, employ …
● Strong communication skills with the ability to engage with non-technical stakeholders.
● Expertise in predictive modelling, segmentation, and AI techniques.
● Proficiency in Databricks and PySpark for data manipulation and analysis.
● Experience solving operational or customer experience problems such as workforce management, demand forecasting, or root cause analysis.
● BSc …
in key technologies related to data management, e.g. SQL, Spark, Python
· Experience with the Azure cloud platform
· Experience with advanced Python libraries, e.g. pandas, PySpark
· Experience with development best practices (git, testing, coding standards, CI/CD, documentation, code refactoring …)
· A love of data and an understanding of some algorithmic …
cross-functional teams entrusted with business-critical platforms.
Desirable skills & experience:
Working to an Agile methodology and familiarity with Azure DevOps
Deep automation knowledge with Python
Skilled in PySpark and Synapse
Experience with data modelling and visualisation in Power BI (or an alternative)
A strong understanding of architecting data platforms, BI, MI, or analytics solutions
Strong …
candidate needs the experience and confidence to speak up when several teams are involved in delivering the project. There is a massive emphasis on PySpark and Databricks for this particular role. Technical skills required: Azure (ADF, Functions, Blob Storage, Data Lake Storage, Azure Databricks), Databricks, Spark, Delta Lake … SQL, Python, PySpark, ADLS. Day-to-day responsibilities: extensive experience in designing, developing, and managing end-to-end data pipelines, ETL (Extract, Transform, Load), and ELT (Extract, Load, Transform) solutions. Maintains a proactive approach to staying updated with emerging technologies and a strong desire to continuously learn and adapt …
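The ETL and ELT patterns spelled out above differ only in where the transform step happens relative to the load. A toy, hypothetical Python sketch (all function and variable names invented for illustration):

```python
# Minimal ETL/ELT sketch: extract raw records, clean them, and load them
# into a list standing in for a warehouse table.
def extract():
    # In a real pipeline this would read from blob storage / ADLS.
    return [" Alice ", "", "Bob", None, " Carol"]

def transform(records):
    # Drop empty/null records and normalise whitespace.
    return [r.strip() for r in records if r and r.strip()]

def load(rows, target):
    target.extend(rows)
    return target

# ETL: transform before loading into the target.
warehouse = []
load(transform(extract()), warehouse)
print(warehouse)  # → ['Alice', 'Bob', 'Carol']

# ELT: load the raw data first, then transform inside the "warehouse".
staging = []
load(extract(), staging)
warehouse_elt = transform(staging)
```

The end state is the same; the practical difference is that ELT keeps the raw records in the target system (the `staging` list here), which is why it suits warehouses with cheap storage and powerful compute such as Databricks or Synapse.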
to 3 years of experience required. Degree in Statistics, Maths, Physics, Economics or a similar field. Programming skills (Python and SQL are a must-have; PySpark is recommended). Analytical techniques and technology. Experience with, and passion for, connecting your work directly to the customer experience, making a real and tangible …
London, England, United Kingdom Hybrid / WFH Options
Harnham
continual improvement of their performance. REQUIRED SKILLS AND EXPERIENCE: Proficiency in Python, including its associated data and machine learning packages such as NumPy, pandas, PySpark, Matplotlib, scikit-learn, Keras, TensorFlow, and more. Experience working with object-oriented programming languages. Experience in forecasting and propensity modelling. Expertise in utilising SQL for …
team might face. Skills and experience: strong stakeholder management; experience implementing data strategy; strong experience managing Data Engineering teams; hands-on experience working with PySpark and Azure. What's in it for you? £90,000 - £100,000. Please send your CV in for initial screening. Data Engineering Manager. Desired …
Are you looking for an exciting opportunity in Solution Architecture? Are you passionate about everything cloud and data? Join us as an AWS Cloud Data Platform Solution Architect. Careers at TCS: it means more. TCS is a purpose-led transformation …
An industry-leading organisation is looking for an experienced Senior Data Engineer who is well-versed in Databricks, PySpark and SQL to join their growing Data Engineering team. This role can be largely remote, with some travel to their Central London head office, likely around 2 times per month. … a wide variety of technical work. Your primary focus will be working with Databricks - you will be building data pipelines in Databricks, coding in PySpark, and supporting internal applications as they move away from a legacy application into new bespoke applications which rely on Databricks. This is a … to provide thought leadership, and to innovate and experiment with new technologies to deliver benefits to the business. Requirements: excellent skills in Databricks and PySpark; excellent experience with SQL and T-SQL programming; experience designing, developing and maintaining data warehouses and data lakes; knowledge of Azure technologies including Data …
for data engineering, e.g. Azure Functions. * Core skills in coding with SQL, Python and Spark. * Proven experience using Databricks, e.g. lakehouse, Delta Live Tables, PySpark, etc. …
Data Lake Storage, Azure Data Factory, Azure Synapse Analytics, Azure Databricks, Azure SQL Database, Azure Stream Analytics, etc. Strong Python or Scala with Spark/PySpark experience. Experience with relational and NoSQL databases. Significant experience and in-depth knowledge of creating data pipelines and associated design principles, standards, data …
experience with data modelling, data warehousing, and ETL/ELT processes. Fluency and development experience in at least one of the following: Java, Python, PySpark or Scala. Experience working with a variety of data formats such as JSON, Parquet, XML, etc. Experience with, or a developed understanding of, the application of ETL …
Engineering. Hands-on experience in designing and developing scripts for custom ETL processes and automation in Azure Data Factory, Azure Databricks, Azure Synapse, Python, PySpark, etc. Experience being customer-facing on numerous data-focused projects with a consultative approach. Ability to deliver high- to low-level designs for data …
Power BI would also be useful. Engineer with past experience in Java, data, and infrastructure (DevOps); Java is a key skill. Programming: Java, Python, PySpark. Storage mechanisms: MongoDB, Redshift, AWS S3. Cloud environments/infra: AWS (required); AWS Lambda, Terraform (nice to have). Data platforms: creating data pipelines within …