implementing data pipelines using Databricks; optimising ETL processes; collaborating with data scientists and analysts; ensuring data security and governance. Requirements: proven experience with Databricks, Spark, and SQL; familiarity with cloud platforms (AWS, Azure, Google Cloud); strong problem-solving skills; excellent communication skills. Preferred Qualifications: experience in the retail sector …
classification techniques, and algorithms. Fluency in a programming language (Python, C, C++, Java, SQL). Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau …)
management, code repositories, and automation. Requirements: 5+ years' experience in the data and analytics domain. Previous management experience is preferred. Strong expertise in Databricks (Spark, Delta Lake, Notebooks). Advanced knowledge of SQL development. Familiarity with Azure Synapse for orchestration and analytics. Working experience with Power BI for reporting …
elevate technology and consistently apply best practices. Qualifications for Software Engineer: hands-on experience working with technologies like Hadoop, Hive, Pig, Oozie, MapReduce, Spark, Sqoop, Kafka, Flume, etc. Strong DevOps focus and experience building and deploying infrastructure with cloud deployment technologies like Ansible, Chef, Puppet, etc. Experience with …
etc. Cloud Computing: AWS, Azure, Google Cloud for scalable data solutions. API Strategy: robust APIs for seamless data integration. Data Architecture: Finbourne LUSID, Hadoop, Spark, Snowflake for managing large volumes of investment data. Cybersecurity: strong data security measures, including encryption and IAM. AI and Machine Learning: predictive analytics, risk …
Familiarity with cloud platforms like AWS, GCP, or Azure. Strong written and spoken English skills. Bonus Experience: Experience with big data tools (e.g., Hadoop, Spark) and distributed computing. Knowledge of NLP techniques and libraries. Familiarity with Docker, Kubernetes, and deploying machine learning models in production. Experience with visualization tools …
clients to deliver these analytical solutions. Collaborate with stakeholders and customers to ensure successful project delivery. Write production-ready code in SQL, Python, and Spark following software engineering best practices. Coach team members in machine learning and statistical modelling techniques. Who we are looking for: We are looking for …
/B testing. Strong machine learning and statistical knowledge. Preferred Qualifications: Proficient with machine learning frameworks such as TensorFlow, PyTorch, MLlib. Experience with Databricks, Spark, Tecton, Kubernetes, Helm, Jenkins. Familiarity with standard methodologies in large-scale DL training/inference. Experience with reducing model serving latency and memory footprint. Experience …
Web App. Good knowledge of real-time streaming applications, preferably with experience in Kafka real-time messaging or Azure Stream Analytics/Event Hubs. Spark processing and performance tuning. File formats and partitioning, e.g. Parquet, JSON, XML, CSV. Azure DevOps, GitHub Actions. Hands-on experience in at least one …
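As a concrete illustration of the Spark file-format and partitioning skills this listing asks for, here is a minimal PySpark sketch that derives a date column and writes the data as date-partitioned Parquet. It is a sketch only: the paths and column names are hypothetical, not taken from any of the roles above.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("partitioning-demo").getOrCreate()

# Hypothetical source: newline-delimited JSON events with an event_ts field.
events = spark.read.json("/data/raw/events/")

(events
    .withColumn("event_date", F.to_date("event_ts"))  # derive the partition key
    .repartition("event_date")                        # group rows by date before writing
    .write
    .mode("overwrite")
    .partitionBy("event_date")                        # one directory per date
    .parquet("/data/curated/events/"))                # hypothetical target path
```

Partitioning by a low-cardinality column such as a date keeps file counts manageable and lets downstream Spark queries prune whole directories.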
build. Dojo home and away We believe our best work happens when we collaborate in-person. These "together days" foster communication, drive innovation and spark our brightest ideas. That's why we have an office-first culture. This means working from the office 4+ days per week. With offices …
of Java and its ecosystems, including experience with popular Java frameworks (e.g. Spring, Hibernate). Familiarity with big data technologies and tools (e.g. Hadoop, Spark, NoSQL databases). Strong experience with Java development, including design, implementation, and testing of large-scale systems. Experience working on public sector projects and …
Design and Maintenance, Apps, Hive Metastore Management, Network Management, Delta Sharing, Dashboards, and Alerts. Proven experience working with big data technologies, i.e., Databricks and Apache Spark. Proven experience working with Azure data platform services, including Storage, ADLS Gen2, Azure Functions, Kubernetes. Background in cloud platforms and data architectures, such … experience of ETL/ELT, including Lakehouse, Pipeline Design, Batch/Stream processing. Strong working knowledge of programming languages, including Python, SQL, PowerShell, PySpark, Spark SQL. Good working knowledge of data warehouse and data mart architectures. Good experience in Data Governance, including Unity Catalog, Metadata Management, Data Lineage, Quality …
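Since the Databricks requirements above revolve around Delta Lake and batch/stream Lakehouse processing, a minimal upsert (MERGE) into a Delta table might look like the sketch below. It assumes the delta-spark package is installed; the table path, staging source, and key column are all hypothetical.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

# Configure the session for Delta Lake (assumes delta-spark is on the classpath).
spark = (SparkSession.builder.appName("delta-merge-demo")
         .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog",
                 "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

# Hypothetical staging data to merge into the lake.
updates = spark.read.parquet("/data/staging/customers/")

target = DeltaTable.forPath(spark, "/data/lake/customers")
(target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()      # update rows whose key already exists
    .whenNotMatchedInsertAll()   # insert brand-new rows
    .execute())
```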
month initial contract Job Description: Seeking a Data Engineer with a strong understanding of data concepts, including data types, data structures, schemas (JSON and Spark), and schema management. Key Skills and Experience: strong understanding of complex JSON manipulation; experience with Data Pipelines using custom Python/PySpark frameworks; knowledge … Frameworks: JSON, YAML, Python (advanced proficiency, Pydantic bonus), SQL, PySpark, Delta Lake, Bash, Git, Markdown, Scala (bonus), Azure SQL Server (bonus). Technologies: Azure Databricks, Apache Spark, Delta Tables, data processing with Python, PowerBI (data ingestion and integration), JIRA. Additional Notes: Candidates with current or past high-level security …
Remote. Active SC clearance required. £640 per day inside IR35. REQUIRED: Strong understanding of data concepts — data types, data structures, schemas (both JSON and Spark), schema management, etc. Strong understanding of complex JSON manipulation. Experience working with Data Pipelines using custom Python/PySpark frameworks. Strong understanding of … PySpark, Delta Lake, Bash (both CLI usage and scripting), Git, Markdown, Scala. DESIRABLE: Azure SQL Server as a Hive Metastore. DESIRABLE TECHNOLOGIES: Azure Databricks, Apache Spark, Delta Tables, data processing with Python, PowerBI (Integration/Data Ingestion), JIRA. If you meet the above requirements, please apply for the …
MONTH INITIAL CONTRACT Seeking a Data Engineer who has a strong understanding of data concepts - data types, data structures, schemas (both JSON and Spark), schema management, etc. - Strong understanding of complex JSON manipulation - Experience working with Data Pipelines using custom Python/PySpark frameworks - Strong understanding of the … Lake - Bash (both CLI usage and scripting) - Git - Markdown - Scala (bonus, not compulsory) - Azure SQL Server as a Hive Metastore (bonus) Technologies - Azure Databricks - Apache Spark - Delta Tables - Data processing with Python - PowerBI (Integration/Data Ingestion) - JIRA Due to the nature and urgency of this post, candidates …
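The three near-identical listings above all hinge on Spark schemas and complex JSON manipulation. The sketch below shows the general shape of that work in PySpark — an explicit schema for a nested JSON source, then flattening the nesting with explode. All field names and paths are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import (StructType, StructField, StringType,
                               IntegerType, ArrayType)

spark = SparkSession.builder.appName("json-schema-demo").getOrCreate()

# Explicit Spark schema for a hypothetical nested order payload.
schema = StructType([
    StructField("order_id", StringType(), nullable=False),
    StructField("customer", StructType([
        StructField("id", StringType()),
        StructField("country", StringType()),
    ])),
    StructField("items", ArrayType(StructType([
        StructField("sku", StringType()),
        StructField("qty", IntegerType()),
    ]))),
])

orders = spark.read.schema(schema).json("/data/raw/orders/")

# Flatten: one row per order item, with the customer field pulled up.
flat = (orders
    .select("order_id",
            F.col("customer.country").alias("country"),
            F.explode("items").alias("item"))
    .select("order_id", "country", "item.sku", "item.qty"))
flat.show()
```

Declaring the schema up front, rather than relying on inference, is what makes schema management tractable: mismatched payloads surface as nulls or read errors instead of silently changing the inferred types.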
+ 10% bonus + benefits. Purpose: build and maintain large, scalable data lakes, processes and pipelines. Tech: Python, Iceberg/Kafka, Spark/Glue, CI/CD. Industry: financial services/securities trading. Immersum continue to support a leading SaaS securities trading platform, who are hiring their first Data … Infra tooling using Terraform, Ansible and Jenkins whilst automating everything with Python. Tech (experience in any listed is advantageous): Python; Cloud: AWS; Lakehouse: Apache Spark or AWS Glue; Cloud-native storage: Iceberg, RDS, Redshift, Kafka; IaC: Terraform, Ansible; CI/CD: Jenkins, GitLab. Other platforms such as …
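For the Kafka-to-lake pipeline stack this listing describes, a minimal Spark Structured Streaming job that tails a Kafka topic and lands it in cloud storage is sketched below. It assumes the spark-sql-kafka connector is available; the broker address, topic, and paths are hypothetical, and plain Parquet stands in for an Iceberg sink to keep the sketch self-contained.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-ingest-demo").getOrCreate()

# Tail a hypothetical Kafka topic; requires the spark-sql-kafka package.
stream = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "trades")
    .load()
    .selectExpr("CAST(value AS STRING) AS payload"))  # raw bytes -> string

# Land micro-batches in the lake; the checkpoint makes the job restartable.
query = (stream.writeStream
    .format("parquet")
    .option("path", "/data/lake/trades/")
    .option("checkpointLocation", "/data/checkpoints/trades/")
    .start())

query.awaitTermination()
```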
Saffron Walden, Essex, South East, United Kingdom Hybrid / WFH Options
EMBL-EBI
Experience in developing web infrastructure (Solr, Kubernetes). Experience with Git and basic Unix commands. You may also have: experience with large data processing technologies (Apache Spark). Other helpful information: the team works in a hybrid working pattern and spends 2 days per week in the office. Apply now! Benefits …
Birmingham, Staffordshire, United Kingdom Hybrid / WFH Options
Investigo
advanced visualisations, ML model interpretation, and KPI tracking. Deep knowledge of feature engineering, model deployment, and MLOps best practices. Experience with big data processing (Spark, Hadoop) and cloud-based data science environments. Other: Ability to integrate ML workflows into large-scale data pipelines. Strong experience in data preprocessing, feature …
Technology, or related field. Proficiency in software engineering with experience in Java & Spring or other major programming languages. Preferred Qualifications: Experience with Spring Boot, Spark (Big Data), and message bus architecture. Familiarity with containerisation (e.g., Kubernetes), AWS Cloud, and CI/CD pipelines (Jenkins). If you meet the above criteria …