Manchester, England, United Kingdom Hybrid / WFH Options
Made Tech
and able to guide how one could deploy infrastructure into different environments. Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks or Hadoop. Good understanding of possible architectures involved in modern data system design (Data Warehouse, Data Lakes, Data Meshes). Ability to …
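For readers unfamiliar with the stack named above, a minimal PySpark sketch of the kind of JSON/CSV handling and transformation this listing describes; the file paths and column names are invented for illustration, not taken from the listing:

    # Reading semi-structured (JSON) and tabular (CSV) inputs with PySpark,
    # then a typical transform: normalise types, join, aggregate.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("ingest-example").getOrCreate()

    orders = spark.read.json("data/orders.json")
    customers = spark.read.csv("data/customers.csv", header=True, inferSchema=True)

    daily_spend = (
        orders
        .withColumn("order_date", F.to_date("created_at"))
        .join(customers, "customer_id")
        .groupBy("order_date", "region")
        .agg(F.sum("amount").alias("total_spend"))
    )

    daily_spend.write.mode("overwrite").parquet("out/daily_spend")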
Bristol, England, United Kingdom Hybrid / WFH Options
Made Tech
and able to guide how one could deploy infrastructure into different environments. Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks or Hadoop. Good understanding of possible architectures involved in modern data system design (Data Warehouse, Data Lakes, Data Meshes). Ability to …
requires candidates to go through SC Clearance, so you must be eligible. Experience of AWS tools (e.g. Athena, Redshift, Glue, EMR). Java, Scala, Python, Spark, SQL. Experience of developing enterprise-grade ETL/ELT data pipelines. NoSQL databases: DynamoDB/Neo4j/Elastic, Google Cloud Datastore. Snowflake Data …
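A hedged sketch of the AWS-flavoured ELT pattern this listing points at: landing date-partitioned Parquet on S3 so Athena/Glue can query it cheaply. The bucket names and columns are assumptions, and reading s3:// paths directly assumes an EMR/Glue-style environment with the S3 connector configured:

    # Clean a raw event feed and write it back partitioned by date;
    # partitioning keeps Athena scans (and costs) small.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("s3-elt-example").getOrCreate()

    events = spark.read.json("s3://example-raw-bucket/events/")

    cleaned = (
        events
        .filter(F.col("event_type").isNotNull())
        .withColumn("event_date", F.to_date("event_ts"))
    )

    (cleaned.write
        .mode("append")
        .partitionBy("event_date")
        .parquet("s3://example-curated-bucket/events/"))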
emphasis on PySpark and Databricks for this particular role. Technical Skills Required: Azure (ADF, Functions, Blob Storage, Data Lake Storage, Azure Databricks), Databricks, Spark, Delta Lake, SQL, Python, PySpark, ADLS. Day-to-Day Responsibilities: Extensive experience in designing, developing, and managing end-to-end data pipelines, ETL (Extract …
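A minimal sketch of the Databricks/Delta Lake upsert pattern this stack implies, assuming the delta-spark package (preinstalled on Databricks); the mount paths, table location and customer_id key are invented for illustration:

    # Merge a new batch from ADLS into a curated Delta table,
    # updating matching rows and inserting new ones.
    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("delta-example").getOrCreate()

    updates = spark.read.parquet("/mnt/landing/customers/")

    target = DeltaTable.forPath(spark, "/mnt/curated/customers")

    (target.alias("t")
        .merge(updates.alias("u"), "t.customer_id = u.customer_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())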
or more of the following tools: Informatica PowerCenter, SAS Data Integration Studio, Microsoft SSIS, Ab Initio, etc. • Ideally, you have experience in the Hadoop ecosystem (Spark, Kafka, HDFS, Hive, HBase, …), Docker and orchestration platforms (Kubernetes, OpenShift, AKS, GKE, …), and NoSQL databases (MongoDB, Cassandra, Neo4j) • Any experience with cloud platforms such as …
and Public Services, Healthcare, Life Sciences, and Transport. Essential Skills & Experience: Design and deploy data pipelines in a big data architecture using Java, Python, Scala, Spark, and SQL. Execute tasks involving scripting, API data extraction, and SQL queries. Proficient in data cleaning, wrangling, visualisation, and reporting. Specialised in AWS cloud …
NumPy, scikit-learn). Understanding of database technologies (ETL) and SQL proficiency for data manipulation, data mining and querying. Knowledge of big data tools (Spark or Hadoop a plus). Power BI dashboard design/development. Regulatory Awareness/Compliance: Uphold regulatory/compliance requirements relevant to your role …
Google Cloud Professional Cloud Architect or Professional Cloud Developer certification. Very desirable to have hands-on experience with ETL tools, Hadoop-based technologies (e.g., Spark), and batch/streaming data pipelines (e.g., Beam, Flink, etc.). Proven expertise in designing and constructing data lakes and data warehouse solutions utilising technologies …
DynamoDB, Aurora). Knowledge and experience with Snowflake and other databases (PostgreSQL, MS SQL Server, MySQL). Experience with big data batch and streaming technologies like Spark, Kafka, Flink, Beam, Kinesis. SnowPro Certification or equivalent from AWS. Comfort working within an agile development cycle and exposure to: Linux development, Git and …
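A minimal Structured Streaming sketch of the Spark-plus-Kafka combination listed above, assuming the spark-sql-kafka connector is on the classpath; the broker address, topic name and message schema are placeholders:

    # Consume a Kafka topic, parse the JSON payload, and sink to Parquet.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import StructType, StringType, DoubleType

    spark = SparkSession.builder.appName("stream-example").getOrCreate()

    schema = (StructType()
        .add("ticker", StringType())
        .add("price", DoubleType()))

    raw = (spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "trades")
        .load())

    # Kafka delivers bytes; cast to string and unpack the JSON columns.
    trades = (raw
        .select(F.from_json(F.col("value").cast("string"), schema).alias("d"))
        .select("d.*"))

    query = (trades.writeStream
        .format("parquet")
        .option("path", "out/trades")
        .option("checkpointLocation", "out/_checkpoints/trades")
        .start())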
Birmingham, West Midlands, United Kingdom Hybrid / WFH Options
Leo Recruitment Limited
in programming languages and tools for data analysis, such as Python, R, and SQL. You must be proficient in big data technologies, such as Spark, Kafka and/or Hadoop. A strong understanding of statistical analysis, predictive modelling, machine learning algorithms, and data development and optimisation is essential. You …
Staines-Upon-Thames, England, United Kingdom Hybrid / WFH Options
IFS
with data ingestion tools such as Airbyte and Fivetran, accommodating a wide array of data sources. Mastery of large-scale data processing techniques using Spark or Dask. Strong programming skills in Python, Scala, C#, or Java, and adeptness with cloud SDKs and APIs. Deep understanding of AI/ML …
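Since this listing accepts Spark or Dask for large-scale processing, a short Dask sketch of the same style of out-of-core aggregation; the file pattern and column names are illustrative assumptions:

    # Dask partitions the CSVs lazily and only executes on .compute().
    import dask.dataframe as dd

    df = dd.read_csv("data/transactions-*.csv")
    summary = df.groupby("region")["amount"].sum()
    print(summary.compute())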
to ensure efficient and accurate data delivery. Optimize data workflows for performance, scalability, and cost-effectiveness. Technical Expertise: Demonstrate in-depth expertise in Databricks, Apache Spark, and related big data technologies. Stay informed about the latest industry trends and advancements in data engineering. Quality Assurance: Conduct thorough testing … projects. Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field. Proven experience in data engineering with a focus on Databricks and Apache Spark. Strong programming skills, preferably in Python or Scala. Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and associated data services. Excellent communication skills …
and AI models. Data Engineer Required Experience: data engineering experience (2+ years); cloud platform proficiency (e.g., AWS, Azure, GCP); data pipeline development (e.g., Airflow, Apache Spark); SQL proficiency, database design; visualization tools knowledge (e.g., Tableau, Power BI, Looker). Data Engineer Application Process: This is a 1-year contract requirement …
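A minimal sketch of the Airflow-style pipeline development this listing asks for, assuming Airflow 2.x; the DAG id and the extract/load callables are placeholders, not from the listing:

    # Two dependent tasks scheduled daily: extract, then load.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull from source API")   # placeholder

    def load():
        print("write to warehouse")     # placeholder

    with DAG(
        dag_id="daily_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        t1 = PythonOperator(task_id="extract", python_callable=extract)
        t2 = PythonOperator(task_id="load", python_callable=load)
        t1 >> t2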
for seamless data integration. * Understanding of DevOps best practices for SQL and Power BI projects, including DACPAC, CI/CD, and versioning. * Familiarity with Apache Spark for big data processing. * Additional development experience in Python or related technologies. * Experience gained within the Media, Travel or Broadcast Media sectors …
Employment Type: Permanent
Salary: £65,000 - £70,000/annum, Hybrid, Health, Dental, Extra Hols
and expertise in tools like Informatica & Talend MDM. Big data – Hadoop ecosystem, distributions like Cloudera/Hortonworks, Pig and Hive. Data processing frameworks – Spark & Spark Streaming. Hands-on experience with multiple databases like PostgreSQL, Snowflake, Oracle, MS SQL Server, NoSQL (HBase/Cassandra, MongoDB) is required. Knowledge …
value through improved data handling and analysis. Responsibilities: Build predictive models using machine-learning techniques that generate data-driven insights on modern data platforms (Spark, Hadoop and other MapReduce tools); develop and productionise containerised algorithms for deployment in hybrid cloud environments (GCP, Azure); connect and blend data from …
quality testing frameworks. Proficiency in Python and familiarity with modern software engineering practices, including 12-factor, CI/CD, and Agile methodologies. Deep understanding of Spark (PySpark), Python (pandas), orchestration software (e.g. Airflow, Prefect) and databases, data lakes and data warehouses. Experience with cloud technologies, particularly AWS Cloud services, with …
pivotal role in designing, building, and maintaining their data infrastructure while collaborating closely with senior stakeholders across the organisation. Your expertise in Azure, Databricks, Spark, Python, and data modelling will be critical in driving the success of their data initiatives. Key Responsibilities: Lead the complete development cycle of data … comprehensive understanding of data modelling, data warehousing principles, and the innovative Lakehouse architecture. Exceptional proficiency in ETL methodologies, preferably utilising Azure Databricks or equivalent technologies (Spark, Spark SQL, Python, SQL), including deep insight into ETL/ELT design patterns. Proficient in Databricks, SQL, and Python, with a robust understanding …
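A small sketch of the ELT design pattern referenced above, in the load-then-transform style typical of Databricks and Spark SQL; the table and column names are invented, and the analytics schema is assumed to already exist:

    # "L" before "T": land raw data as a queryable view untouched,
    # then transform declaratively inside the engine with SQL.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("elt-example").getOrCreate()

    spark.read.parquet("/mnt/raw/sales").createOrReplaceTempView("raw_sales")

    monthly = spark.sql("""
        SELECT date_trunc('month', sale_ts) AS month,
               store_id,
               SUM(amount)                  AS revenue
        FROM raw_sales
        GROUP BY 1, 2
    """)

    # Requires a metastore (present on Databricks); schema is assumed.
    monthly.write.mode("overwrite").saveAsTable("analytics.monthly_revenue")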
a qualified Data Engineer to join our team, where your responsibilities will include: Designing, optimizing, and maintaining scalable data pipelines and ETL processes using Spark, ensuring streamlined data processing and integration. Collaborating cross-functionally to translate complex data requirements into actionable technical solutions that drive business objectives. Leveraging Microsoft … the Midlands. Ideal Candidate Profile: We are seeking an individual who has the following attributes: Proven expertise as a Data Engineer, demonstrating proficiency in Apache Spark and cloud-based technologies, particularly Microsoft Azure and Databricks. Strong programming skills, with a focus on Python, along with proficiency in ETL …
Better Placed Ltd - A Sunday Times Top 10 Employer in 2023!
warehousing technologies (e.g., Redshift, Snowflake). Strong analytical and problem-solving skills. Experience with cloud platforms (AWS, Azure, or GCP) and big data frameworks (Hadoop, Spark) is a plus. Data Engineer – London …
Azure Solutions Architect Expert. Experience with other cloud platforms such as AWS or Google Cloud Platform. Knowledge of big data technologies such as Hadoop, Spark, etc. If you are passionate about leveraging Azure technologies to drive data-driven insights and solutions, we encourage you to apply for this exciting …
skills include: Experience deploying, securing and supporting cloud infrastructure platforms. Understanding of security frameworks/standards. Understanding of data streaming and messaging frameworks (Kafka, Spark, etc.) and modern database technologies (Cockroach, etc.). Understanding of distributed tracing and monitoring (Zipkin, OpenTracing, Prometheus, ELK stack, Micrometer metrics, etc.). Experience with containers …