HBase, Elasticsearch). Build, operate, maintain, and support cloud infrastructure and data services. Automate and optimize data engineering pipelines. Utilize big data technologies (Databricks, Spark). Develop custom security applications, APIs, AI/ML models, and advanced analytic technologies. Experience with threat detection in Azure Sentinel, Databricks, MPP Databases More ❯
knowledge of data modelling, warehousing, and real-time analytics. Proficiency in SQL, Python, Java, or similar programming languages. Familiarity with big data technologies (e.g., Spark, Hadoop) and BI tools (e.g., Power BI, Tableau). Excellent problem-solving and stakeholder engagement skills. Desirable: Experience in research-driven or complex data More ❯
Extensive development experience using SQL. Hands-on experience with MPP databases such as Redshift, BigQuery, or Snowflake, and modern transformation/query engines like Spark, Flink, Trino. Familiarity with workflow management tools (e.g., Airflow) and/or dbt for transformations. Comprehensive understanding of modern data platforms, including data governance More ❯
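For illustration, a minimal sketch of how a workflow manager and dbt typically fit together, as this listing describes. Assumes a recent Airflow 2.x install with dbt available on the worker's PATH; the schedule, staging table, and dbt selector are all hypothetical:

```python
# A minimal Airflow 2.x DAG sketch: stage data, then run dbt.
# The staging table, dbt selector, and schedule are hypothetical.
from datetime import datetime

from airflow.decorators import dag, task
from airflow.operators.bash import BashOperator


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_transform():
    @task
    def extract() -> str:
        # Placeholder for landing source data in a warehouse staging area.
        return "staging.events"

    # dbt is usually invoked as a CLI step from the orchestrator.
    run_dbt = BashOperator(
        task_id="run_dbt",
        bash_command="dbt run --select staging+",
    )

    extract() >> run_dbt


daily_transform()
```

The split of responsibilities is the point: the orchestrator owns scheduling and retries, while dbt owns the SQL transformations, which keeps the two concerns separable.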
Knutsford Contract Role Job Description: AWS Services: Glue, Lambda, IAM, Service Catalog, CloudFormation, Lake Formation, SNS, SQS, EventBridge Language & Scripting: Python and Spark ETL: dbt Good to Have: Airflow, Snowflake, Big Data (Hadoop), and Teradata Responsibilities: Serve as the primary point of contact for all AWS-related More ❯
Cassandra, and Redis. In-depth knowledge of ETL/ELT pipelines, data transformation, and storage optimization. Skilled in working with big data frameworks like Spark, Flink, and Druid. Hands-on experience with both bare metal and AWS environments. Strong programming skills in Python, Java, and other relevant languages. Proficiency More ❯
related field. 5+ years of experience in data engineering and data quality. Strong proficiency in Python/Java, SQL, and data processing frameworks including Apache Spark. Knowledge of machine learning and its data requirements. Attention to detail and a strong commitment to data integrity. Excellent problem-solving skills and More ❯
Systems, Cloudera/Hortonworks, AWS EMR, GCP DataProc or GCP Cloud Data Fusion. Streaming technologies and processing engines: Kinesis, Kafka, Pub/Sub and Spark Streaming. Experience of working with CI/CD technologies (Git, Jenkins, Spinnaker, GCP Cloud Build, Ansible, etc.) and experience building and deploying solutions to More ❯
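As a rough sketch of the Kafka-plus-Spark-Streaming pairing this listing names: a PySpark Structured Streaming job that reads a topic and lands it as Parquet. The broker address, topic, and paths are hypothetical, and the spark-sql-kafka connector must be on the Spark classpath:

```python
# A minimal Spark Structured Streaming sketch: Kafka topic -> Parquet files.
# Broker, topic, and paths are hypothetical; the spark-sql-kafka connector
# must be available on the Spark classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
    # Kafka delivers key/value as binary; cast the payload to a string.
    .select(col("value").cast("string").alias("payload"))
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/data/events")
    .option("checkpointLocation", "/checkpoints/events")
    .start()
)
query.awaitTermination()
```

The checkpoint location is what gives the job exactly-once file output across restarts, which is why streaming roles tend to ask about it alongside the engine itself.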
HiveQL, SparkSQL, Scala) Experience with one or more scripting languages (e.g., Python, KornShell) PREFERRED QUALIFICATIONS Experience with big data technologies such as: Hadoop, Hive, Spark, EMR Experience as a data engineer or related specialty (e.g., software engineer, business intelligence engineer, data scientist) with a track record of manipulating, processing More ❯
communication and collaboration skills. Azure certifications such as Azure Data Engineer Associate or Azure Solutions Architect Expert. Experience with big data technologies like Hadoop, Spark, or Databricks. Familiarity with machine learning and AI concepts. If you encounter any suspicious mail, advertisements, or persons who offer jobs at Wipro, please More ❯
to support business insights, analytics, and other data-driven initiatives. Job Specification (Technical Skills): Cloud Platforms: Expert-level proficiency in Azure (Data Factory, Databricks, Spark, SQL Database, DevOps/Git, Data Lake, Delta Lake, Power BI), with working knowledge of Azure WebApp and Networking. Conceptual understanding of Azure AI More ❯
Nottingham, Nottinghamshire, East Midlands, United Kingdom Hybrid / WFH Options
Profile 29
proposal development Experience in Data & AI architecture and solution design Experience working for a consultancy or agency Experience with data engineering tools (SQL, Python, Spark) Hands-on experience with cloud platforms (Azure, AWS, GCP) Hands-on experience with data platforms (Azure Synapse, Databricks, Snowflake) Ability to translate clients' business More ❯
HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) PREFERRED QUALIFICATIONS - Experience with big data technologies such as: Hadoop, Hive, Spark, EMR - Experience with any ETL tool like Informatica, ODI, SSIS, BODI, Datastage, etc. Our inclusive culture empowers Amazonians to deliver the best results for More ❯
Cardiff, South Glamorgan, United Kingdom Hybrid / WFH Options
RVU Co UK
most of the following: Strong knowledge of SQL and Python programming. Extensive experience working within a cloud environment. Experience with big data technologies (e.g. Spark, Databricks, Delta Lake, BigQuery). Experience with alternative data technologies (e.g. duckdb, polars, daft). Familiarity with eventing technologies (Event Hubs, Kafka, etc.). More ❯
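A brief, hedged illustration of the "alternative data technologies" named above: the same aggregation expressed in polars (lazy API) and DuckDB (SQL). Recent versions of both libraries are assumed; the file and column names are hypothetical:

```python
# The same aggregation in polars (lazy) and DuckDB (SQL); recent versions
# of both libraries assumed. File and column names are hypothetical.
import duckdb
import polars as pl

# polars: lazy scan, so only the needed columns are read from Parquet.
top_events = (
    pl.scan_parquet("events.parquet")
    .group_by("event_type")
    .agg(pl.len().alias("n"))
    .sort("n", descending=True)
    .collect()
)

# DuckDB: the equivalent as plain SQL over the same file.
top_events_sql = duckdb.sql(
    "SELECT event_type, COUNT(*) AS n FROM 'events.parquet' "
    "GROUP BY event_type ORDER BY n DESC"
).pl()
```

Both engines run in-process on a single node, which is why they come up as lighter-weight alternatives to a Spark cluster for medium-sized data.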
managing technical teams. Designing and architecting data and analytic solutions. Developing data processing pipelines in Python for Databricks including many of the following technologies: Spark, Delta, Delta Live Tables, PyTest, Great Expectations (or similar). Building and orchestrating data and analytical processing for streaming data with technologies such as More ❯
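By way of example, a minimal sketch of one step in the kind of Databricks pipeline described here: a PySpark transformation written to a partitioned Delta table. Paths and column names are hypothetical; outside Databricks this additionally needs the delta-spark package configured:

```python
# A minimal PySpark -> Delta sketch; paths and columns are hypothetical.
# On Databricks, Delta support is built in; elsewhere configure delta-spark.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

orders = spark.read.json("/raw/orders")

cleaned = (
    orders.filter(col("amount") > 0)  # simple data-quality gate
    .withColumn("order_date", to_date(col("created_at")))
)

# A partitioned overwrite into a Delta table; Delta adds ACID semantics,
# schema enforcement, and time travel on top of the Parquet files.
(
    cleaned.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("/curated/orders")
)
```

In practice the filter step is where tools like Great Expectations slot in, replacing ad-hoc predicates with declared, testable expectations.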
is the industry-leading cloud big data platform for petabyte-scale data processing, interactive analytics, and machine learning using open-source frameworks such as Apache Spark, Trino, Hadoop, Hive, and HBase. Amazon Athena is a serverless query service that simplifies analyzing data directly in Amazon S3 using standard … Experience designing or architecting (design patterns, reliability, and scaling) of new and existing systems Master's degree in computer science or equivalent Experience with Apache Hadoop ecosystem applications: Hadoop, Hive, Presto, Spark, and more Our inclusive culture empowers Amazonians to deliver the best results for our customers. If More ❯
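To ground the Athena mention, a minimal boto3 sketch of submitting a standard SQL query against data in S3. The region, database, table, and results bucket are all hypothetical:

```python
# A minimal boto3 sketch of a standard SQL query via Athena over S3 data.
# The region, database, table, and results bucket are all hypothetical.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

run = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS n FROM logs GROUP BY status",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://my-results-bucket/athena/"},
)
# Athena runs asynchronously; poll get_query_execution with this id.
print(run["QueryExecutionId"])
```

Because Athena is serverless, the query scans the S3 objects directly and there is no cluster to size or keep warm, which is the contrast being drawn with EMR.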
Trafford Park, Trafford, Greater Manchester, United Kingdom Hybrid / WFH Options
ISR RECRUITMENT LIMITED
cloud solutions Handling real-time data processing and ETL jobs Applying AI and data analytics to large datasets Working with big data tools like Apache Spark and AWS technologies such as Elastic MapReduce, Athena and Lambda Please contact Edward Laing here at ISR Recruitment to learn more about More ❯
Employment Type: Permanent
Salary: £75,000 - £85,000/annum (plus excellent company benefits)
hoc analytics, data visualisation, and BI tools (Superset, Redash, Metabase) Experience with workflow orchestration tools (Airflow, Prefect) Experience writing data processing pipelines & ETL (Python, Apache Spark) Excellent communication skills and ability to work collaboratively in a team environment Experience with web scraping Perks & Benefits Competitive salary package (including More ❯
GitHub integration and automation). Experience with scripting languages such as Python or R. Working knowledge of message queuing and stream processing. Experience with Apache Spark or similar technologies. Experience with Agile and Scrum methodologies. Familiarity with dbt and Airflow is an advantage. Experience working in a start More ❯