Nottingham, Nottinghamshire, East Midlands, United Kingdom Hybrid / WFH Options
Profile 29
proposal development Experience in Data & AI architecture and solution design Experience working for a consultancy or agency Experience with data engineering tools (SQL, Python, Spark) Hands-on experience with cloud platforms (Azure, AWS, GCP) Hands-on experience with data platforms (Azure Synapse, Databricks, Snowflake) Ability to translate clients' business …
HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) PREFERRED QUALIFICATIONS - Experience with big data technologies such as Hadoop, Hive, Spark, EMR - Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, or DataStage Our inclusive culture empowers Amazonians to deliver the best results for …
part of the Core Account team within the Services group (TTS) and is responsible for building a scalable, high-performance data platform on Big Data technologies (Spark, Scala, Hive, Hadoop) along with Kafka/Java and AI technologies to support core account data needs across multiple lines of business. As a …
managing technical teams. Designing and architecting data and analytic solutions. Developing data processing pipelines in Python for Databricks, including many of the following technologies: Spark, Delta, Delta Live Tables, PyTest, Great Expectations (or similar). Building and orchestrating data and analytical processing for streaming data with technologies such as …
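The Databricks listing above pairs Spark and Delta with PyTest and Great Expectations, which points at a common pattern: keeping transformation logic in plain, unit-testable functions before wiring it into a cluster job. A minimal sketch of that pattern (function and field names are illustrative, not taken from any listing):

```python
# Sketch: pipeline logic kept as a plain function so it can be unit-tested
# with PyTest without a Spark cluster. In Databricks the same rule would
# typically be applied through the DataFrame API or a UDF.

def clean_orders(rows):
    """Drop rows missing an order_id and normalise amounts to floats."""
    cleaned = []
    for row in rows:
        if not row.get("order_id"):
            continue  # data-quality rule: order_id is mandatory
        cleaned.append({
            "order_id": row["order_id"],
            "amount": float(row.get("amount", 0)),
        })
    return cleaned

# Runnable as a plain assertion, or collected by PyTest as-is:
sample = [{"order_id": "A1", "amount": "9.50"}, {"amount": "3.00"}]
result = clean_orders(sample)
```

The same rule expressed declaratively is what a tool like Great Expectations would manage as an "expectation" on the table.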
like Ansible, Terraform, Docker, Kafka, Nexus Experience with observability platforms: InfluxDB, Prometheus, ELK, Jaeger, Grafana, Nagios, Zabbix Familiarity with Big Data tools: Hadoop, HDFS, Spark, HBase Ability to write code in Go, Python, Bash, or Perl for automation. Work Experience 5-7+ years of proven experience in previous …
is the industry-leading cloud big data platform for petabyte-scale data processing, interactive analytics, and machine learning using open-source frameworks such as Apache Spark, Trino, Hadoop, Hive, and HBase. Amazon Athena is a serverless query service that simplifies analyzing data directly in Amazon S3 using standard … Experience designing or architecting (design patterns, reliability, and scaling) of new and existing systems Master's degree in computer science or equivalent Experience with Apache Hadoop ecosystem applications: Hadoop, Hive, Presto, Spark, and more Our inclusive culture empowers Amazonians to deliver the best results for our customers. If …
Trafford Park, Trafford, Greater Manchester, United Kingdom Hybrid / WFH Options
ISR RECRUITMENT LIMITED
cloud solutions Handling real-time data processing and ETL jobs Applying AI and data analytics to large datasets Working with big data tools like Apache Spark and AWS technologies such as Elastic MapReduce, Athena and Lambda Please contact Edward Laing here at ISR Recruitment to learn more about …
Employment Type: Permanent
Salary: £75000 - £85000/annum (plus excellent company benefits)
hoc analytics, data visualisation, and BI tools (Superset, Redash, Metabase) Experience with workflow orchestration tools (Airflow, Prefect) Experience writing data processing pipelines & ETL (Python, Apache Spark) Excellent communication skills and ability to work collaboratively in a team environment Experience with web scraping Perks & Benefits Competitive salary package (including …
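Several listings ask for Python ETL pipelines run under an orchestrator such as Airflow or Prefect. The shape of such a pipeline can be sketched with standard-library code alone (task names and data are invented for illustration; in Airflow each function would become a separate task in a DAG):

```python
import json

# Sketch of the extract -> transform -> load stages an orchestrator
# like Airflow would schedule and retry as independent tasks.

def extract():
    # Stand-in for reading from an API, S3 bucket, or source database.
    return json.loads('[{"user": "a", "spend": 10}, {"user": "b", "spend": 25}]')

def transform(records):
    # Example business rule: keep high-spend users only.
    return [r for r in records if r["spend"] >= 20]

def load(records, sink):
    # Stand-in for writing to a warehouse table.
    sink.extend(records)

warehouse = []
load(transform(extract()), warehouse)
```

Keeping each stage as a separate function with explicit inputs and outputs is what makes the pipeline both orchestratable and testable.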
Github integration and automation). Experience with scripting languages such as Python or R. Working knowledge of message queuing and stream processing. Experience with Apache Spark or similar technologies. Experience with Agile and Scrum methodologies. Familiarity with dbt and Airflow is an advantage. Experience working in a start …
cross-functional teams, and play a key role in optimising their data infrastructure. Requirements: Strong experience in Python, SQL, and big data technologies (Hadoop, Spark, NoSQL) Hands-on experience with cloud platforms (AWS, GCP, Azure) Proficiency in data processing frameworks like PySpark A problem-solver who thrives in a …
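The PySpark proficiency requested above largely comes down to expressing work as chained map/filter/reduce transformations. The same shape can be shown in plain Python (a sketch only; on a cluster these would be RDD or DataFrame operations distributed across executors):

```python
from collections import Counter

# Word count, the canonical Spark example, written as the same
# map -> flatten -> reduce-by-key chain PySpark would distribute.
lines = ["big data tools", "big data platforms"]

words = (word for line in lines for word in line.split())  # like flatMap
counts = Counter(words)                                    # like reduceByKey
```

In PySpark the equivalent would be `rdd.flatMap(str.split).map(lambda w: (w, 1)).reduceByKey(add)`; the mental model of stateless, composable transformations is identical.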
Birmingham, England, United Kingdom Hybrid / WFH Options
Talent
Python, R, or SQL.
• Experience with machine learning frameworks (e.g., Scikit-learn, TensorFlow, PyTorch).
• Proficiency in data manipulation and analysis (e.g., Pandas, NumPy, Spark).
• Knowledge of data visualization tools (e.g., Power BI, Tableau, Matplotlib).
• Understanding of statistical modelling, hypothesis testing, and A/B testing.
• Experience …
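The A/B testing requirement above can be made concrete with a two-proportion z-test, computable with the standard library alone (the sample figures are invented for illustration):

```python
import math

# Two-proportion z-test: did variant B convert better than control A?
conv_a, n_a = 200, 1000   # control: 20.0% conversion
conv_b, n_b = 240, 1000   # variant: 24.0% conversion

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = math.erfc(abs(z) / math.sqrt(2))        # two-sided, via normal CDF
```

In practice a library routine (e.g. statsmodels' `proportions_ztest`) would replace the hand-rolled arithmetic, but the statistic it computes is exactly this one.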
the ability to work in a fast-paced, collaborative environment. Strong communication and interpersonal skills. Preferred Skills: Experience with big data technologies (e.g., Hadoop, Spark). Knowledge of machine learning and AI integration with data architectures. Certification in cloud platforms or data management.
related field. - Proven experience (5+ years) in developing and deploying data engineering pipelines and products - Strong proficiency in Python - Experience with Hadoop, Kafka, or Spark - Experience leading/mentoring junior team members - Strong communication and interpersonal skills, with the ability to effectively communicate complex technical concepts to both technical …
processing frameworks such as Kafka, NoSQL, Airflow, TensorFlow, or Spark. Finally, experience with cloud platforms like AWS or Azure, including data services such as Apache Airflow, Athena, or SageMaker, is essential for the mid-level role. The Role: Build and maintain scalable data pipelines. Design/implement optimised data architecture.
to influence. A drive to learn new technologies and techniques. Experience/aptitude towards research and openness to learning new technologies. Experience with Azure, Spark (PySpark), and Kubeflow - desirable. We pay competitive salaries based on the experience of candidates. Along with this, you will be entitled to an award …
in cloud architecture and implementation Bachelor's degree in Computer Science, Engineering, a related field, or equivalent experience Experience with databases (e.g., SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) Experience in consulting, design, and implementation of serverless distributed solutions Experience in software development with an object-oriented language PREFERRED QUALIFICATIONS AWS experience …
management, and data dictionaries Familiar with modern data visualisation tools (e.g. QuickSight, Tableau, Looker, QlikSense) Desirable Skills Exposure to large-scale data processing tools (Spark, Hadoop, MapReduce) Public sector experience Experience building APIs to serve data Familiarity with other public cloud platforms and data lakes AWS certifications (e.g. Solutions …
or a related field. Strong proficiency in machine learning techniques, including model training, testing, and feature selection. Experience with big data tools such as Spark and Hadoop. Hands-on coding experience in Python, Bash, and either Java or Scala. Knowledge of statistical methods and data visualization techniques. Familiarity with …