Information Systems, or a related discipline. Desirable Experience Background or internship experience within financial services or technology. Exposure to Java. Experience managing on-premise or hybrid data infrastructure (e.g. Hadoop). Knowledge of workflow orchestration tools such as Apache Airflow. Postgraduate degree in Computer Science, Data Science, or related field. Benefits Comprehensive health, dental, and vision coverage Flexible approach More ❯
in Python, R, SQL, and experience with cloud-based ML platforms such as Azure ML, AWS, or GCP. Hands-on experience with data pipeline technologies (Azure Data Factory, Spark, Hadoop) and business intelligence tools (Power BI, Tableau, or SAP Analytics Cloud). Strong understanding of machine learning model lifecycle management, from design through deployment and monitoring. Exceptional communication and More ❯
Birmingham, West Midlands, England, United Kingdom
TXP
for data engineering. A detail-oriented mindset and strong problem-solving skills. Degree in Computer Science, Engineering, or a related field. Bonus Skills: Experience with big data tools (e.g., Hadoop, Spark). Exposure to machine learning workflows. Understanding of prompt engineering concepts. Benefits: 25 days annual leave (plus bank holidays). An additional day of paid leave for More ❯
data visualization tools (e.g., Power BI). Experience in database administration or performance tuning. Knowledge of data orchestration tools like Apache Airflow. Exposure to big data technologies such as Hadoop or Spark. Why Join Synechron? Be part of a dynamic, innovative team driving digital transformation in the financial sector. We offer competitive compensation, opportunities for professional growth, and a More ❯
years of experience in a customer facing technical role and a working experience in: Distributed systems and massively parallel processing technologies and concepts such as Snowflake, Teradata, Spark, Databricks, Hadoop, Oracle, SQL Server, and performance optimisation Data strategies and methodologies such as Data Mesh, Data Vault, Data Fabric, Data Governance, Data Management, Enterprise Architecture Data organisation and modelling concepts More ❯
on experience with building data pipelines in a programming language like Python Hands-on experience with building and maintaining Tableau dashboards and/or Jupyter reports Working understanding of Hadoop and big data analytics Ability to understand the needs of and collaborate with stakeholders from analytics and business teams Education: Bachelor's or Master's degree in Computer Science, Engineering, Management More ❯
not all, but the majority of the below: Databases & SQL: SQL, Oracle DB, Postgres, SQL Server Messaging & Monitoring: ActiveMQ, Zabbix, Grafana, Ambari Cloud Platforms: AWS, Azure Big Data & Processing: Hadoop DevOps Tools: Jenkins, Puppet, Bitbucket BPM & SOA: Oracle SOA, Oracle BPM, Activiti BPM Web & Application Servers: IIS Collaboration & Tracking: Jira, Confluence Other Technologies: CI tools and cloud-based technologies Desirable More ❯
for shaping data ecosystems and driving high-impact solutions. Direct experience with cloud data technologies (Snowflake, AWS, Azure) highly valuable; background in SQL Server, MySQL, Postgres, NoSQL, Oracle or Hadoop also welcome. In-depth knowledge of database structures, data analysis and data mining. Strong understanding of data warehousing, data lakes, ETL/ELT processes and big data technologies. Proficiency More ❯
the IC. Expert proficiency in Python (or similar languages) and experience with data science libraries (TensorFlow, PyTorch, Pandas, NumPy). Strong experience with big data processing tools (e.g., Spark, Hadoop, AWS or Azure cloud platforms). Expertise in working with geospatial data formats (e.g., GeoTIFF, Shapefiles, WMS, WFS) and spatial libraries (e.g., GeoPandas, Rasterio, GDAL). Advanced experience in More ❯
predictive modelling, machine learning, clustering and classification techniques, and algorithms Fluency in a programming language (Python, C, C++, Java, SQL) Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau More ❯
understanding of classical and modern ML techniques, A/B testing methodologies, and experiment design. Solid background in ranking, recommendation, and retrieval systems. Familiarity with large-scale data tools (Hadoop, BigQuery, Amazon EMR, etc.). Experience with BI tools and visualization platforms such as Tableau, Qlik, or MicroStrategy. Bonus: Experience with geospatial data and advanced analytics platforms. More ❯
a data science team, mentoring junior colleagues and driving technical direction. Experience working with Agile methodologies in a collaborative team setting. Extensive experience with big data tools, such as Hadoop and Spark, for managing and processing large-scale datasets. Extensive experience with cloud platforms, particularly Microsoft Azure, for building and deploying data science solutions. Why Join? You'll be More ❯
What You'll Bring • 6 to 10 years' IT Architecture experience working in a software development, technical project management, digital delivery, or technology consulting environment • Platform implementation experience (Apache Hadoop, Kafka, Storm and Spark, Elasticsearch and others) • Experience around data integration & migration, data governance, data mining, data visualisation, database modelling in an agile delivery-based environment • Experience with at More ❯
bring: Significant experience in data engineering, including leading or mentoring technical teams. Deep understanding of cloud environments such as Azure, AWS, or Google Cloud Platform, and tools like Synapse, Hadoop, or Snowflake. Hands-on experience with programming languages such as Python, Java, or Scala. Strong knowledge of data architecture, modelling, and governance. A track record of delivering complex data More ❯
Apache Commons Suite & Maven, SQL databases such as Oracle, MySQL, PostgreSQL, etc. Hands-on experience in utilizing Spring Framework (Core, MVC, Integration and Data) Experience with Big Data/Hadoop and NoSQL databases is a big plus Experience with Play framework, Angular is a big plus Business Acumen: Strong problem-solving abilities and capable of articulating specific technical topics More ❯
languages (Python, Bash) and programming languages (Java). Hands-on experience with DevOps tools: GitLab, Ansible, Prometheus, Grafana, Nagios, Argo CD, Rancher, Harbor. Deep understanding of big data technologies: Hadoop, Spark, and NoSQL databases. Nice to Have Familiarity with agile methodologies (Scrum or Kanban). Strong problem-solving skills and a collaborative working style. Excellent communication skills, with the More ❯
and social benefits (e.g. UK pension scheme) What do you offer? Strong hands-on experience working with modern Big Data technologies such as Apache Spark, Trino, Apache Kafka, Apache Hadoop, Apache HBase, Apache NiFi, Apache Airflow, OpenSearch Proficiency in cloud-native technologies such as containerization and Kubernetes Strong knowledge of DevOps tools (Terraform, Ansible, ArgoCD, GitOps, etc.) Proficiency in More ❯
Programming Involved in planning, designing and strategising the roadmap around on-premise and cloud solutions. Experience in designing and developing real-time data processing pipelines Expertise in working with Hadoop data platforms and technologies like Kafka, Spark, Impala, Hive and HDFS in multi-tenant environments Expert in Java programming, SQL and shell scripting, DevOps Good understanding of current industry More ❯
MongoDB, InfluxDB, Neo4J). Familiarity with data exchange and processing methods (e.g. ETL, ESB, API). Proficiency in development languages such as Python. Knowledge of big data technologies (e.g. Hadoop stack). Understanding of NLP (Natural Language Processing) and OCR (Optical Character Recognition). Knowledge of Generative AI would be advantageous. Experience in containerisation technologies (e.g. Docker) would be More ❯
commercial impact. Understanding of ML Ops vs DevOps and broader software engineering standards. Cloud experience (any platform). Previous mentoring experience. Nice to have: Snowflake or Databricks Spark, PySpark, Hadoop or similar big data tooling BI exposure (PowerBI, Tableau, etc.) Interview Process The process is fully structured, transparent, and efficient: Video call – high-level overview and initial discussion In More ❯
commercial impact. Understanding of ML Ops vs DevOps and broader software engineering standards. Cloud experience (any platform). Previous mentoring experience. Nice to have: Snowflake or Databricks Spark, PySpark, Hadoop or similar big data tooling BI exposure (PowerBI, Tableau, etc.) Interview Process Video call - high-level overview and initial discussion In-person technical presentation - based on a provided example More ❯
Stevenage, Hertfordshire, England, United Kingdom Hybrid/Remote Options
MBDA
e.g. MS SQL, Oracle...) NoSQL technologies skills (e.g. MongoDB, InfluxDB, Neo4J...) Data exchange and processing skills (e.g. ETL, ESB, API...) Development (e.g. Python) skills Big data technologies knowledge (e.g. Hadoop stack) Knowledge in NLP (Natural Language Processing) Knowledge in OCR (Optical Character Recognition) Knowledge in Generative AI (Artificial Intelligence) would be advantageous Experience in containerisation technologies (e.g. Docker) would More ❯