Scala programming on the JVM Experience with concurrency, memory management and I/O Experience with Linux or other Unix-like systems Experience with distributed databases, DataStax Enterprise or Apache Cassandra in particular Experience with distributed computing platforms, Apache Spark in particular ABOUT BUSINESS UNIT IBM Software infuses core business operations with intelligence, from machine learning to generative AI …
… sensor fusion techniques Comfortable working independently with minimal supervision in a dynamic setting Desirable Experience Familiarity with coordinate systems and transforms Exposure to workflow orchestration tools such as Flyte, Apache Spark, or Databricks Experience working with autonomous vehicle (AV) datasets or multi-sensor rigs in production Prior work involving debugging sensor metadata issues (e.g., misaligned extrinsics, inaccurate timestamps) …
LibraryLink Product Specialist Exegesis Team, Idox Geospatial Home based About the role This is a fantastic opportunity for a Product Specialist with expertise in GIS applications to join the Exegesis team within Idox Geospatial, working specifically with our LibraryLink product (integrated …
Reading, England, United Kingdom (Hybrid/WFH options)
Areti Group | B Corp™
… support for Data Analysts with efficient and performant queries. • Skilled in optimizing data ingestion and query performance for MSSQL or other RDBMS. • Familiar with data processing frameworks such as Apache Spark. • Highly analytical and tenacious in solving complex problems. …
… software Strong programming skills in Rust, C, or C++ Solid understanding of operating systems, file systems, and storage internals Experience working with modern data formats or infrastructure tools (e.g., Apache Arrow, Parquet, DuckDB, ClickHouse) A passion for infrastructure and performance problems Willingness and ability to work on-site 5 days/week in London or New York City …
Interface with customers to gather requirements, define data specifications, and provide solutions, including converting raw data into usable formats. Work with technical writers and testers. Minimum Qualifications: • Familiar with Apache NiFi • Educational & Certification Requirements: Bachelor's degree with 4 years of related experience (or 4 years of experience in lieu of degree). Requires IAT Level II certification. • Technical …
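The "converting raw data into usable formats" task in the posting above can be sketched in plain Python. This is an illustrative example only, not part of any posting: the pipe delimiter and field names are hypothetical, and a real pipeline (e.g., in Apache NiFi) would handle this with flow processors rather than a script.

```python
import csv
import io
import json

def raw_to_json(raw_text, delimiter="|"):
    """Parse raw delimited text (header row first) into JSON strings, one per record.

    Hypothetical sketch: the delimiter and field names are illustrative,
    not taken from the posting.
    """
    reader = csv.DictReader(io.StringIO(raw_text), delimiter=delimiter)
    # Sort keys so output is deterministic regardless of header order.
    return [json.dumps(dict(row), sort_keys=True) for row in reader]

raw = "id|name|value\n1|alpha|10\n2|beta|20\n"
records = raw_to_json(raw)
print(records[0])  # {"id": "1", "name": "alpha", "value": "10"}
```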
…/or Confluence knowledge AWS experience and/or knowledge TS/SCI w/FSP These Qualifications Would Be Nice to Have: Scrum, SAFe, and Kanban experience preferred. Apache PDFBox JUnit Data Analytics Dashboard Tool Integration Data Governance and Management Integration AWS DevOps and/or AWS Certified Practitioner certification Commercial or national security sector DevOps knowledge …
Bristol, Avon, England, United Kingdom (Hybrid/WFH options)
Hays Specialist Recruitment Limited
You'll need to be a hands-on technical data engineer with experience across a range of modern technologies. Experience with Microsoft Fabric is preferable, and good working knowledge of Apache Spark is essential: PySpark, SparkSQL and T-SQL. You'll need good working knowledge of cloud data, specifically designing scalable solutions in Azure Data Lake, Synapse & Delta Lake. Data …
…/Python Experience performing System Integration tasks including installation, configuration, and sustainment of various COTS/GOTS/FOSS software, packages, and libraries in a Unix environment Experience using Apache NiFi to process and distribute data Experience with Corporate data flow processes and tools Experience with NoSQL databases including Elasticsearch and MongoDB Experience with containerization technologies such as Docker …
… software development experience with a focus on backend or data-oriented applications. • Proficiency in Python development. • Experience integrating new technology stacks into existing software environments. • Hands-on experience with Apache NiFi, Kafka, or Logstash. (Preferred) • Familiarity with SQL and relational database environments. (Preferred) • Experience building modular, scalable software systems. (Preferred) • Understanding of cybersecurity principles and secure software practices. (Preferred) …
… a developer background is a plus. Knowledge of big data technologies. Working with SQL-like query languages, Python, and/or R with large datasets. Experience with Zeppelin and Apache Spark is an advantage. A focus on details and a willingness to learn. What We Offer Awesome team events throughout the year. It represents a strategic role where you …
… Data Explorer. • Experience working with data in a variety of structured and unstructured formats. • Experience with data visualization tools, computing platforms, and applications such as Jupyter, Elasticsearch, Databricks, Apache Zeppelin, Kibana, and/or Tableau • Experience supporting the development of AI/ML algorithms, such as natural language processing, in a production environment • Experience configuring and utilizing data …
…/AWS). Solid working knowledge of Oracle and SQL. Experience with UNIX/Windows shell scripting. Familiarity with GitHub, ClearCase, or similar version control systems. Exposure to Apache open-source tools, Hibernate, Web Services, and cloud-native architectures. Vendor platform integration (e.g., ServiceNow, Azure, AWS). IAM, cybersecurity, or enterprise identity systems. Estimating software project timelines …
… TensorFlow, PyTorch, scikit-learn) Experience with cloud platforms (AWS, GCP, Azure) Experience with CI/CD pipelines for machine learning (e.g., Vertex AI) Familiarity with data processing tools like Apache Beam/Dataflow Strong understanding of monitoring and maintaining models in production environments Experience with containerization tools (e.g., Docker) Problem-solving skills with the ability to troubleshoot model and …
Birmingham, West Midlands, United Kingdom (Hybrid/WFH options)
ADLIB Recruitment
… Party APIs (GraphQL and/or REST). Strong experience with HTML5, CSS3, JavaScript, and Liquid is a must. Experience with SQL databases (Postgres, MongoDB) and web servers (nginx, Apache, etc.). Experience with Git/version control. Experience with other backend languages such as PHP is ideal. Comfortable being client-facing for client calls, sprint reviews, kick…
… users, and engineering colleagues to create end-to-end solutions. Learn from experts, mentor, and coach junior team members. Utilize data-streaming technologies including Kafka CDC, Kafka, EMS, and Apache Flink. Innovate and incubate new ideas. Work on a broad range of problems involving large data sets, real-time processing, messaging, workflows, and UI/UX. Drive the full …
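The real-time processing the snippet above describes typically centres on windowed aggregation over an event stream. Below is a minimal pure-Python sketch of a tumbling-window count, the kind of operation a stream processor such as Apache Flink performs natively; the event shape and window size are hypothetical, not from the posting.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_ms=1000):
    """Group (timestamp_ms, key) events into fixed tumbling windows and count per key.

    Stdlib sketch of windowed stream aggregation; a real deployment would use
    Flink's window operators over a Kafka source rather than an in-memory list.
    """
    counts = defaultdict(int)
    for ts, key in events:
        # Each window covers [window_start, window_start + window_ms).
        window_start = (ts // window_ms) * window_ms
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(100, "click"), (950, "click"), (1200, "view"), (1800, "click")]
print(tumbling_window_counts(events))
# {(0, 'click'): 2, (1000, 'view'): 1, (1000, 'click'): 1}
```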
… features. Rapid Prototyping: Create interactive AI demos and proofs-of-concept with Streamlit, Gradio, or Next.js for stakeholder feedback; MLOps & Deployment: Implement CI/CD pipelines (e.g., GitLab CI, Apache Airflow), experiment tracking (MLflow), and model monitoring for reliable production workflows; Cross-Functional Collaboration: Participate in code reviews, architectural discussions, and sprint planning to deliver features end-to-end. …
… engineering workflows like CI/CD and containerised environments Skilled at working with both structured and unstructured data to unlock insights and power models Hands-on experience with Databricks, Apache Spark, or similar tools used in large-scale data processing Exposure to machine learning model deployment using APIs or lightweight serving frameworks like Flask or Keras Familiarity with geospatial …
Sheffield, Yorkshire, United Kingdom (Hybrid/WFH options)
Reach Studios Limited
… Azure etc.) What You'll Need Must-haves: Comprehensive experience in a DevOps or SRE role, ideally in a multi-project environment Deep experience with web stacks: Nginx/Apache, PHP-FPM, MySQL, Redis, Varnish, Elasticsearch Proven expertise in managing and optimising Cloudflare across DNS, security, performance, and access Experience with Magento 2 infrastructure and deployment CI/CD …
… R, Scala) - Stakeholder management - Dashboarding (Excel, QuickSight, Power BI) - Data analysis and statistics - KPI design PREFERRED QUALIFICATIONS - Power BI and Power Pivot in Excel - AWS fundamentals (IAM, S3, …) - Python - Apache Spark/Scala Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during …
… Integration Pipelines (GitLab CI, Travis CI, etc.) Proficiency in one or more scripting languages (Shell, Python, Ruby, etc.) Experience with Source Control Management (Git, SVN) Experience with web servers (Apache, NGINX) Experience producing technical and process documentation Qualifications: A Bachelor of Science degree in Computer Science, Management Information Systems, or a related field is desirable but not essential. Nice to have …
… solution alternative recommendations, and overseeing all aspects of project delivery and risk in lieu of a degree Additional Qualifications Experience in designing and developing ETL workflows using tools including Apache Spark or AWS Glue Experience with different data storage technologies and databases, including Amazon S3 or Amazon Redshift Experience with supporting IC- and national-level system security initiatives …
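The extract-transform-load pattern named in the last snippet can be illustrated with the standard library alone. This is a hypothetical stand-in for the Spark or AWS Glue workflows the posting refers to: the schema, field names, and cleaning rules are invented for the sketch, and SQLite stands in for the warehouse target.

```python
import sqlite3

def run_etl(rows):
    """Minimal ETL sketch: filter and normalise raw rows, then load into SQLite.

    Stdlib stand-in for a Spark/Glue job; schema and field names are hypothetical.
    """
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (name TEXT, amount REAL)")
    # Transform step: drop rows with missing amounts, lowercase names, cast to float.
    cleaned = [
        (name.lower(), float(amount))
        for name, amount in rows
        if amount is not None
    ]
    # Load step: bulk insert into the target table.
    conn.executemany("INSERT INTO events VALUES (?, ?)", cleaned)
    conn.commit()
    return conn

conn = run_etl([("Alpha", "10"), ("BETA", None), ("Gamma", "2.5")])
total = conn.execute("SELECT SUM(amount) FROM events").fetchone()[0]
print(total)  # 12.5
```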