76 to 100 of 285 Permanent Apache Spark Jobs
Greater Bristol Area, United Kingdom DiverseJobsMatter
complex data warehouses and/or data lakes. Familiarity with cloud-based analytics platforms such as AWS, Azure, Snowflake, Google Cloud Platform (BigQuery), Spark, and Splunk. Proficiency in SQL and experience using one or more of the following languages: R, Python, Scala, and Julia, including relevant frameworks/ …
London Area, United Kingdom X4 Technology
design and coding skills (e.g. Python, Scala, JavaScript). Experience with the Microsoft or AWS data stack, e.g. Microsoft Azure Data Lake, Hadoop (preferably with Spark), Cosmos DB, HDInsight/HBase, MongoDB, Redis, Azure Table/Blob stores, etc. Exposure to tools like SAP technologies and Alteryx is always useful. Experience …
United Kingdom Hybrid / WFH Options Trust In SODA
APIs, messaging systems, Python, Java, databases and SQL, data warehousing, data modeling, data governance, cloud platforms and services (e.g., AWS), big data technologies (e.g., Spark, Kafka), Continuous Data Integrity/Testing platforms. If you are looking for a new path where you get to work on some amazing projects …
London Area, United Kingdom Ascent
an Azure Solution Architect in Microsoft Data and AI. Extensive, hands-on, current experience in Azure data and AI technologies, like Fabric, Synapse, Databricks, Spark, Python, GitHub, Data Factory, Azure Data Lake, Power BI, Cognitive Services, Purview, etc. Strong data analysis and modeling skills. High-level understanding of Azure …
Greater London, England, United Kingdom InterEx Group
classification techniques, and algorithms. Fluency in a programming language (Python, C, C++, Java, SQL). Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau)
Surrey, England, United Kingdom Hybrid / WFH Options Hawksworth
working in the world of Data Science. You're more than capable with SQL & Python. You have exposure to big data technologies such as Spark. Ideally you will have experience with statistical analysis, machine learning algorithms, and data mining techniques. You have excellent communication skills and can communicate well …
London Area, United Kingdom Amber Labs
multiple tasks and projects simultaneously. Preferred Qualifications: AWS Certified Solutions Architect or other relevant AWS certifications. Experience with big data technologies such as Hadoop, Spark, or similar. Knowledge of data governance and data quality best practices. Familiarity with machine learning and AI concepts and tools.
United Kingdom Databuzzltd
Databricks setup using Terraform experience. * Experience of MLOps and DataOps. * Experience of using container technologies, cloud platforms (ideally AWS), and distributed processing frameworks like Spark and Dask. * Experience in JavaScript application development and UI design. * Expertise in developing mobile applications. * Familiarity with the agile software development process. If you …
Reigate, England, United Kingdom Hybrid / WFH Options esure Group
refining them to strong results. Exposure to the Python data science stack. Knowledge and working experience of Agile methodologies. Proficient with SQL. Familiarity with Databricks, Spark and geospatial data/modelling is a plus. We’ll help you gain… Experience working in a high-performance environment where collaboration and business …
England, United Kingdom Hybrid / WFH Options iO Associates - UK/EU
Skills & Experience: At least 10 years' experience working with JavaScript or Python/Java. Previous experience deploying software into the cloud. EKS, Docker, Kubernetes. Apache Spark or NiFi. Microservice architecture experience. Experience with AI/ML systems.
United Kingdom Bazaarvoice
designing and building robust, scalable, distributed data systems and pipelines, using open source and public cloud technologies. Strong experience with data orchestration tools: e.g. Apache Airflow, Dagster. Experience with big data storage and processing technologies: e.g. DBT, Spark, SQL, Athena/Trino, Redshift, Snowflake, RDBMSs (PostgreSQL/MySQL …). Knowledge of event-driven architectures and streaming technologies: e.g. Apache Kafka, Kafka Streams, Apache Flink. Experience with public cloud environments: e.g. AWS, GCP, Azure, Terraform. Strong knowledge of software engineering practices: e.g. testing, CI/CD (Jenkins, GitHub Actions), agile development, git/version control, containers etc.
Manchester, England, United Kingdom Disney Entertainment & ESPN Technology
objectives. So each team leverages the technology that fits their needs best. You’ll see us working with data processing/streaming frameworks like Apache Flink and Spark; database technologies like MySQL, PostgreSQL, DynamoDB and Redis; and breaking things using in-house chaos principles and tools such as … latency, near real-time products: Java- and Scala-based web services, Databricks Data Lakes (Delta Lakes), AWS Kinesis and MSK, AWS ElasticSearch, AWS RDS, Apache Flink & Spark, scripting using Python, and infrastructure as code with Terraform. What You Will Do: Be part of an Agile team building one …
London Area, United Kingdom HCLTech
master and metadata management. Experience with Azure SQL Database, Azure Data Factory, Azure Storage, and Azure IaaS/PaaS related database implementations. Experience with Apache Spark and the new Fabric framework would be a plus.
United Kingdom Hybrid / WFH Options XONAI
a Senior Software Engineer for this role, you will collaborate with the founding team to contribute to the broader integration of our product with Apache Spark, and keep the solution up to date and compatible with a variety of supported runtimes. Your contributions to our core solution will …
London Area, United Kingdom Harnham
DevOps experience in CI/CD. Experience using TensorFlow, Kubernetes, MLflow, Kafka, and Airflow. Experience using Python is a must (tools like AWS and Spark are beneficial). Excellent communication skills and team and colleague engagement. A keen interest in problem-solving and using scalable machine learning to solve the …
London Area, United Kingdom Foxley Talent
and implement pre-processing pipelines for large data, create visualisations and reports on model performance, while collaborating with various engineers to improve knowledge and spark innovation. As the Machine Learning Engineer you will ideally have a degree in a relevant field (Computer Science, Maths, AI, or similar), at least …
CANDIDATES MUST HAVE SC CLEARANCE
a. Overall 5-7 years of IT experience
b. Strong experience with DevOps principles
c. Strong experience in Spark, Tableau, Hadoop, PL/SQL
d. Good experience working with the AWS platform
e. Good exposure to ITIL processes, including incident, problem and change management …
Belfast, Northern Ireland, United Kingdom Search 5.0
issues. The person: Degree in Engineering, Technology, or related fields/equivalent. 3+ years in AI solution delivery. Experienced in relevant technologies (Python, TensorFlow, Spark, Azure Cloud, Git, Docker). Strong analytical and communication skills. This comes with a fantastic salary and full benefits package – happy to discuss in full.
London Area, United Kingdom Algo Capital Group
ingestion pipelines. Requirements: Proven experience working with Python, Java, or C#. Experience working with ETL/ELT technologies such as Airflow, Argo, Dagster, Spark, Hive. Strong technical expertise, especially in data processing and exploration, with a willingness to learn new technologies. A passion for automation and driving continual …
Luton, England, United Kingdom Hybrid / WFH Options Ventula Consulting
science and analytics team in deploying pipelines. Coach and mentor the team to improve development standards. Key requirements: Strong hands-on experience with Databricks, Spark, SQL or Scala. Proven experience designing and building data solutions on a cloud-based, big data distributed system (AWS/Azure etc.). Hands-on … models and following best practices. The ability to develop pipelines using SageMaker, MLflow or similar frameworks. Strong experience with data programming frameworks such as Apache Spark. Understanding of common Data Science and Machine Learning models, libraries and frameworks. This role provides a competitive salary plus an excellent benefits package. In …
Manchester Area, United Kingdom Mobysoft
data platform from a legacy system to one based on AWS EMR, with Amazon RDS and DynamoDB ingestion converted to Parquet files, interrogatable through Spark and MapReduce. This modern platform will support rapid data insight generation, data experiments for new product development, our live Machine Learning solutions and live … to-target mappings) to testing and service optimisation. Good familiarity with our developing key services/applications: Amazon RDS, Amazon DynamoDB, AWS Glue, MapReduce, Hive, Spark, YARN, Airflow. Ability to work with a range of structured, semi-structured and unstructured file formats including Parquet, JSON, CSV, PDF, JPG. Accomplished data …
Complexio is Foundational AI. It works to automate business activities by ingesting whole company data – both structured and unstructured – and making sense of it. Using proprietary models and algorithms, Complexio forms a deep understanding of how humans are interacting and …
Cheshire East, England, United Kingdom Wipro
in a technical and analytical role. Experience of Data Lake/Hadoop platform implementation. Hands-on experience in implementation and performance tuning of Hadoop/Spark implementations. Experience with Apache Hadoop and the Hadoop ecosystem. Experience with one or more relevant tools (Sqoop, Flume, Kafka, Oozie, Hue, Zookeeper, HCatalog, Solr … Avro). Experience with one or more SQL-on-Hadoop technologies (Hive, Impala, Spark SQL, Presto). Experience developing software code in one or more programming languages (Java, Python, etc.). Preferred Qualifications: Master's or PhD in Computer Science, Physics, Engineering or Maths. Hands-on experience leading large-scale global data warehousing …
London Area, United Kingdom Hybrid / WFH Options Venn Group
improvements. Key Skills: 3+ years of Python experience. Highly statistical and analytical. Exposure to Google Cloud Platform (BigQuery, GCS, Datalab, Dataproc, Cloud ML) (desirable). Spark & Hadoop experience. Strong communication skills. Good problem-solving skills. Qualifications: Bachelor's degree or equivalent experience in a quantitative field (Statistics, Mathematics, Computer Science) … classification techniques, and algorithms. Fluency in a programming language (Python, C, C++, Java, SQL). Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau). This is a permanent position and offers flexibility with hybrid working, 2-3 days per week in the office, depending on workload …
Salary Guide: Apache Spark
- 10th Percentile: £47,500
- 25th Percentile: £63,750
- Median: £80,000
- 75th Percentile: £102,500
- 90th Percentile: £118,750