to apply statistical techniques to validate models and algorithms. Data Manipulation & Analysis: Proficient in data manipulation and analysis using tools like Pandas, NumPy, and Jupyter Notebooks. Experience with data visualization tools such as Matplotlib, Seaborn, or Tableau to communicate insights effectively. Our offer: Cash: Depends on experience Equity: Generous equity
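The Pandas-plus-Matplotlib workflow the listing above describes can be sketched minimally as follows; the dataset and file name are hypothetical, purely for illustration.

```python
# Minimal sketch of the Pandas + Matplotlib workflow: manipulate data
# with Pandas, then communicate the insight with a chart. All data and
# names here are illustrative, not from any real role.
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "region": ["north", "south", "north", "south"],
    "revenue": [120.0, 95.5, 130.25, 110.0],
})

# Aggregate with Pandas ...
summary = df.groupby("region")["revenue"].sum()

# ... and visualize the result with Matplotlib.
summary.plot(kind="bar", title="Revenue by region")
plt.savefig("revenue_by_region.png")
print(summary.to_dict())  # → {'north': 250.25, 'south': 205.5}
```

The same aggregation could equally be rendered in Seaborn or Tableau; the point is the split between manipulation (groupby/sum) and communication (the chart).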
for tasks like recommendations, segmentation, forecasting, and optimising marketing spend. Proficiency in Python, SQL, Bash, and Git, with hands-on experience in tools like Jupyter notebooks, Pandas, PyTorch, and more. Experience with A/B testing and other experimentation methods to validate model performance and business impact. Experience with cloud
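A common way to validate an A/B experiment like the one mentioned above is a two-sample proportion z-test; this is a hedged sketch with hypothetical conversion counts, not a prescribed methodology.

```python
# Sketch of a two-sided, two-sample proportion z-test for A/B testing.
# Conversion counts are hypothetical, chosen only to illustrate the flow.
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for H0: conversion rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Variant B converts 15.0% vs 12.0% for A over 1000 users each.
z, p = two_proportion_ztest(conv_a=120, n_a=1000, conv_b=150, n_b=1000)
print(f"z={z:.2f}, p={p:.4f}")
```

In practice a library routine such as `statsmodels.stats.proportion.proportions_ztest` would typically be used instead of hand-rolling the arithmetic.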
in machine learning applications such as recommendation systems, segmentation, and marketing optimisation. Proficiency in Python, SQL, Bash, and Git, with hands-on experience in Jupyter notebooks, Pandas, and PyTorch. Familiarity with cloud platforms (AWS, Databricks, Snowflake) and containerisation tools (Docker, Kubernetes). Strong problem-solving skills and a passion for
Google BigQuery/BigTable, Apache Spark), Parallel Computing (e.g. Apache Spark, Kubernetes, Databricks), Cloud Engineering (AWS, GCP, Azure), Spatial Query Optimisation, Data Storytelling with (Jupyter) Notebooks, Graph Computing, Microservices Architectures. A numerate university degree, e.g. Computer Science or Geography. Relevant industry sector knowledge ideal but not essential. A strong communicator
engineering skills, including version control (Git), code reviews, and unit testing. Familiarity with common data science libraries and tools (e.g., NumPy, Pandas, Scikit-learn, Jupyter). Experience in setting up and managing continuous integration and continuous deployment pipelines. Proficiency with containerization technologies (e.g., Docker, Kubernetes). Experience with cloud services
cases and understand data thoroughly. Programming skills in Python or Ruby, with experience in AWS S3, MongoDB, PostgreSQL, AWS Redshift, or similar. Experience with Jupyter notebooks and visualization tools like Excel, Qlik Sense, or Tableau. Sensor Tower offers a flexible work environment, benefits like flexible time off, health stipends, internet
learning applications such as recommendations, segmentation, forecasting, and marketing spend optimisation. Proficiency in Python, SQL, and Git, with hands-on experience in tools like Jupyter notebooks, Pandas, and PyTorch. Expertise in cloud platforms (AWS, Databricks, Snowflake) and containerisation tools (Docker, Kubernetes). Strong leadership skills with experience mentoring and managing
for IoT data processing and real-time analytics specific to energy storage. Programming knowledge: scripting (Python, Batch), relational & time-series databases, writing automated tests, Jupyter, Deepnote. Experience with Azure and AWS cloud infrastructure. Personal Qualifications: Bachelor’s or Master’s degree in Electrical Engineering, Mechanical Engineering, Computer Science, Data Science
series analysis, econometrics, and machine learning techniques Strong Python and SQL skills with experience in numerical computing and statistical analysis Experience using notebooks (e.g. Jupyter), Tableau, or other data science tools for analysis and visualization A self-starter, with a track record of working independently and achieving targets in a
the ability to deliver timely solutions to portfolio and risk managers within the firm. Mandatory Requirements 3+ years Python development experience (Pandas, NumPy, Polars, Jupyter notebooks, FastAPI) Experience with AWS services such as S3, EC2, AWS Batch, and Redshift Proficiency in relational and non-relational database technologies BA or
Experience with cloud-based ML services, preferably on GCP Strong understanding of classical and modern Deep Learning algorithms Proficiency in SQL and Python ecosystem (Jupyter, Pandas, Scikit-Learn, etc.) Software development skills, especially in Python Experience deploying ML/AI services using Kubernetes & KubeFlow Leadership and management experience Excellent stakeholder
of Generative LLMs • Fundamental knowledge of ML, and basic knowledge of AI, NLP, and Large Language Models (LLMs) • Comfortable working with Python and Jupyter Notebooks • In-depth knowledge of and familiarity with cloud platforms like Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. Technical Skills
NLP, Transfer Learning, etc.), and modern Deep Learning algorithms (e.g., BERT, LSTM, etc.) Solid knowledge of SQL and Python's ecosystem for data analysis (Jupyter, Pandas, Scikit-learn, Matplotlib, etc.) Understanding of model evaluation and data pre-processing techniques, such as standardisation, normalisation, and handling missing data Solid understanding of summary
Synapse Analytics, and large-scale data warehouses (Snowflake, Redshift, Presto). Proficiency in data visualization tools (Databricks, PowerBI) and the Python data science ecosystem (Jupyter, Pandas, NumPy, Matplotlib). Pluses: Financial services background Degree in cybersecurity Any advanced Databricks qualifications Have led teams of more than 10 people Recent