Simply put, Skimlinks is a big data technology and product startup, building cool stuff at scale.
We build highly transactional, high-throughput platforms that connect publishers, merchants, and audiences, allowing some of the most popular sites in the world to efficiently and easily monetise their curated journalistic content. We partner with companies like Huffington Post, The New York Times, The Independent and Hearst to diversify their content monetisation strategies, helping them create more of the journalistic content that we all consume and enjoy without relying on additional banner advertising.
We’re also leveraging our massive amount of anonymous behavioural data to create greenfield data platforms. We employ a diverse array of advanced Machine Learning and Classical Statistical methodologies to interrogate our dizzying amounts of data. Through our platforms, we have a direct view of the browsing and shopping behaviour of over 650 million users, amassing over 1TB of data daily across more than a billion incoming events.
About the Role:
Skimlinks is growing fast and we are looking for an experienced DevOps Engineer to join our team in London. You will be working with a broad range of technologies across a variety of projects, from APIs that serve thousands of requests per second to Hadoop clusters running analytics on hundreds of terabytes of data. Our infrastructure is primarily hosted in Google Cloud and AWS, spanning three regions and making heavy use of BigQuery, EC2, EMR, RDS, Redshift and everything in between.
What you’ll be doing:
- Working as part of a cross-functional engineering team to deliver products of exceptional quality.
- Writing maintainable, extendable code to build and manage infrastructure, and participating in code reviews.
- Overseeing infrastructure changes and code releases.
- Responding to alerts and troubleshooting issues.
- Communicating with technical and non-technical people to gather requirements and to understand and resolve issues.
- Managing continuous integration services.
- Assessing the suitability of new tools and technologies and integrating them into our product workflow.
What we’re looking for:
- A persistent, tenacious problem-solver, adept at critical thinking.
- The ability to prioritise multiple tasks effectively against short- and long-term goals.
- A fast learner who is adaptable and eager to improve.
- Linux system administration (RH/CentOS preferable, though all flavours considered).
- Scripting – shell, plus Python or Ruby (or similar).
- DevOps/automation – deployment workflows.
- Google Cloud or AWS experience.
- Configuration management – Puppet preferable.
It would be a bonus if you have experience with:
- Big Data tools (Hadoop, Spark, etc.)
- Database Administration – MySQL, Postgres
- IPsec & tunnelling
- OS Package building
A flavour of our technology stack:
- Google Cloud Platform
- AWS (ASGs/ELBs/RDS)
- Jenkins pipelines