Description
Main Responsibilities:
- Deliver large-scale algorithms in production
- Build suitable environments for deployment
- Parse and handle huge data sets
- Build and manage data pipeline architecture
- Design and implement internal process improvements
- Manage data integration with third-party applications
- Manage requests and stakeholder requirements in a prioritised manner
Desired Skills and Experience:
- Bachelor's degree in computer science, engineering, or an equivalent field
- Experience with Scala/Java and Python
- Experience using Spark and PySpark
- Experience working with AWS (EMR, S3, Athena, Redshift)
- Previous experience working in an Agile environment
- Experience with TDD and test automation