Big Data Engineer (Hadoop/Spark/Python)

Berlin - On-site
This project has been archived and is unfortunately no longer active.

Description

My client is urgently looking for a Big Data Engineer (Hadoop/Spark/Python) to join their team in Berlin, Germany, on a six-month contract.

The ideal Big Data Engineer (Hadoop/Spark/Python) is experienced in working with large data sets and enjoys optimising and building data systems from the ground up.

Role Responsibilities:

  • Create and maintain optimal data pipeline architecture.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies (a minimal illustrative sketch of such an ETL step follows this list).
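For illustration only, the sketch below shows the kind of Spark/Python ETL step the responsibilities above describe: extract raw data from object storage, transform it with Spark SQL, and load the result back as Parquet. The bucket paths, column names, and table name are hypothetical placeholders and are not part of this listing.

```python
# Minimal, illustrative PySpark ETL sketch (hypothetical paths and names).
from pyspark.sql import SparkSession


def run_pipeline():
    spark = (
        SparkSession.builder
        .appName("example-etl")  # hypothetical application name
        .getOrCreate()
    )

    # Extract: read raw events from an assumed S3 location.
    events = spark.read.json("s3a://example-bucket/raw/events/")

    # Transform: aggregate events per user per day using Spark SQL.
    events.createOrReplaceTempView("events")
    daily = spark.sql("""
        SELECT user_id,
               to_date(event_time) AS event_date,
               COUNT(*)            AS event_count
        FROM events
        GROUP BY user_id, to_date(event_time)
    """)

    # Load: write the result back to S3 as partitioned Parquet.
    (daily.write
          .mode("overwrite")
          .partitionBy("event_date")
          .parquet("s3a://example-bucket/curated/daily_event_counts/"))

    spark.stop()


if __name__ == "__main__":
    run_pipeline()
```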

Skills Required:

  • 4+ years' experience building and optimising 'big data' pipelines and large data sets.
  • Big data tools: Hadoop, Spark, Kafka, etc.
  • SQL and NoSQL databases, e.g. Cassandra.
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
  • Programming languages: Python, Java, C++, Scala, etc.

For instant consideration, please contact Kirsty and email your CV to (see below)

Start: immediately
Duration: 6 months
From: Next Ventures Ltd
Posted: 18.07.2018
Project ID: 1600406
Contract type: Freelance