Data Engineer

Hessen, Frankfurt am Main - Remote
This project is archived and unfortunately no longer active.
You can find open projects in our project marketplace.

Description

Data Engineer: Hadoop / Spark / Scala Developer


Required skills:

- At least 5 years of Hadoop development and implementation experience
- Experience with MapReduce and MR-based tools such as Pig
- Experience with Apache Spark, Hive, and HDFS
- Experience with Cloudera, Sentry, Hue, and Impala
- Test prototypes and oversee handover to operational teams
- Propose best practices and standards
- Loading from disparate data sets
- Pre-processing using Hive and Pig
- Translate complex functional and technical requirements into detailed designs
- Perform analysis of vast data stores and uncover insights
- Maintain security and data privacy
- High-speed querying
- Must be able to implement, maintain, and extend Spark scripts in Scala

Nice-to-have skills:

- Managing and deploying HBase
- Creating scalable, high-performance web services for data tracking
Start
immediately
Duration
12 months
(extension possible)
From
Trilogy International
Posted
30.06.2021
Contact:
Jimmy Walker
Project ID:
2148624
Contract type
Freelance
Work arrangement
100% remote