Big Data Specialists (m/f) for Frankfurt am Main / very good English skills

Hesse, Frankfurt am Main - On-site
This project is archived and unfortunately no longer active.

Description

For our client, a medium-sized IT consultancy, we are looking for two Big Data specialists.

Start: immediately or from January 2017
Duration: 12 months
Industry: Banking
Location: Frankfurt am Main

Details:
Systems engineer / solution architect with a deep understanding of big data solutions (Hadoop, Cloudera), and in particular of building data lakes in a large enterprise environment. As part of the engineering team, his/her task would be to support engineers and developers in integrating different kinds of data sources into a large, distributed data lake, covering in particular the data-specification part of it.


Roles and Responsibilities:
* Work closely with solution designers to understand approach, requirements and solution design of current project
* Collect, prepare and process data source specifications from various data source owners
* Cover all significant aspects of data sources like type, volume, velocity, frequency, content description, etc.
* Coordinate with systems engineers and other stakeholders outside the project to align requirements and design
* Support project management and preparation routines
* Provide clear qualitative and quantitative information and knowledge to the team
* Take part in and lead workshops
* Document and visualize source descriptions and other artifacts
* Drive the source management and on-boarding processes for the project
* Support the general solution architecture and design efforts of the project

Skills:
Must have:
* Several years of proven experience as an architect/engineer in complex Big Data projects (Hadoop, Kafka, Spark, HBase, Hive, Impala, etc.) or other mass-data applications at petabyte scale
* At least 3 years of experience with IT projects in large enterprises
* At least 3 years of experience with (big) data integration projects
* At least 1 year of experience with performance-critical data processing at large scale using files, data streams and databases
* Very good documentation and visualization skills
* Excellent oral and written communication skills in English (German nice to have)
* Team player, experience with agile development approaches and tools (Scrum, Kanban, Jira)
* Must be able to effectively and professionally communicate within the team

Beneficial:
* At least 3 years of experience with large Hadoop clusters
* Experience with batch and streamed data flows into and out of Cloudera (Spark, Kafka)
* Experience with testing IT systems and data integration systems
* Experience with working in internationally distributed cross-functional teams
* Experience with the Cloudera framework

If you are interested, I look forward to brief feedback and, if applicable, a proposed date for an initial phone call.
In advance, you can send me your profile along with your availability and hourly rate.

Best regards
Marius Meisel
APRIORI - business solutions AG
Phone:
Email:
Start
12.2016
Duration
12 months
(extension possible)
From
APRIORI - business solutions AG
Posted
30.11.2016
Contact:
Marius Meisel
Project ID:
1247562
Contract type
Freelance