
Stefan Anders

DevOps & Big Data Architect / Engineer

available
  • Freelancer in 23769 Fehmarn
  • Degree: not specified
  • Hourly/daily rate: 110 €/hour, 880 €/day
  • Language skills: German (native speaker) | English (business fluent)
  • Last update: 15.07.2019
SKILLS
  • Hadoop
  • Spark
  • Kafka
  • Flume
  • Oozie
  • Akka
  • Hive
  • Impala
  • NiFi
  • Cloudera, Hortonworks
  • Pivotal Cloud Foundry
  • AWS
  • Terraform
  • Java, Scala, Python
  • Jenkins, Atlassian Bamboo
REFERENCES
Experience
2018.10 – present: DevOps & Cloud Ops Architect / Developer at Consist
Migration of a Java monolith to an AWS distributed system


A large Java Spring Boot monolith was migrated to a distributed system on AWS. Back-end REST services were migrated to ECS services for front-end access. Other functions were implemented in Lambda and chained together using SQS.
The CI/CD processes for build and deployment were implemented in Jenkins. The AWS resources were described in Terraform (> 250 resources). Regression testing of feature branches was also implemented.
The project was carried out by three teams: two development teams and one cross-functional team. I was part of the cross-functional team, implementing DevOps.
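For illustration, a minimal Scala sketch of one function in such an SQS-chained pipeline: it consumes a batch from its inbound queue and forwards each result to the next queue. The class name, the NEXT_QUEUE_URL variable, and the process step are hypothetical; only the AWS Lambda Java events library and the AWS SDK v2 SQS client are assumed.

    import com.amazonaws.services.lambda.runtime.{Context, RequestHandler}
    import com.amazonaws.services.lambda.runtime.events.SQSEvent
    import software.amazon.awssdk.services.sqs.SqsClient
    import software.amazon.awssdk.services.sqs.model.SendMessageRequest
    import scala.jdk.CollectionConverters._

    // Hypothetical function in the chain: consumes a batch from its inbound
    // queue and forwards each processed message to the next queue.
    class ChainedFunction extends RequestHandler[SQSEvent, String] {
      private val sqs = SqsClient.create()
      // Assumed configuration: URL of the next queue, injected via an env var.
      private val nextQueueUrl = sys.env("NEXT_QUEUE_URL")

      override def handleRequest(event: SQSEvent, ctx: Context): String = {
        event.getRecords.asScala.foreach { msg =>
          val result = process(msg.getBody) // placeholder for the real business step
          sqs.sendMessage(
            SendMessageRequest.builder()
              .queueUrl(nextQueueUrl)
              .messageBody(result)
              .build())
        }
        "ok"
      }

      private def process(payload: String): String = payload // identity stand-in
    }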
Roles:
  • Cloud Ops Developer
Technologies:
  • AWS
      • ECS (EC2 + Fargate)
      • Lambda
      • SQS
      • SNS
      • RDS
      • Redis
      • EFS
      • EC2
      • S3
      • CloudWatch
      • CloudFormation
  • Terraform
  • Jenkins

2018.08 – 2018.09: Cloud Ops Developer at Deep Data Ocean
Prototype Predictive Maintenance
Earlier this year we developed a prototype for a German car manufacturer that wants to use car data to predict the maintenance needs of its cars (see the 2018.02 – 2018.03 entry below). The system consisted of microservices implemented in Spring Boot. It gathered reference input data from legacy systems via REST API. At that point in time the system was deployed to Pivotal Cloud Foundry; GitLab was the CI/CD system used.
The microservices are now containerized and deployed to AWS EKS and / or AWS Lambda.
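A minimal sketch, assuming the JDK 11 HttpClient, of how such a microservice might pull reference data from a legacy REST endpoint; the object name and URL are hypothetical examples, not the project's actual interface.

    import java.net.URI
    import java.net.http.{HttpClient, HttpRequest, HttpResponse}

    object ReferenceDataClient {
      private val client = HttpClient.newHttpClient()

      // Fetch reference data from a legacy system; the endpoint is hypothetical.
      def fetch(endpoint: String): String = {
        val request  = HttpRequest.newBuilder(URI.create(endpoint)).GET().build()
        val response = client.send(request, HttpResponse.BodyHandlers.ofString())
        require(response.statusCode() == 200, s"unexpected status ${response.statusCode()}")
        response.body()
      }

      def main(args: Array[String]): Unit =
        println(fetch("https://legacy.example.com/api/reference-data"))
    }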
Roles:
  • Cloud Ops Developer
Technologies:
  • REST API
  • Spring Boot
  • Hadoop
  • AWS EKS
  • AWS Lambda
  • GitLab
2018.06 – 2018.10: Big Data Architect at an Automotive Supplier
Car Measurement File Management
The automotive supplier wants to build a product to manage car measurement files. The architecture's central component is Apache NiFi from Hortonworks DataFlow (HDF). A test system needed to be set up. JMeter is used for performance test execution; the performance results are stored in Elasticsearch, and KPI dashboards are implemented in Kibana. The entire application is an IoT application.
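As an illustration of the JMeter plugin development listed below, here is a minimal Scala sketch of a custom Java sampler; the class name, parameter, and placeholder logic are hypothetical, while the JMeter Java-sampler API (AbstractJavaSamplerClient) is standard.

    import org.apache.jmeter.config.Arguments
    import org.apache.jmeter.protocol.java.sampler.{AbstractJavaSamplerClient, JavaSamplerContext}
    import org.apache.jmeter.samplers.SampleResult

    // Hypothetical custom sampler: JMeter instantiates it per thread and
    // calls runTest for every iteration of the performance test.
    class MeasurementFileSampler extends AbstractJavaSamplerClient {

      override def getDefaultParameters: Arguments = {
        val args = new Arguments
        args.addArgument("endpoint", "http://localhost:8080/files") // assumed default
        args
      }

      override def runTest(context: JavaSamplerContext): SampleResult = {
        val result = new SampleResult
        result.sampleStart()
        try {
          // Placeholder for the real call against the NiFi-backed service.
          val endpoint = context.getParameter("endpoint")
          result.setSuccessful(endpoint != null && endpoint.nonEmpty)
        } finally result.sampleEnd()
        result
      }
    }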
Roles:
  • Big Data Architect
Technologies:
  • Java
  • Hortonworks (HDP & HDF)
  • Hadoop
  • NiFi
  • JMeter + Plugin Development
  • REST API
  • Elasticsearch
  • Kibana
  • Jenkins
2018.03 – 2018.06: Big Data Architect at a German Car Manufacturer
Car information Pipelines into Data Lake
A German car manufacturer gathers information from their cars, delivered via a REST endpoint. The raw input files are in Google's binary ProtoBuf format. They are fed into Kafka by a producer that reads the REST endpoint and delivers them to a Kafka topic. The raw files are split into data formats using a custom Kafka source and delivered via Flume into the HDFS landing area. Oozie workflows are used to update the data in Hive and Impala and to push data into the master Impala database. The system has reached production grade.
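A minimal sketch of the producer side using the standard Kafka client; the topic name, broker address, and the fetchProtobufBatch helper are hypothetical stand-ins for the project's actual REST polling logic.

    import java.util.Properties
    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

    object CarDataProducer {
      def main(args: Array[String]): Unit = {
        val props = new Properties()
        props.put("bootstrap.servers", "broker1:9092") // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer")

        val producer = new KafkaProducer[String, Array[Byte]](props)
        try {
          // fetchProtobufBatch is a hypothetical helper that polls the REST
          // endpoint for raw ProtoBuf payloads, keyed here by vehicle id.
          for ((vehicleId, payload) <- fetchProtobufBatch())
            producer.send(new ProducerRecord("car-data-raw", vehicleId, payload))
        } finally producer.close()
      }

      private def fetchProtobufBatch(): Seq[(String, Array[Byte])] = Seq.empty // stub
    }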
Roles:
  • Big Data Architect
Technologies:
  • Hadoop
  • Cloudera
  • Kafka
  • Flume
  • Hive
  • Impala
  • Oozie
  • ProtoBuf
  • Java
  • Bash
  • Linux Shell Scripting
2018.02 – 2018.03: Big Data Developer at Deep Data Ocean
Prototype Predictive Maintenance
We developed a prototype for a German car manufacturer that wants to use car data to predict the maintenance needs of its cars. The system consisted of microservices implemented in Spring Boot. It gathered reference input data from legacy systems via REST API. The system was deployed to Pivotal Cloud Foundry.
Roles:
  • Big Data Developer
Technologies:
  • REST API
  • Spring Boot
  • Hadoop
  • Kafka
  • Pivotal Cloud Foundry

2018.01 – 2018.02: Data Scientist at Deep Data Ocean
Visitor counting for shops
Clothing shops want to know their conversion rate (revenue per visitor). The system extends a conventional visitor-counting system by also identifying the gender of the visitors. The image classification is done with YOLO 2.
Roles:
  • Data Scientist, Developer
Technologies:
  • YOLO 2
Neural Network Playground
TensorFlow Playground is an interactive visualization of neural networks, written in TypeScript using d3.js. It contains a tiny neural network library that meets the demands of this educational visualization: you can simulate small neural networks in real time in your browser and watch the results. Node.js was used to run the playground on the workstation.
Roles:
  • Data Scientist, Developer
Technologies:
  • TensorFlow
  • TypeScript
  • d3.js
  • Node.js
Text recognition on pharmaceutical prescription papers
The text on pharmaceutical prescription papers was to be OCRed. For this we implemented a convolutional neural network using TensorFlow and PHP. The network was trained on one part of the scanned-image data set and then tested against the rest.
Roles:
  • Data Scientist, Developer
Technologies:
  • TensorFlow
  • PHP
2016.07 – 2017.12: Big Data Developer at a Market Researcher
Regional Pharma Market
The Regional Pharma Market is a German data offering based on wholesaler input data. Data is checked, pre-processed, bridged against reference data, and output for pharma clients according to each client's sales-rep geographies and market needs. The system was completely re-implemented on Big Data technologies. The input files are transferred from an FTP server to Hadoop using the FTP file watcher (see below); before the FTP file watcher was finished, the file transfer was done with a shell script. The data is loaded into Spark for processing and output. Configurations of input files and client deliveries are written in JSON. The process is started via a REST API. A Jenkins CI/CD pipeline ensures automatic builds and test execution.
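A minimal Spark sketch of the bridging step, assuming input and reference data land as files on HDFS; all paths, column names, and formats are hypothetical.

    import org.apache.spark.sql.SparkSession

    object RegionalPharmaJob {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("regional-pharma").getOrCreate()

        // Hypothetical inputs: wholesaler deliveries plus reference data on HDFS.
        val input     = spark.read.option("header", "true").csv("hdfs:///landing/wholesaler/")
        val reference = spark.read.parquet("hdfs:///reference/products/")

        // Bridge input records against reference data on an assumed product key.
        val bridged = input.join(reference, Seq("productId"), "left")

        // One output partition per client geography (assumed column).
        bridged.write.partitionBy("clientRegion").parquet("hdfs:///output/regional-pharma/")
        spark.stop()
      }
    }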
Roles:
  • Big Data Developer
Technologies:
  • Scala
  • Cloudera
  • Hadoop
  • Spark
  • JSON
  • Jenkins
2017.04 – 2017.05: Big Data Architect at a German Market Researcher
FTP file watcher
The FTP file watcher automatically transfers input files delivered to an FTP server into Hadoop. The main challenges were detecting the file-upload-complete event and making the microservice setup scalable. The watched FTP sites are configured in a JSON file. Inter-process communication is done using Akka Remote. The FTP access is encapsulated using the rdp4j framework.
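The profile does not spell out how the upload-complete event is detected; one common approach, sketched below under that assumption, is to treat a file as complete once its size is stable across consecutive polls. The FtpEntry type, polling interval, and awaitComplete helper are hypothetical (rdp4j itself drives the actual directory polling).

    import scala.annotation.tailrec

    // Hypothetical snapshot of one remote file, as a directory poll reports it.
    final case class FtpEntry(name: String, size: Long)

    object UploadCompleteDetector {
      // A file counts as fully uploaded once its size is unchanged across
      // `requiredStablePolls` consecutive polls of the FTP directory.
      @tailrec
      def awaitComplete(poll: String => Option[FtpEntry],
                        name: String,
                        requiredStablePolls: Int = 2,
                        lastSize: Long = -1L,
                        stablePolls: Int = 0,
                        intervalMs: Long = 5000L): Option[FtpEntry] =
        poll(name) match {
          case None => None // file vanished; let the caller decide how to react
          case Some(entry) =>
            val stable = if (entry.size == lastSize) stablePolls + 1 else 0
            if (stable >= requiredStablePolls) Some(entry)
            else {
              Thread.sleep(intervalMs)
              awaitComplete(poll, name, requiredStablePolls, entry.size, stable, intervalMs)
            }
        }
    }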
Roles:
  • Big Data Architect
Technologies:
  • FTP-file-servers
  • Hadoop
  • Scala
  • YARN
  • rdp4j
  • Akka
  • JSON
  • Jenkins CI/CD-Pipelines
2016.01 – 2016.07: Big Data Developer at a Market Researcher
Migration of Legacy ETL Tool to Hadoop / Spark
The market researcher was running an Exasol-based system processing input data from a pharmacy coding center. The system was migrated to Hadoop / Spark.
Roles:
  • Big Data Developer
Technologies:
  • Scala
  • Hadoop
  • Spark
AVAILABILITY (TIME & LOCATION)
D2