
Wout Maaskant

available

Last updated: 21.03.2025

Senior Data & ML Engineer | Databricks Expert

Degree: M.Sc. Human Media Interaction (Computer Science)
Languages: German (business fluent) | English (business fluent) | Dutch (native speaker)

Keywords

Databricks ● Java ● Artificial Intelligence ● Amazon Web Services ● Databases ● Continuous Delivery ● Continuous Integration ● Data Integration ● ETL ● Data Warehousing

Attachments

CV-Wout-Maaskant-deutsch_210325.pdf
CV-Wout-Maaskant-english_210325.pdf

Skills

Databricks ● Data Lakehouse ● Extract-Load-Transform (ELT) ● Amazon Web Services (AWS) ● Apache Spark and PySpark ● Python ● SQL, PostgreSQL, MongoDB ● CI/CD, GitOps and MLOps ● Java, Kotlin and Scala ● Terraform ● Software Architecture ● Product Development ● Machine Learning & Artificial Intelligence ● Stakeholder Communication

Project History

03/2021 - 12/2024
Senior Data Engineer
Risk Ident (Internet and Information Technology, 50-250 employees)

I was both the technical project lead and a lead engineer on a project to create a data platform that allows data scientists to run exploratory data analyses and train ML models.

As technical project lead, my responsibilities included helping team members to understand technical aspects of the project, facilitating the exchange of ideas and making decisions, encouraging and monitoring engineering quality, communicating with stakeholders and structuring the process through work breakdowns, estimation and planning.

As an engineer, I was deeply involved in all phases of the project. After exploring and evaluating many technologies, we settled on Databricks with Apache Spark and Delta Lake on AWS as the basis for the platform. Some of the alternatives that I explored and evaluated included Google BigQuery, Airflow, dbt, Tableau and Looker.

My most significant contributions were in the areas of technology evaluation, the creation of architectural concepts (e.g. for infrastructure, deployment, data modeling and data access control), as well as implementation. A particular challenge was to ensure data separation and access control, as the platform collected data from multiple customers, whose data had to be kept separate. This challenge was solved using Databricks' integrated data catalog (Unity Catalog).
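
To make the data-separation approach concrete, here is a minimal sketch of per-customer access control with Unity Catalog grants, as it could be run from a Databricks notebook or job; the catalog, schema and group names are hypothetical, not those of the actual platform:

```python
# Minimal sketch of per-customer data separation via Unity Catalog.
# Catalog, schema and group names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

CATALOG = "platform"  # assumed to already exist in the metastore

for customer in ["customer_a", "customer_b"]:
    schema = f"{CATALOG}.{customer}"
    group = f"ds_{customer}"  # hypothetical account-level group

    spark.sql(f"CREATE SCHEMA IF NOT EXISTS {schema}")
    # Each group can see the catalog, but only read its own schema.
    spark.sql(f"GRANT USE CATALOG ON CATALOG {CATALOG} TO `{group}`")
    spark.sql(f"GRANT USE SCHEMA ON SCHEMA {schema} TO `{group}`")
    spark.sql(f"GRANT SELECT ON SCHEMA {schema} TO `{group}`")
```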

The platform made it much easier for data scientists and others in the company to access and work with data and gain insights, and it helped to standardize data science processes and technologies.

I did requirements engineering, technology selection, proofs of concept, conceptual and architectural work, implementation of infrastructure (IaC), ETL pipelines and CI/CD pipelines, data modeling, dashboarding, integration of operational systems, user onboarding and evaluation. Test-driven development, architecture documentation, presenting, Scrum.

I used Databricks, Delta Lake, lakehouse architecture, AWS (IAM, EC2, Lambda, S3, CloudWatch, SNS), Apache Spark (Scala, Python, SQL), Unity Catalog, DataFrame API, BigQuery, Tableau, Metabase, Scala (Cats Effect, FS2, ScalaTest), Python (library development, PyTest), Apache Kafka, Prometheus, Terraform, PostgreSQL, GitHub, CI/CD, Confluence, Jira, arc42
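
To give a flavor of the ETL pipelines listed above, a minimal bronze-to-silver step on Delta Lake in PySpark; table and column names are hypothetical, not taken from the actual platform:

```python
# Minimal bronze-to-silver ETL sketch on Delta Lake.
# All table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw events as ingested, one row per event.
bronze = spark.read.table("platform.bronze.transactions")

# Silver: deduplicated, typed and lightly cleaned.
silver = (
    bronze
    .dropDuplicates(["transaction_id"])
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .withColumn("processed_at", F.current_timestamp())
)

(
    silver.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("platform.silver.transactions")
)
```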


05/2019 - 03/2021
Data Engineer
Risk Ident (Internet and Information Technology, 50-250 employees)

I designed and implemented microservices that made production data available to data scientists, as well as a microservice in the real-time pathway of the core product that scored transactions using the ML models the data scientists had trained.
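
As an illustration of the shape of such a scoring service, a minimal Python sketch; the production service was written in Scala, and the topic names, message schema and model stub here are hypothetical:

```python
# Minimal sketch of a Kafka-based transaction scoring loop.
# Topic names, message schema and the model stub are hypothetical;
# the production service was written in Scala.
import json

from confluent_kafka import Consumer, Producer


def score(transaction: dict) -> float:
    """Stand-in for the trained ML model."""
    return 0.0


consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "scoring-service",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["transactions"])
producer = Producer({"bootstrap.servers": "localhost:9092"})

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    transaction = json.loads(msg.value())
    result = {"id": transaction["id"], "score": score(transaction)}
    producer.produce("transaction-scores", value=json.dumps(result).encode())
    producer.flush()
```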

Furthermore, I took the initiative to design and roll out a GitOps-based continuous deployment process that ensured the core product was deployed in a reproducible and safe way. A particular challenge was finding a way to allow data scientists to deploy ML models through this process outside of the regular deployment cycle.
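
A common way to decouple model deployments in a GitOps setup is to treat the model version as declarative state in the deployment repository, so that a committed manifest change triggers the rollout. A minimal sketch under that assumption; the repository layout, manifest path and field names are hypothetical, not the process actually used:

```python
# Minimal GitOps-style promotion sketch: bump the model version in a
# declarative manifest and commit it, leaving the rollout to the CD
# tooling. Manifest path and field names are hypothetical.
import subprocess

import yaml  # PyYAML

MANIFEST = "deploy/scoring-service.yaml"


def promote_model(version: str) -> None:
    with open(MANIFEST) as f:
        manifest = yaml.safe_load(f)

    manifest["model"]["version"] = version

    with open(MANIFEST, "w") as f:
        yaml.safe_dump(manifest, f)

    # The commit is the deployment trigger; the CD pipeline reconciles it.
    subprocess.run(["git", "add", MANIFEST], check=True)
    subprocess.run(
        ["git", "commit", "-m", f"Deploy model {version}"], check=True
    )


promote_model("1.4.2")
```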

I did design and implementation of real-time and non-real-time microservices, design and implementation of a GitOps CI/CD deployment pipeline, and MLOps. Test-driven development, domain and architecture documentation, presenting, Scrum.

I used Scala (Cats Effect, Monix, ZIO, ScalaTest), MongoDB, Apache Kafka, Grafana, InfluxDB, Prometheus, Python, Apache Spark (Scala), Kubernetes, Ansible, Jenkins, Nexus, Confluence, Jira

01/2016 - 07/2018
Senior Software Engineer
Deposit Solutions (now: Raisin) (Banking and Financial Services, 50-250 employees)

As an engineer and technical lead in various teams in a fast-growing company, I worked on both the original monolithic application and the various microservices that were to replace it.
My most significant contributions were in designing the architecture for the new REST-based core microservices, collaborating with stakeholders from non-engineering teams to define functional and technical requirements, and serving as the technical contact for customer integration projects. Furthermore, I onboarded and mentored many new software engineers.

I did microservice development, customer contact, RESTful API design and the introduction of a container-based delivery pipeline. Domain-driven design, test-driven development, domain and architecture documentation, onboarding, mentoring, presenting, Scrum, OKR.

I used Java, Spring Boot, REST, Swagger (OpenAPI), MariaDB, Hibernate, Dropwizard, Docker Compose, Ansible, Jenkins (Java/Groovy API), GitLab, Confluence, Jira, arc42
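
The services themselves were built in Java with Spring Boot and documented with Swagger/OpenAPI; purely as a language-neutral illustration of the REST resource style, a minimal sketch in Python with FastAPI, which generates the OpenAPI contract automatically (resource and field names are hypothetical):

```python
# Minimal REST resource sketch; resource and field names are
# hypothetical, and the actual services used Java and Spring Boot.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Accounts Service")


class Account(BaseModel):
    id: int
    owner: str


ACCOUNTS: dict[int, Account] = {}  # in-memory stand-in for the database


@app.post("/accounts", response_model=Account, status_code=201)
def create_account(account: Account) -> Account:
    ACCOUNTS[account.id] = account
    return account


@app.get("/accounts/{account_id}", response_model=Account)
def get_account(account_id: int) -> Account:
    if account_id not in ACCOUNTS:
        raise HTTPException(status_code=404, detail="Account not found")
    return ACCOUNTS[account_id]
```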

Certificates

Optimizing Apache Spark on Databricks
Databricks
2023
Big Data Architecture
Fraunhofer IAIS
2015
iSAQB Certified Professional for Software Architecture – Foundation Level
iSAQB
2014

Willingness to Travel

Available worldwide