Description
Freelance Opportunity: Data Engineer
Location: Berlin, 2 days on-site, 3 days remote
Start Date: ASAP
Duration: 6 months
About Us
Our client is a fast-growing technology company based in Berlin, focused on building scalable, data-driven products that empower its users to make smarter decisions. The team is international, collaborative, and passionate about leveraging data to create real impact.
Role Overview
They are looking for a freelance Data Engineer to help them scale their data infrastructure. You’ll be instrumental in designing, building, and optimising their data pipelines, ensuring high-quality, reliable data flows across the organisation.
Key Responsibilities
Design, implement, and maintain scalable ETL/ELT pipelines.
Optimise data workflows for performance and reliability.
Work with analytics, product, and engineering teams to gather requirements.
Set up and maintain data warehousing solutions (e.g., BigQuery, Redshift, Snowflake).
Ensure data quality and implement monitoring/alerting solutions.
Document architecture and technical decisions clearly.
Requirements
Proven experience as a Data Engineer (3+ years preferred).
Strong experience with Python and SQL.
Experience with modern data stack tools (e.g., dbt, Airflow, Prefect, Fivetran).
Cloud platform experience (GCP, AWS, or Azure).
Experience working with APIs and structured/unstructured data sources.
Familiar with version control (Git) and CI/CD practices.
Nice to Have
Experience with real-time data processing (Kafka, Spark Streaming, etc.).
Knowledge of data governance and GDPR best practices.
Familiarity with Looker, Power BI, or other BI tools.
If you are interested in this role or would like to hear about other opportunities, please get in touch and send in your CV.