Profile: Sean Mac Donnchadha, freelance IT consultant (no temporary staffing / AÜG / ANÜ), based in Griesheim

Sean Mac Donnchadha

Available

Last updated: 11.03.2024

Freelance IT consultant (no temporary staffing / AÜG / ANÜ, please)

Degree: B.Eng. (Hons), Mechanical Engineering, The University of Edinburgh
Languages: German (good) | English (native speaker) | French (good) | Dutch (basic)

Attachments

AWS-Certified-Cloud-Practitioner-certificate_180423.pdf

Skills

Development:
Java SE / EE, C++ (incl. STL, Boost, ACE), C, C#, Pascal, LISP, FORTRAN, PL/SQL, Transact SQL (Sybase, MS), Perl, Shell / Korn Shell, Powershell, TCP/IP Sockets, multi-threading, Spring Boot, middleware - Kafka, Tuxedo, WebLogic, Reliable Queues

Cloud:
AWS Certified Cloud Practitioner
AWS: direct experience with EC2 / S3 / VPC / VPN / NAT / IAM / API Gateway / Lambda / DynamoDB / SQS / SNS / SES / Secrets Manager / Cognito / Certificate Manager / CloudFormation / CloudWatch, data centre migration, deployment with Terraform

System administration:
    UNIX - SUN / DEC / SGI / IBM / HP
    Linux - RedHat / SuSE / Ubuntu / CentOS / Amazon Linux
    MS - Windows / Windows Server / Active Directory

Development Tools:
    GIT (Stash/GitLab/GitHub), SVN, CVS, SCCS, Visual Source Safe, Azure DevOps, Hudson, Jenkins

Project history

01/2022 - 02/2023
Backend Engineer, eBay Kleinanzeigen

The first project was the migration of the eBay Kleinanzeigen website and its associated services from a legacy Kafka cluster to multiple new Kafka clusters. The systems ran under Kubernetes, were written in Java (8, 11, 17) and Kotlin, and used Spring and Spring Boot to provide both microservices and a monolith. The migration involved new Kafka and MirrorMaker 2 configurations, code cleanup and code modernisation. eBay Kleinanzeigen is part of Adevinta, so some interaction with centrally provided services was also required.
The second project was the migration of analytics data from a Hadoop-based platform to a Databricks-based platform. My part in this project was the analysis of the existing Hadoop data feeds (Kafka / Flume), the onboarding of a new engineer, and the implementation of Kafka configurations and backend changes.

01/2020 - 12/2021
Senior Software Engineer, EnBW

The project context was commodities trading, in this case the trading of energy (power and gas), under an EnBW programme to replace on-premises legacy systems with cloud-based systems. The contract was to architect, design and develop a high-volume, low-latency connector between a cloud-based Volue DeltaXE instance and EnBW on-premises backend systems (Endur, EV) / EnBW cloud-based systems (Trayport autoTRADER, Exxeta, EMSys VPP for renewables). The implementation used AWS Lambda to provide scalable microservices. KRITIS relevance was an important factor in the architecture and design of the connector and in the way DeltaXE is operated. The DeltaXE instance is used for position keeping, asset dispatch and TSO / SSO nominations. I was responsible for all security aspects of the project, as well as for defining the deployment (Azure DevOps) and the operations criteria and methodology.
The system was written in Java, defined a REST API and used the AWS components API Gateway, S3, Lambda, DynamoDB, Cognito, Certificate Manager, IAM, VPC, VPC Endpoints, NAT Gateway, CloudWatch Logs/Metrics/Events and SNS, with Terraform and OnePlatform used for provisioning.

07/2018 - 03/2020
Development Lead, Siemens
Siemens (>10,000 employees)

The contract was to architect, design and develop systems to aid in the automation of software quality assurance.
The first system acted as a message broker between systems executing software security tests and Atlassian Jira. The system was written in Java, defined a REST API and used AWS components (API Gateway, Lambda, DynamoDB, Load Balancer, EC2, S3, VPC, VPC Endpoints, IAM, Secrets Manager, CloudWatch Logs/Metrics/Events, SNS). CI/CD using GitLab, automated unit/system/load tests, deployment using Terraform, security and network configuration aspects, cost control and operations procedures were part of the deliverable, including an NGINX reverse proxy which could re-sign AWS requests.
Messages sent to the system included requests to create / update Jira issues, requests to upload Cucumber testcases and test executions, requests to download Confluence content, and triggers to indicate the start and end of a Cucumber testcase run (in order to trigger Jira issue validation). The system made use of static data which was read periodically from Confluence pages (HTML) and stored in DynamoDB. The system supported many projects, some of which had specific message handling requirements.
The second system made use of AWS scale to search log files for regular expressions (regex). It broke an incoming request into sub-requests, executed these in parallel, and collated the end results. The system was asynchronous.
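The fan-out/fan-in approach described above can be sketched as follows. This is a simplified, hypothetical illustration only: the original system distributed sub-requests across AWS at scale and worked asynchronously, whereas this sketch uses a local thread pool, and all names in it are invented for the example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.regex.Pattern;

// Illustrative fan-out/fan-in: split a batch of log lines into chunks,
// scan each chunk for a regex in parallel, then collate the matches in order.
public class ParallelLogSearch {
    public static List<String> search(List<String> lines, String regex, int workers)
            throws InterruptedException, ExecutionException {
        Pattern pattern = Pattern.compile(regex);
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        int chunk = Math.max(1, (lines.size() + workers - 1) / workers);
        List<Future<List<String>>> futures = new ArrayList<>();
        // Fan out: one sub-request per chunk of the input.
        for (int start = 0; start < lines.size(); start += chunk) {
            List<String> slice = lines.subList(start, Math.min(start + chunk, lines.size()));
            futures.add(pool.submit(() -> {
                List<String> hits = new ArrayList<>();
                for (String line : slice) {
                    if (pattern.matcher(line).find()) hits.add(line);
                }
                return hits;
            }));
        }
        // Fan in: collate sub-results, preserving the original order.
        List<String> result = new ArrayList<>();
        for (Future<List<String>> f : futures) result.addAll(f.get());
        pool.shutdown();
        return result;
    }

    public static void main(String[] args) throws Exception {
        List<String> logs = List.of("INFO ok", "ERROR disk full", "WARN slow", "ERROR timeout");
        System.out.println(search(logs, "ERROR", 2)); // [ERROR disk full, ERROR timeout]
    }
}
```

In a serverless variant of this pattern, each chunk would become a separate asynchronous invocation and the collation step would run once all sub-results are available.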
This was a service contract. Development was off-site, communication was by email / Circuit (similar to Skype).

01/2017 - 04/2018
Configuration Manager, NTT Data Services
NTT DATA Services (>10,000 employees)

The contract was to manage and implement a successful transition of Dell Services IT systems to NTT DATA in Germany and Slovakia, liaising with the new owners in the US and with management in Germany and India. This involved re-branding, change of email system, change of networks, change to Office 365, change of Active Directory domain (re-imaging of user laptops), etc.
A major part of the work was to take over a Datacentre / office in Munich, understand and document the systems and configuration, correct various configuration and security issues, consolidate and clean up the systems, and prepare all Virtual Machines (VMware based) for a move to the cloud. The workloads (VMs) were Windows systems and multiple flavours of Linux. This was done with minimum disruption to the running systems. The cloud provider chosen, after a cost-benefit analysis of the main providers, was AWS. I architected / designed / configured the cloud security and backup scheme and then moved the Virtual Machines and data into the cloud using the AWS CLI, AWS VMware Connector, and SCP / SFTP. I set up VPN access to the VPC using SoftEther. All assets from Munich were recovered, and the office was then closed.
The final part of the contract was an office move in Frankfurt. My part in this was the specification, purchase and configuration of the internet connection, network and telephone (VoIP) systems, and to ensure that the office was usable on the first working day.

06/2015 - 06/2016
Automation Engineer, Clearstream
Clearstream Services & Deutsche Börse

Clearstream is part of the Deutsche Börse Group. The contract was with the production support team, working with in-sourced customer systems, introducing DevOps practices and automating support functions through scripting based on a Jenkins server. I designed a reliable deployment system based on Perl and put it into use throughout the software development lifecycle (development, test, production) for energy markets. I also put in place full documentation and procedures for energy-market operations in Confluence.
I designed and developed an automated execution system in which in-sourcing customers delivered procedures defined in an Excel spreadsheet (with Visual Basic coding); these were then automatically routed to the appropriate environment and executed.
Various other scripts were also developed in PowerShell to automate tasks that were previously manual.

04/2014 - 07/2014
Principal Engineer / Performance Architect, Dun & Bradstreet
Dun & Bradstreet

The contract had two aspects: membership of the Enterprise Services team, working on the performance aspects of a large capacity-expansion project, and more general development-process setup. Personnel were spread between Ireland, India and the US.
The performance work included managing the development aspects of software performance and scalability, performance testing, identification and remediation of performance bottlenecks, and hardware acquisition. The system was written in Java SE in a SOA fashion, with much use of multi-threading, multi-processing, HTTP calls, Oracle databases, OSB, and Spring components (Batch). The containers were JBoss and Apache.
A second piece of performance work involved remediation of the audit system and its database, optimisation of SQL, and the implementation of partitioning and additional indexes.
The development process work included a source code repository migration (Serena Dimensions to Atlassian Stash GIT), writing development standards and their governance procedures, documenting the systems in UML with Enterprise Architect, helping release management refine release procedures, and putting procedures and structure in place for managing offshore teams (internal and external) who were doing the actual coding.

09/2013 - 03/2014
CI / CD Engineer, HP International Bank
HP International Bank

The contract was to clean up and automate the build and deployment processes for the bank's systems after a move from StarTeam & CruiseControl to SVN & Jenkins. The systems software was C# built with MSBuild (on top of Plumtree), packaged originally with WiX and later moved to InstallShield, with the build process controlled by NAnt and MSBuild as the triggering interface. The contract also involved day-to-day troubleshooting of build / deployment failures and putting in place measures to improve stability, including a move away from PsExec for remote execution in favour of PowerShell. Personnel were spread between Ireland, China and the US.

03/2011 - 09/2012
Analyst / Architect / Software Engineer, Main First Bank
MainFirst Bank AG

The MainFirst Group of companies comprises Equity Brokerage, Fixed Income, Asset Management and Corporate Advisory departments. The bank has offices in several countries, but with all Equities and Fixed Income trading taking place in Frankfurt. This contract related to Equities and Fixed Income Brokerage processing systems. The contract had two phases.
The first phase of this contract was to analyse and document the bank's in-place Equities Brokerage processing systems, based around a legacy AS/400 / DB2 / RPG system, with files handled by a peripheral MS Windows 2003 server. This was followed by a risk analysis and the production of a report and board presentation covering this analysis and recommending actions to reduce these risks. This first phase was complicated by the illness of the maintainer of the legacy systems.
The second phase of the contract involved the architecture, design, implementation and testing of a replacement for the legacy systems. The replacement system was based on an Oracle database (running on a Windows 2008 R2 server) and PL/SQL, using the Fidessa JOAL API and a Java SE application running as a Windows service to receive data from the Fidessa trading system and feed the database. Receipt of trading data then triggered data processing and the feeding of the back-office Sungard RIMS system, the BNP Paribas and KAS Bank settlement systems, and the bank accounting systems (EFDIS and Sun). Real-time trading data was processed in an STP manner; file-based data (received via FTP and Connect:Direct) was processed by timed Oracle jobs. Position keeping, risk, P&L and accounting system feeds were also prepared. The design, optimisation and documentation of the Oracle database schema and of the Java application receiving data from Fidessa were aided by the Enterprise Architect application (similar to Rational Rose). This system is in production and has been handed over to MainFirst staff. It is monitored using the MDSL monitoring system.
A source code repository was also set up (Subversion, replicated to a backup server) for all bank development (Equities Brokerage and Risk).

10/2002 - 03/2006
Associate Director, UBS
UBS Investment Bank

The position was in the IT Equities division (incorporating Fixed Income brokerage), Core Trading Systems group. It involved the analysis, design, planning, implementation and maintenance of interfaces from UBS's capital markets trading system (CaTS) to order-originating and order-routing systems based in Switzerland, London and the US. The work involved C++, STL, Boost, ACE, XML, CppUnit, RogueWave, Sybase, Oracle, ToolTalk, X Windows, socket-based TCP/IP connections, IPC, and multithreading on Sun Solaris and Red Hat Linux platforms. The development environment used ClearCase, with Sun Workshop on Solaris and DDD on Linux. Design work was done in UML using Rational Rose.
In chronological order: I was responsible for the design and implementation of a server which handled the processing of trades from a new trade feed, and for bridging / translation servers for orders and trades in and out of the previous trading system to the new system. I was also responsible for the improvement and re-definition of the build and release procedures for the trading systems development groups, was responsible for a compiler upgrade, and was the code manager for the core trading systems group during this period. I later handed this function over to a dedicated configuration manager.
I was responsible for the analysis, design and implementation of a flexible calculation engine for an automated trading application as part of the new trading system, and then went on to be responsible for, and project manager of, a new project to transfer Fixed Income business from the old trading system to the new one and to automate and match / internalise as much trading as possible. The system connects to multiple markets, including a FIX and market data link to Bloomberg. I was personally responsible for the analysis, specification and design of the trading system, using Haley rules-engine technology to evaluate orders for automatic execution. I was also the team leader for the group responsible for the implementation and testing of this system, with a team of varying size (3 to 15 people) working for me depending on what was required at various times during the project. The project involved much liaison with business management, traders and back-office staff for the specification, design and testing of the system. From go-live (January 2005) until leaving UBS I was responsible for the further development of the Fixed Income trading systems, while still retaining development duties. The system was developed in C++ (server side) and C# (GUI).

09/2001 - 06/2002
Team Leader / Senior Software Engineer, Clearstream
Clearstream

The position involved leading the System-to-System Connectivity (SSC) group, using ClearCase, Connect:Direct, C/C++, Java, TIBCO Rendezvous, shell scripting, multi-processing, multi-threading, RogueWave, Oracle and Pro*C on a Sun Solaris platform. I led the team of SSC software engineers in the development and support of the Clearstream SSC servers. I was also tasked with establishing and agreeing development requirements with the business and with other IT projects within the company. I provided development and testing estimates to the SSC development manager and produced project plans for this development. I was also responsible for the architecture and design of the system to support the required volume, performance and availability targets, and had to ensure that all development was delivered on time and within budget, and that there were clear procedures for the build and release of the group's systems.

Certificates

AWS Certified Cloud Practitioner
2023

Willingness to travel

Remote only