Your Area of Work
We’re looking for a hands-on Data Pipeline Engineer to build, run, and maintain robust data pipelines that power our fund data operations. You will implement business rules directly in pipelines, operationalize and schedule scripts, monitor data quality, and ensure timely, reliable delivery to downstream stakeholders. The ideal candidate brings fund data domain knowledge and experience in banking, asset management, or fund administration.
Your Responsibilities
- Build & Maintain Pipelines: Design, implement, and operate batch/streaming data pipelines (ingestion, transformation, validation, and delivery) across cloud/on‑prem environments. Handle nested data structures, conditional logic, loops, and dynamic field mappings.
- Data Transformation & Mapping Business Rules: Translate fund data requirements into executable rules (e.g., validation, enrichment, exception handling) directly in code and/or orchestration layers. Normalize, format, and validate data during transformation.
- Run & Monitor Jobs: Own daily runs, scheduling, and monitoring, including incident response, root‑cause analysis, and recovery.
- Data Quality & Controls: Define and maintain DQ checks (completeness, timeliness, accuracy, conformity); implement automated alerting, lineage, and audit trails.
- Scripting & Automation: Develop and maintain Ruby/Python scripts, reusable components and helper functions; automate routine tasks and reduce manual touchpoints.
- Domain Collaboration: Work closely with Operations, IT, and Data/AI teams to align logic with fund data standards (NAVs, share classes, benchmarks, holdings, fees, EMT/EPT, ESG/SFDR).
- Documentation & SOPs: Maintain technical docs, runbooks, and standard operating procedures; support change management and release readiness.
- Security & Compliance: Follow bank-grade standards for access control, PII handling, segregation of duties, and production change workflows (ITIL).
- Continuous Improvement: Proactively identify stability, performance, and cost optimization opportunities.
Your Profile
Required Qualifications
- Experience: 3–5 years in data engineering or production data ops, including at least 2 years working with fund data at a bank, asset manager, or fund administrator.
- Tech Stack (hands-on):
  - Languages: Python/Ruby for scripting; SQL (advanced).
  - Transformations: Scriban or similar templating engines.
  - Versioning & CI/CD: Git, pull requests, branching strategies.
- Data Quality: Experience building validation rules, reference/master data controls, and reconciliation (e.g., vendor data vs. internal golden sources).
- Run-the-business Mindset: Comfortable with SLAs and incident management.
- Communication: Can translate business rules into technical logic and explain trade-offs to non-technical partners.
Preferred Qualifications
- Fund Data & Vendors: Familiarity with NAV production cycles, share‑class metadata, benchmarks, holdings, fees, corporate actions; vendor feeds such as Bloomberg, Refinitiv, Morningstar, fund docs (KIID/KID), and templates (EMT/EPT).
- Regulatory Context: Awareness of MiFID II, SFDR, UCITS/AIFMD data implications and downstream publication requirements.
- Frameworks & Practices: ITIL, SDLC, change/release management in regulated environments.
- Security: Secrets management, key vaults, least-privilege access, data masking.