Data Lake Engineer (f/m/x)
Our client is an international financial services provider, serving companies and private clients in a wide range of industries in many European countries. To support the existing team, our customer is currently looking for a Data Lake Engineer (f/m/x).
These exciting tasks are waiting for you:
- Work with data and analytics experts to strive for greater functionality in the data systems.
- Collaborate across the enterprise to enable and share best practices and reusable and scalable tools and code for our analyst community.
- Design, build, and test the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of sources using SQL and AWS 'big data' technologies (DevOps & Continuous Integration); build data integrations from various sources and technologies into the data lake infrastructure as part of an agile delivery team.
- Drive the advancement of the data infrastructure by designing and implementing the underlying logic and structure for how data is set up, cleansed, and ultimately stored for organizational usage.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data deliveries, re-designing infrastructure for greater scalability.
- Build and select analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Monitor system capabilities and react to unplanned interruptions, ensuring that environments are provisioned and loaded on time.
- Manage incidents reported by data deliverers or data consumers, and provide service reporting.
- Take end-to-end responsibility for tasks, from assignment to completion.
Your experience so far:
- Suitable technical education (technical school or a relevant university degree).
- About 3 years of experience in implementing integration components and applications, especially in the area of big data projects or systems integration.
- 2 or more years of Hadoop experience (Hortonworks and Cloudera preferred; AWS Cloud an advantage) building data ingestion and transformation pipelines, including knowledge of database components such as ETL processes, relational database management systems, and BI tools.
- Hands-on experience in data discovery, data blending, and data cleansing for analytical purposes, drawing on a variety of data sources (e.g. internal data warehouses, weblogs, social media, data market providers).
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Strong analytical skills for working with unstructured datasets.
- Solid experience with CD/DevOps methodologies and a good overview of the related tools and tool chains.
- Demonstrated ability to build processes that support data transformation, data structures, metadata, dependency management, and workload management (mining tools, analytic applications, and metadata management are advantageous).
- Practical experience in managing, monitoring, and supporting an application, including job scheduling (e.g. UC4).
- A combination of technical ability and business acumen, with an excellent understanding of industry standards and technology trends.
- Fluency in English; German is appreciated but not mandatory.
What our client offers:
- Collaboration in a challenging and international environment
- Flexible working hours
- Option to work from home by arrangement
- Many additional benefits in a corporate environment