We are looking for Data Engineers to join our Big Data & Analytics team, whose objective is to support Wind Tre in becoming a data-driven company. As a Data Engineer, you’ll work within the Data Governance & Information Services Team in a challenging, customer-oriented environment.
If you’re passionate about Data Modeling, Data Lifecycle and Data Management and you’d like to work in an agile way, we would like to meet you!
Main responsibilities and activities
Architect highly scalable data pipelines
Process large-scale, multi-structured data and deliver optimized data sets
Work closely with Data Scientists in an agile, collaborative environment to deliver advanced analytics and AI use cases
Required Skills
Experience with Data Management processes and architectures: DWH, ETL, Data Quality, Data Modeling, Metadata Management, MDM, Data Security, Data Enrichment
Experience with object-oriented and functional programming, software engineering and testing patterns for large-scale data processing
Proficiency with the Hadoop stack and open-source ecosystem (Apache Hadoop, Spark, Beam, Kafka)
Knowledge of NoSQL databases such as MongoDB and Neo4j
Preferred: knowledge of GCP services such as BigQuery, Dataproc, Dataflow, Pub/Sub
Familiarity with Python, Java, JavaScript, SQL
2–5 years’ experience in Data Management and Big Data Engineering at major consulting firms or large international companies
Education
MSc in Computer Engineering, Computer Science or a related discipline