Sr Data Integration Engineer
Posted on: May 15, 2022
Senior Data Integration Engineer
Kimberly-Clark is on a mission to transform into a data-driven,
AI-first company.
It is the vision of this team to embed an algorithm into every K-C
decision, process, and product.
We are looking for a hardworking, aspirational, and innovative
engineering leader for the Senior Data Integration Engineer
position on our AI Engineering and Innovation team. The Senior Data
Integration Engineer will play a diverse and far-reaching role
across organizations, providing leadership and influencing the
adoption of technical solutions, data processing approaches, and
design patterns across multiple teams and partners within
Kimberly-Clark.
- Work with technical architects, product owners, and business
teams to translate requirements into technical designs for data
modelling and data integration
- Demonstrate a deep background in data warehousing, data
modelling, and ETL/ELT data processing patterns
- Design and develop ETL/ELT pipelines with reusable patterns
- Design and build efficient SQL to process and curate data sets
in HANA, Azure, and Snowflake
- Design and review data ingestion frameworks leveraging Python,
Spark, Azure Data Factory, Snowpipe, etc.
- Design and build data quality models and ABCR (audit, balance,
control, and reconciliation) frameworks to ingest, validate,
curate, and prepare the data for consumption; a reconciliation
sketch follows this list
- Understand the functional domain and business needs, and
proactively identify gaps in the requirements
- Work with platform teams to design and build processes for
automating pipeline builds, testing, and code migrations
- Demonstrate exceptional impact in delivering projects, products,
and/or platforms: scalable data processing and application
architectures, high-quality technical deliverables, and consistent
delivery throughout the project lifecycle.
- Provide design and guiding principles for building data models
and semantic models in Snowflake, enabling true self-service
- Ensure the effectiveness of the ingestion and data delivery
frameworks and patterns.
- Build and maintain data development standards and principles,
and provide guidance and project-specific recommendations as
needed
- Must be conversant with DevOps delivery approaches and tools,
and have a track record of delivering products in an agile model.
- Provide insight and direction on the roles and responsibilities
required for platform/product operations
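
For illustration only, a minimal Python sketch of the kind of
ABCR-style reconciliation check described above, assuming a
hypothetical Snowflake staging/curated table pair; the account,
credentials, and table names are placeholders, not K-C systems:

    # Minimal sketch: ABCR-style (audit, balance, control, reconciliation)
    # row-count check between a staged extract and its curated target.
    import snowflake.connector

    def row_count(conn, table: str) -> int:
        # Count rows in the given table (table name assumed trusted).
        with conn.cursor() as cur:
            cur.execute(f"SELECT COUNT(*) FROM {table}")
            return cur.fetchone()[0]

    conn = snowflake.connector.connect(
        account="example_account",  # placeholder
        user="etl_user",            # placeholder
        password="***",             # placeholder; use a secrets manager in practice
        warehouse="ETL_WH",         # placeholder
    )
    try:
        staged = row_count(conn, "RAW.ORDERS_STAGE")  # hypothetical staging table
        curated = row_count(conn, "CURATED.ORDERS")   # hypothetical curated table
        if staged != curated:
            raise ValueError(
                f"Reconciliation failed: {staged} staged rows vs {curated} curated"
            )
    finally:
        conn.close()

In practice this balance check would be one rule in a broader
framework that also audits load metadata and reconciles control
totals, but the shape is the same: compare source and target
measures and fail the pipeline loudly on a mismatch.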
- 10+ years of experience designing, developing, and building
ETL/ELT pipelines, procedures, and SQL on MPP platforms such as
HANA, Snowflake, and Teradata
- Experience designing and building metadata-driven data ingestion
frameworks using SAP BO/Data Services, Azure Data Factory, SnowSQL,
and Snowpipe, as well as building mini-batch, real-time, and
event-driven data processing jobs (see the sketches after this
list)
- Proficient in distributed computing principles, modular
application architecture, and various data processing patterns:
real-time, batch, lambda, and other architectures
- Experience with a broad range of data stores: object stores
(Azure ADLS, HDFS, GCP Cloud Storage), row and columnar databases
(Azure SQL DW, SQL Server, Snowflake, Teradata, PostgreSQL,
Oracle), NoSQL databases (Cosmos DB, MongoDB, Cassandra),
Elasticsearch, Redis, and data processing platforms such as Spark,
Databricks, and SnowSQL
- Hands-on experience with Docker, Kubernetes, and cloud
infrastructure such as Azure, AWS, and GCP.
- Experience with one or more programming languages such as
Python, Java, and Scala is preferred
- Familiarity with Azure Stream Analytics, Azure Analysis
Services, Data Lake Analytics, HDInsight, HDP, Spark, Databricks,
MapReduce, Pig, Hive, Tez, SSAS, Watson Analytics, etc.
- Strong knowledge of source code management, configuration
management, CI/CD, security, and performance.
- Ability to look ahead to identify opportunities and thrive in a
culture of innovation
- Self-starter who can see the big picture and prioritize work to
make the largest impact on the business's and customers' vision
and requirements
- Experience building, testing, and deploying code to run on an
Azure cloud data lake
- Ability to lead, nurture, and mentor others on the team.
- A can-do attitude in anticipating and resolving problems to
help your team achieve its goals.
- Must have experience in Agile development methods
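
For illustration only, a minimal PySpark sketch of a
metadata-driven ingestion loop of the kind described above; the
control metadata, paths, and table names are hypothetical
placeholders (a real framework would read them from a control
table, not hard-code them):

    # Minimal sketch: one generic loader driven by per-source metadata.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("metadata_driven_ingest").getOrCreate()

    # Hypothetical control metadata; in practice this lives in a control table.
    LOADS = [
        {"source": "abfss://raw@examplelake.dfs.core.windows.net/orders/",
         "format": "parquet", "target": "stage.orders"},
        {"source": "abfss://raw@examplelake.dfs.core.windows.net/customers/",
         "format": "csv", "target": "stage.customers"},
    ]

    for load in LOADS:
        df = (spark.read.format(load["format"])
                   .option("header", "true")  # only relevant for CSV sources
                   .load(load["source"]))
        # Write to the metastore-backed table named in the metadata entry.
        df.write.mode("overwrite").saveAsTable(load["target"])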
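
And a minimal sketch of a mini-batch, event-driven job using Spark
Structured Streaming, again with hypothetical paths and schema:

    # Minimal sketch: pick up new JSON event files as they land in the
    # raw zone and append them to a staging table in micro-batches.
    from pyspark.sql import SparkSession
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType

    spark = SparkSession.builder.appName("stream_orders").getOrCreate()

    schema = StructType([
        StructField("order_id", StringType()),
        StructField("order_total", DoubleType()),
    ])

    events = (spark.readStream.schema(schema)
                   .json("abfss://raw@examplelake.dfs.core.windows.net/order_events/"))

    query = (events.writeStream
                   .outputMode("append")
                   .option("checkpointLocation", "/tmp/checkpoints/order_events")
                   .toTable("stage.order_events"))
    query.awaitTermination()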