Senior Data Solution Architect-Data Fabric/Mesh

IBM

IT
Posted on Friday, September 29, 2023
Introduction
At IBM, work is more than a job – it’s a calling: To build. To design. To code. To consult. To think along with clients and sell. To make markets. To invent. To collaborate. Not just to do something better, but to attempt things you’ve never thought possible. Are you ready to lead in this new era of technology and solve some of the world’s most challenging problems? If so, let’s talk.

Your Role and Responsibilities
The Data Solution Architect supports a data transformation approach that leverages Data Fabric and Data Mesh concepts for modernized/transformed US Govt programs. The Data Solution Architect will work on large, complex US Govt programs where the full lifecycle of data must be addressed and an overall data platform or capability must be integrated into the broader technical solution. This includes leading design, prototyping, and pilot implementation, as well as estimating lifecycle solution scope and cost.

The Data Solution Architect must be familiar with traditional data technologies (Data Warehouse, Data Lake) as well as transformative capabilities such as Data Lakehouse, Data Fabric, and Data Mesh in order to define go-forward strategies that are compliant for US Govt clients – including addressing Federal Cloud security requirements, data privacy, data security, and data integration approaches approved for US Govt use.

During implementation, the Data Solution Architect will work with both the broader IBM Consulting data leadership team and key project stakeholders to gather data requirements and to define and execute the data strategy. You will help lead multiple agile sprints supporting data management, data cataloging, and ETL/Data Conversion activities, leading into the build-out of a modernized Data Fabric and Platform that supports and feeds business benefits including automation, AI/ML, analytics, and data visualization efforts.

The Data Solution Architect will be responsible for:

  • Capturing requirements (functional and non-functional)
  • Designing the technical solution for data, including data ingestion, integration, transformation, and storage
  • Helping design data management and data governance implementations
  • Developing and recommending modern architecture components in support of data catalogs, data governance, data pipelines, and data storage for consumption by data analysts and data scientists
  • Implementing the recommended data architecture for a given use case

#IBMReferred_NorthAmerica #SAPFED_23


Required Technical and Professional Expertise

  • At least 5 years of professional experience in data architecture
  • Experience with Data Lakehouse, Data Fabric, and Data Mesh concepts
  • Solution architecture with a focus on Big Data: requirements capture, conceptual and contextual architectures, technology selection, detailed design, and planning
  • Data Fabric/Mesh/Lakehouse knowledge or other data implementation experience
  • Strong knowledge of ETL methods and tools
  • Experience with data architecture, data modeling, schema design, and development
  • Good understanding of at least one coding language (e.g., Python or Scala)
  • Knowledge of Apache Spark and/or Databricks
  • Knowledge and experience working with SQL and databases
  • Knowledge and experience working with Kafka
  • Experience and knowledge working with AWS or Azure (certified practitioner preferred)
  • Excellent communication skills with the ability to convey complex, technical information in an understandable manner
  • Ability to logically troubleshoot issues, determine root causes, and present suggested solutions clearly and concisely
  • Ability to handle competing priorities flexibly and address each in an effective and timely manner in a fast-paced working environment
  • Must be a US Citizen OR a Green Card Holder


Preferred Technical and Professional Expertise

  • Experience with Public Sector Clients (US Federal, Higher Education, State and Local Governments)
  • Experience and knowledge working with Data Catalogs (e.g., Watson Knowledge Catalog, Collibra, Apache Atlas)
  • Experience working with Data Warehouse technologies (e.g., Redshift, Snowflake)