
Data Architect / Modeler

  • Job ref: 151190
  • Location: Atlanta
  • Sector: Information Technology
  • Contact: Patrick Dyer
  • Published: 6 months ago
  • Consultant: Patrick Dyer

We are an international company actively looking for an Enterprise Data Modeler / Architect to join our team on a permanent basis. You will be responsible for the design, development, and expansion of enterprise data models that support business requirements. Additionally, you will develop, maintain, and support an enterprise-wide data model that shows entity relationships across business functions and applications. A successful candidate will have the experience to understand source system data models, profile source data, and identify potential gaps. This is an exciting opportunity with high visibility to senior leadership and a direct impact on our business.

We offer a complete benefits package including medical, dental, vision, 401k, and PTO.

Requirements:

  • 10+ years of experience in data analytics applications such as EDW, BI, analytics, or MDM
  • 5+ years of in-depth experience in data modeling, dimensional modeling, Hadoop architecture, and analytics required
  • 3+ years of experience in the following technologies required:
    • MS Azure Data Lake Storage (ADLS), HDInsight, Databricks, or equivalent technology
    • MS Power BI or equivalent BI reporting technology
    • Informatica suite of products such as BDM, IDQ, MDM, and IICS
    • erwin Data Modeler or equivalent data modeling technology
  • Experience with at least two end-to-end implementation projects required
  • Experience creating a business metadata repository and data catalog, using erwin to create ERDs, and designing and working with complex data models required
  • Deep knowledge of dimensional, transactional, and operational modeling required
  • Must be an analytical thinker who can translate business rules and information/data content needs into high-performance data models
  • Must be comfortable in multi-terabyte production environments and highly proficient with large data sets and clusters
