I had a very interesting catastrophe a little while back. While working with a customer deploying into production, we broke email for a number of senior VIPs. How? We defaulted the mDBUseDefaults attribute. How does that happen? The state-based nature of FIM is awesome and powerful, but it has far-reaching ramifications for downstream systems, especially as the organisation gets larger and the project/programme has smaller, or less direct, visibility of those downstream systems.
Case in point: we have a drop-down list in the FIM Portal that allows a requestor to select the size of a mailbox: normal (the default), extended, large and gargantuan (these aren't the real labels, but you get the point). This makes sense. Behind the scenes, an action workflow runs when the drop-down value is updated and sets integer values that map to the actual AD DS attributes mDBStorageQuota, mDBOverQuotaLimit and mDBOverHardQuotaLimit. Unfortunately, it turns out there is also an advanced flow rule on the authoritative database MA that works out whether or not mDBUseDefaults should be set (the FIM Service generally solely contributes this value, but we have to manage it during on-boarding). This rule checks the aforementioned quota attributes and (here's the key point) sets mDBUseDefaults to TRUE or FALSE depending on whether the values in the quota attributes map to our normal, extended, large and gargantuan configurations. See the issue?
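To make the failure mode concrete, here's a minimal Python sketch of one plausible reading of such a flow rule. The tier values, names and function are invented for illustration; the real rule lives in the MA's rules extension, and the real corporate tier values differ.

```python
# Hypothetical non-default quota tiers, expressed in KB as
# (mDBStorageQuota, mDBOverQuotaLimit, mDBOverHardQuotaLimit).
# These exact values are invented for illustration.
NON_DEFAULT_TIERS = {
    "extended":   (2_097_152, 2_359_296, 2_621_440),    # ~2 / 2.25 / 2.5 GB
    "large":      (5_242_880, 5_767_168, 6_291_456),    # ~5 / 5.5 / 6 GB
    "gargantuan": (10_485_760, 11_534_336, 12_582_912), # ~10 / 11 / 12 GB
}

def compute_mdb_use_defaults(quota, warn, hard):
    """If the three quota values match a recognised non-default tier,
    keep the explicit quotas (mDBUseDefaults = FALSE). Anything
    unrecognised falls back to the database defaults (TRUE) -- which
    is exactly what clobbers a hand-tuned, off-tier mailbox."""
    return (quota, warn, hard) not in NON_DEFAULT_TIERS.values()
```

A 9 GB mailbox set by hand (9,437,184 KB) matches no tier, so this logic would flip mDBUseDefaults to TRUE and silently revert the user to the store defaults.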
What happens when the downstream process, or lack thereof, doesn't truly understand the actual attributes and values? For example, mDBStorageQuota et al. are expressed in KB, so a 1 GB mailbox = 1024 * 1024 = 1,048,576 KB. Now, what if the first-line account management/Exchange team expresses this as 1,000,000? Or better yet, what if they have no idea about the actual corporate normal/extended/large/gargantuan values and simply increase them arbitrarily? "Oh sure, your manager agrees you can have a bigger mailbox, I'll increase it to 9 GB for you."
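The unit mismatch is trivial to demonstrate. The sketch below assumes the tier was configured with binary (1024-based) arithmetic, which is how these KB-denominated attributes are normally calculated:

```python
KB_PER_GB = 1024 * 1024  # mDBStorageQuota and friends are expressed in KB

one_gb_binary = 1 * KB_PER_GB  # 1,048,576 KB: the value a tier would expect
one_gb_decimal = 1_000_000     # what an operator thinking "a billion bytes" might type

# The two never compare equal, so a rule matching on exact tier values
# treats the decimal-rounded mailbox as "unrecognised".
print(one_gb_binary == one_gb_decimal)
```

A rule that compares for exact equality against the tier table sees 1,000,000 as just another arbitrary value, with the same consequence as the hand-set 9 GB mailbox.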
Deploying FIM is fun. The actual initial deployment, the turning on, is a high-risk task, often misunderstood and trivialised. How does one truly understand the impact of every attribute change in a corporate environment? My customer is large and disparate. There are thousands of applications, varying from the big ones we know about, like Exchange, SharePoint and SAP, down to Access databases and Excel workbooks full of old macros that we have no idea about.
I remember years ago turning on the Exchange 2000/2003 Active Directory Connector (ADC) to co-exist with Exchange 5.5. That process wrote a load of old phone numbers from the Exchange 5.5 LDAP directory to AD. Logical and fine. Except the VPN solution was storing some required value (I forget the specifics, probably a thumbprint) in the otherPhone attribute! What happens after the ADC is turned on (we turned it on when everyone left for the day on Friday of course)? VPN broke. For everyone.
So while we have to initialise and default values, doing so is a risk, and in many cases an unquantifiable one. In our case we felt we'd performed considerable due diligence: analysing the drop files, staging data changes over multiple nights (to ensure no full offline address book downloads were triggered), and so on. But we missed this one issue. VIPs have exceptions that we didn't grasp…
Deploying an identity management system, especially a state-based one, into production carries massive risk. Early in the process, shy away from the "big bang" deployment approach. Introduce connectors in phases and break user populations down into smaller sets. Avoid intricate dependencies on references such as hierarchy. Go live with provisioning turned off, and activate it only after some time (days for smaller customers, weeks for larger ones) spent running scheduled delta synchronisation profiles and monitoring the results. During on-boarding, work hard to minimise data defaulting. Take what you need and write what you need. Nothing more.