In the last post, Data Damnation – how do I get message across that there is a problem?, I explored how to get buy-in for proposals to implement data quality management solutions for client-facing product data.
The next step is to focus on what to do once you have the approval to start making changes…
So where do you start? Data governance is a likely candidate!
Governance attracts a lot of confused commentary – in particular, there seems to be little consensus on where governance ends and stewardship begins.
In my own simplified view of the world, I consider governance to be the specification of the standards (or processes), together with the structures required to oversee their application.
Stewardship is the application of that governance. It is imperative that you have both strong governance and strong stewardship – stewardship is the walk to the governance talk!
If you do not have a well-defined data governance structure, you will need to get one up and running. Easier said than done, I hear you say – true, but you are setting yourself up for failure if you do not have a structure in place.
Many organizations today have data quality management steering committees with a broad spectrum of ‘interested’ parties (hopefully senior executives) involved. Other organizations opt for the slightly more autocratic approach of appointing a Chief Data Officer, or “Data Czar”. As I have mentioned in earlier posts, the culture of each organization will naturally lend itself to one of these approaches. There is no right or wrong way to do this, only what is right for your own organization.
If you have your ‘top-down’ management in place, you now need to start looking at the bottom-up side of the equation. Effective governance of data quality demands that you deal with data quality at the earliest possible point where your organization has direct control and/or influence over the data, i.e. as close to the source of the data as possible.
To this end, data stewardship is a critical aspect of any governance structure. Data Stewards need to be identified for EACH piece of data to which the governance applies – here we are talking about any data that forms part of your investment product data set, e.g. the product master for your retail funds or separately managed accounts. The Data Steward should be tasked with taking ownership of the data, with full responsibility for the accuracy, timeliness and consistency (and security where applicable) of that data.

Some ‘sources’ for an investment manager’s product master will naturally lie outside the organization, e.g. third-party rating providers like Lipper or Morningstar, or back-office service providers who generate the daily NAV. Who the Data Steward should be in these scenarios really depends on your relationship with that third party. If you have a strictly client/supplier relationship, you will find it difficult to get the supplier/service provider to take on the stewardship role, in which case you will need to appoint an internal steward to liaise with or monitor that source directly. If you have a more partnership-type relationship, then this should not be such a struggle.
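To make the one-steward-per-data-element rule concrete, here is a minimal sketch of a steward registry. All names, data elements and sources are hypothetical illustrations, not prescribed roles – the point is simply that every governed element has exactly one accountable owner, and an unassigned element is treated as a governance gap rather than silently ignored.

```python
from dataclasses import dataclass

@dataclass
class Steward:
    owner: str       # hypothetical accountable party (person or team)
    source: str      # where the data element originates
    external: bool   # True when the source lies outside the organization

# Hypothetical product-master registry: one steward per governed data element.
# External sources (e.g. a rating provider or back-office NAV service) still
# get an INTERNAL steward who liaises with or monitors that source.
registry = {
    "fund_name":     Steward("Product Team",         "internal",     False),
    "daily_nav":     Steward("Ops Liaison",          "back-office",  True),
    "lipper_rating": Steward("Data Management Team", "Lipper",       True),
}

def steward_for(element: str) -> Steward:
    """Look up the accountable steward; fail loudly if none is assigned."""
    if element not in registry:
        raise KeyError(f"No steward assigned for '{element}' - governance gap")
    return registry[element]
```

Failing loudly on an unassigned element is deliberate: it surfaces gaps in the governance structure instead of letting unowned data drift.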
The Data Steward is where the buck stops. Their roles and responsibilities need to clearly define what is expected of them. Again, this is easier said than done, so strong leadership is required, and the ‘selling’ hat needs to be donned to bring all of the process actors on board – this is where the top-down meets the bottom-up approach!
So now that you have decentralized your “ownership”, you will need to centralize your “oversight”. For efficient process management, it is critical that there is transparency and accountability, with multiple tiers of oversight to ensure the process is working as expected. Clear MIS is needed, and SMARTER (see http://en.wikipedia.org/wiki/SMART_criteria) objectives and targets need to be defined.
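The oversight side can be sketched just as simply: compare measured data quality metrics against the defined targets and escalate the misses. The metric names and threshold values below are hypothetical examples, not recommended SLAs – the shape of the report is what matters.

```python
# Hypothetical SMART-style targets for the product master process.
targets = {
    "accuracy_pct":   99.5,   # % of records passing validation checks
    "timeliness_pct": 98.0,   # % of records delivered by the SLA deadline
}

# Hypothetical figures measured over the current reporting period.
measured = {
    "accuracy_pct":   99.7,
    "timeliness_pct": 96.2,
}

def oversight_report(measured, targets):
    """Return {metric: (measured, target)} for every metric missing its target."""
    return {
        metric: (measured[metric], target)
        for metric, target in targets.items()
        if measured[metric] < target
    }

breaches = oversight_report(measured, targets)
# Only the breaches reach the steering committee; metrics on target stay off the agenda.
```

Keeping the report down to breaches only is one way of giving the steering committee the transparency it needs without burying it in routine numbers.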
Technology can play a supporting role here! Remember, in previous posts I discussed that the use of technology needs to be carefully considered: technology is not a panacea for all data quality ills. It should be used to empower people to apply the process, i.e. it should structure and frame the process, not be the process.
In my next post I will focus on the broader issues around process redesign, and how to move from the current state (likely a ‘just-in-time’ data management model) to a target operating model that delivers on the expectations of your business: timely, accurate and consistent product data for your clients…