Excel Ninjas & Digital Alchemists – Delivering success in Data Science in FS

In February, 150+ data practitioners from financial institutions, FinTechs, academia, and professional services joined the Leading Point Data Kitchen community to discuss the meaning and evolving role of Data Science within Financial Services. Many braved the cold, wet weather and made it across for a highly productive session, interspersed with good pizza and drinks.

Our expert panellists discussed the “wild” data environment in Financial Services inhabited by “Excel Ninjas”, “Data Wranglers” and “Digital Alchemists”. They agreed that, although the current state of the art is hindered by legacy infrastructure and data silos, there are a number of ways to find success.

Here is the Data Kitchen’s ‘Recipe’ for delivering success in Data Science in Financial Services:

1. Delivery is key – There is a balance to strike between experimentation and delivery. In commercial environments, and especially within financial services, there is a cost of failure. ROI will always be in the minds of senior management, and practitioners need to understand this. It means that data science initiatives will always be under pressure to perform, and there will be limits on the freedom to simply experiment with the data.

2. Understand how to integrate with the business – Understanding what ‘good’ delivery looks like for data science initiatives requires an appreciation of how the business operates and what business problem needs to be solved. Alongside elements of business analysis, a core skill for practitioners is knowing how to ‘blend in’ with the rest of the business – this is essential for communicating how they can help and for setting expectations. In response, “data translator” roles are emerging in businesses.

3. Soft skills are important – Without clear articulation of strategy and approach, in language they can understand, executives will often either expect ‘magic’ or be too nervous to fully invest. Without a conduit between management and practitioners, many initiatives will be under-resourced or, possibly worse, significantly over-resourced. Core competencies in stakeholder management, expectation management, and project management are needed from data practitioners, and support in these areas needs to be made available to them.

4. Take a product mindset – Successful data science projects should be treated much like developing an app. Creating it and putting it on the ‘shelf’ is only the beginning of the journey. Next comes marketing, promotion, maintenance, and updates. Many firms apply rigorous data quality, governance, and similar controls to client-facing products, but won’t apply them internally. Many of the same metrics used for external products are also applicable internally, e.g. number of active users, adoption rates, etc. Data science projects are only truly successful when everyone is using them the way they were intended.

5. Start small and with the end in mind – Some practitioners find success with ‘mini-contracts’ with the business to define scope and, later, prove that value was delivered on a project. This builds a delivery mindset and creates a clear value exchange.

6. Conduct feasibility assessments (and learn from them) – Feasibility criteria need to be defined that take into account the realities of the business environment, such as:

  • Does the data needed exist?
  • Is the data available and accessible?
  • Is management actively engaged?
  • Are the technology teams available in the correct time windows?

If you run through these steps, even if you don’t follow through with a project, you have learned something – that learning needs to be recorded and communicated for future use. Lessons from 100+ use cases of data science in financial services and enterprises suggest that implementing toll-gates with entry and exit criteria is becoming a more mature practice in organisations.
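To make the idea concrete, here is a minimal sketch in Python of how a feasibility toll-gate could be recorded so that the outcome of an assessment, pass or fail, is kept for future use cases. The class and criteria names are purely illustrative assumptions, not a reference to any specific tool from the session.

```python
from dataclasses import dataclass, field

@dataclass
class FeasibilityGate:
    """Entry toll-gate for a candidate data science use case (illustrative)."""
    use_case: str
    criteria: dict = field(default_factory=dict)  # criterion -> met?
    notes: str = ""

    def record(self, criterion: str, met: bool) -> None:
        self.criteria[criterion] = met

    def passes(self) -> bool:
        # All entry criteria must hold before the project proceeds.
        return bool(self.criteria) and all(self.criteria.values())

gate = FeasibilityGate("client churn model")
gate.record("data exists", True)
gate.record("data available and accessible", False)
gate.record("management actively engaged", True)
gate.record("technology teams available in the right window", True)

if not gate.passes():
    # Even a 'no' is a learning: record it for the next use case.
    gate.notes = "Key data locked in a legacy warehouse; revisit after migration."

print(gate.passes(), "|", gate.notes)
```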

7. Avoid perfection – Sometimes ‘good’ is ‘good enough’. You can ‘haircut’ a lot of data and still achieve good outcomes. A lot of business data, while certainly not perfect, is being actively used by the business – glaring errors will already have been fixed or caught by two or three existing filters. You don’t always need to recheck the data.
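As a minimal sketch of the ‘haircut’ idea, assuming a pandas DataFrame of trade-like records (the column names and values are invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    "trade_id": [1, 2, 3, 4, 5],
    "notional": [1_000_000, -50, 250_000, None, 750_000],
    "currency": ["GBP", "GBP", "usd", "EUR", "GBP"],
})

# Haircut obviously bad rows rather than perfecting every field upfront.
clean = (
    df.dropna(subset=["notional"])   # drop rows missing a key field
      .query("notional > 0")         # drop clearly invalid values
      .assign(currency=lambda d: d["currency"].str.upper())  # cheap normalisation
)

# Three of five rows survive the haircut: often enough for a first useful result.
print(clean)
```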

8. Doesn’t always need to be ‘wrangled’ – Data scientists spend up to 80% of their time on ‘data cleaning’ in preparation for analysis, but there are now many data cleansing tools in the market that really work and can save a lot of time (e.g. Trifacta). Enterprises will often have legacy environments and be challenged to connect the dots. They need to look at the data basics – an end-to-end data management process, with the right tools for ingestion, normalisation, analysis, distribution, and embedding outputs as part of improving a business process or delivering insights.
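As a sketch of what those data basics might look like end to end, here is the ingest, normalise, analyse, distribute flow as plain Python functions. The file names and columns are hypothetical, and a real pipeline would sit on proper tooling rather than local CSVs.

```python
import pandas as pd

def ingest(path: str) -> pd.DataFrame:
    # Ingestion: pull raw records from a source system (a CSV here for brevity).
    return pd.read_csv(path)

def normalise(df: pd.DataFrame) -> pd.DataFrame:
    # Normalisation: consistent column names and parsed, valid dates.
    df = df.copy()
    df.columns = [c.strip().lower() for c in df.columns]
    df["trade_date"] = pd.to_datetime(df["trade_date"], errors="coerce")
    return df.dropna(subset=["trade_date"])

def analyse(df: pd.DataFrame) -> pd.DataFrame:
    # Analysis: a simple aggregate the business can act on.
    return df.groupby("desk", as_index=False)["notional"].sum()

def distribute(df: pd.DataFrame, path: str) -> None:
    # Distribution: embed the output where the business process consumes it.
    df.to_csv(path, index=False)

distribute(analyse(normalise(ingest("trades.csv"))), "desk_exposure.csv")
```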

Our chefs believe Data Science will evolve positively as a discipline over the next three years, with more clarity on data roles, a better qualification process for data science projects, application of knowledge graphs, better education and cross-pollination between business and data science practitioners, and more measurable outcomes. The lessons from failures are key to making the leap to data-savvy businesses.

Just a quick note to say thank you for your interest in The Data Kitchen!

We had an excellent turnout of practitioners from organisations including: Deutsche Bank, JPMorgan, HSBC, Schroders, Allianz Global Investors, American Express, Capgemini, University of East London, Inmarsat, One corp, Transbank, BMO, IHS Markit, GFT, Octopus Investments, Queen Mary University, and more.

And another Thank You to our wonderful panellists!

  • Peter Krishnan, JP Morgan
  • Ben Ludford, Efficio
  • Louise Maynard-Atem, Experian
  • Jacobus Geluk, Agnos.ai

…And Maître D’ – Rajen Madan, Leading Point FM
We would like to thank our chefs again, and all participants, for sharing plenty of ideas on future topics, games, and live solutions.