Spotlight On: Marie Erwood

Reigniting our Spotlight series, we sat down with Fabacus data scientist Marie Erwood. A curious-minded problem solver and storyteller, Marie has experience in data science spanning academic medical research, charity marketing and working with big data in the technology sector.

We discussed the fundamentals of data science, its evolution, common challenges in embedding insightful data-driven strategies into businesses, and how data science intersects with technology, licensing and events…

Hi Marie, thanks for sitting down and speaking with us about yourself, your background and demystifying the world of data science.

Firstly, can you tell us a bit about yourself, your background and your experience? How did you get into the field of data science?

Sure, so I’ve been a data scientist for about six years now. I started out in academic medical research at the University of Cambridge as a Research Study Coordinator, and at the time there was a team doing analysis on genetic data, pulling out insights and discussing what the data meant for next steps, and I thought, “I want to be able to do analysis like that and learn how to do it.” So I did! I went straight into researching the field and took multiple courses, including one at the University of Cambridge.

As well as continuing to learn and train independently, I then moved to University College London for a similar Data Manager role. It was there I decided to do a Masters in Data Science, which I studied at Birkbeck in the evenings, whilst working full time at UCL in the day. Which was pretty full on!

While I was finishing my MSc I moved out of the world of academic research and into working at Save the Children as part of their Marketing and Fundraising team. I was there for a couple of years, driving campaigns through insight, and then landed in the technology sector working at ThinkData Works for a couple of years. That was certainly different – I honed a number of technology skills working with really big data. By that I mean the scale and amount of data – for example, dealing with hundreds of terabytes of data day to day.

Following this experience I then moved to Fabacus this summer to bring my expertise to the world of brands & licensing.

Amazing, such a varied background and clearly an incredibly honed skill set, applying your knowledge and insight to different sectors and industries. That said, what is it you love about what you do – at the core?

Ultimately, I love solving problems.

For me, being able to solve whatever may be keeping a business or project stuck, move it forward and transform the business through the data output – and then see the impact of what you’ve done and how you’ve helped – is really brilliant and lights me up.

I love being able to follow data from its raw form to something that’s actually insightful, understandable and business facing. Problem solving also links to storytelling – how can you take messy data, put it through a pipeline and get it to a state where it can tell a story to anyone looking at it? I guess, tangibly, it’s about making a difference – it’s hugely rewarding.

I also love to learn new things and new skills – I’d say that’s a big part of my role within data science. Always learning really keeps things interesting and fresh.

You spoke there about the role, but can you tell us a bit more about what attributes it takes to make a good data scientist?

Talking about learning new things and developing skills, I’d say you need to be curious – super curious. You need to be able to look at data and want to understand it – no matter what kind of data it is or how overwhelming or messy. You need to want to find answers and be curious enough to do so.

That said, you do need a good technical background to be a data scientist – whilst there are some things you can learn, there does need to be a base level of understanding… but you don’t need to already be an award-winning mathematician or anything of that level!

Finally, an attribute I think is often overlooked is communication. Communication skills are super important for getting insights and analytics across to different areas and stakeholders within a business, or externally. These teams aren’t necessarily data savvy, so you need to be able to firstly craft that story and then, importantly, tell it clearly, making it understandable to anyone. This is the key to ensuring the data and insight is actually actionable and usable.

That’s a paramount point, and so key for business transformation and ensuring successful data-driven outcomes. Could you tell us more about the importance of collaboration with other departments, and how you collaborate within Fabacus to ensure that data insights are effectively utilised across the organisation and for clients?

It really is super important. I could make an amazing analysis of the data, but if it’s not useful for the business, its goals or client outcomes, then what’s the point? Equally, what’s the point if people can’t understand it? So the way it’s communicated is key.

This is the same when it comes to data presentation too – something like a dashboard – I can make an amazing one, a beautiful, technical one, with every bell and every whistle, but if it’s not being used what’s the point… It needs to be aligned with understanding the ‘why’.

In terms of process, whenever I start a new project I firstly ask who it is for, and make sure everyone is on the same page as to what’s needed. This helps to make sure the data tells the story and delivers the outcome that’s needed.

From the other side, when collaborating with me, I need to know what the business problems are and how they see me helping to solve them. That way I can action something that is actually going to help our clients and the business to move forward.

In line with data processes being as useful as possible in driving the business forward: in your experience, is there one main pain point you’ve consistently seen occur when it comes to embedding data science at the heart of a business? Can you talk us through some of the common challenges you’ve faced?

I’d say there are a couple – one being down to the data itself.

It’s usually incredibly messy, sitting in different places throughout a business and in loads of different formats – or potentially not even being gathered in the first place, so there’s nothing to work with! The first challenge is understanding the data you have and then transforming it into a single format, so you can do something meaningful with it.
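As a purely illustrative sketch of that consolidation step (the sources, column names and schema below are hypothetical examples, not Fabacus’s actual data), pulling differently shaped data into one canonical format can look something like this in pandas:

```python
import pandas as pd

# Two hypothetical sources holding the same kind of sales data in different shapes –
# in reality these might be CSV exports, API dumps or spreadsheets scattered around a business.
retail_export = pd.DataFrame({
    "sku": ["A1", "B2"],
    "units": [120, 45],
    "sale_date": ["2023-10-01", "2023-10-02"],       # ISO dates
})
webstore_export = pd.DataFrame({
    "product_id": ["A1", "C3"],
    "qty": [30, 12],
    "date": ["01/10/2023", "03/10/2023"],            # day-first dates
})

# Map each source onto one agreed, canonical schema.
CANONICAL = ["sku", "units_sold", "sold_on"]
retail = retail_export.rename(columns={"units": "units_sold", "sale_date": "sold_on"})
webstore = webstore_export.rename(columns={"product_id": "sku", "qty": "units_sold", "date": "sold_on"})

# Normalise types per source before combining, since the raw formats differ.
retail["sold_on"] = pd.to_datetime(retail["sold_on"])
webstore["sold_on"] = pd.to_datetime(webstore["sold_on"], dayfirst=True)

# One table, one format – now there is something meaningful to analyse.
combined = pd.concat([retail[CANONICAL], webstore[CANONICAL]], ignore_index=True)
print(combined.groupby("sku")["units_sold"].sum())
```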

The amount of data is one thing, but the quality of the data is the main thing – whether it’s going to be fit for purpose for the outcome the business is wanting. Sometimes it’s just not there, and you’ve got to start somewhere in order to then implement the strategies that are needed.

Outside of the data itself, the other would probably be business buy-in.

Now this could go one of two ways. Either you have people within a business who are just not interested in data at all – be that because they don’t understand it or don’t care! So it becomes a challenge to really implement insights & strategy and be truly data-driven if that is the prevailing viewpoint.

Equally, it can also be a challenge if people are really interested in data and want to do so much (which is great!) BUT the existing data just isn’t of the quality or level needed to do what they want. That’s actually pretty common, and it can be a pain point to have to communicate and manage.

You spoke earlier about your love for learning and honing your skills; in an area that is constantly evolving, and with data and technology becoming so ingrained in businesses every day – how do you ensure you are continuously learning and developing in your role & field?

I really do make a point to look for new courses and stay up-skilled.

Sometimes it’s even just following different blogs and the social media accounts of influential data scientists I look up to and respect, which helps me find out about changes and learn and absorb new things. It can also come through general networking – keeping up with people I’ve previously worked with and what they’re doing and seeing.

In London there are good networks and meet ups within the data science community – and I think it’s so important to be involved and keep up with what’s going on as there are so many developments and evolutions happening at the moment.

Most definitely, machine learning as a whole is certainly a hot topic on the news agenda. What would you say is the biggest evolution you’ve seen play out during your time within data science? And which emerging technologies do you think have the potential to revolutionise the field of data science in the near future?

Over the past few years I’d say that big data has become much more widespread. As I mentioned at the beginning, by that I mean the sheer size and availability of it. I gave the example of working with hundreds of terabytes of data – that would not have been the case at all in a commercial business a few years ago! Tons of data is being generated, and the means to store and process it, and the tools to analyse it, are far more prevalent – so you can do interesting things with it and handle that amount of data in a proactive and insightful way!

Part of that is of course machine learning – namely the computational power to run deep learning and other computationally intensive analyses that really only AI is going to be able to do. That’s where the evolution is really exciting, and revolutionary as such – the sheer computational power that machines can add and the opportunities that opens up for us, both within business but also academically, and for society and learning as a whole.

When you take things like ChatGPT – whilst they are interesting technology, in my opinion they are just not there yet in terms of fully widespread, day-to-day, helpful use for anyone and everyone. That said, the evolution in such a short space of time is super impressive, and you can see where it is going in terms of more day-to-day usefulness – and also in getting people to understand data science and insight further: how things work and what the outcomes can be.

Speaking about the future and how technology is aiding all fields in their evolution, how do you feel data science intersects with technology and licensing in the context of Fabacus’s operations? And how does data play a role in the future of licensing as an industry?

One of the things I’ve noticed coming into the industry from my experience elsewhere is that licensing appears to be less data-driven as it stands – perhaps simply due to the sheer number of stakeholders and partners involved, and different data sets living with different people. But things are certainly changing, particularly with what we’re driving at Fabacus and looking to achieve.

Recently I’ve been working on our dashboards for licensees and licensors on the Xelacore platform, to help tell the data story more visually as well as make the data more commercially usable.

I’m excited to help change the status quo and aid in providing data that is useful and easy to understand, to ensure more data-driven decisions and strategies for the industry and clients as a whole. And this isn’t just limited to licensing clients – it goes for any business and sector.

I also think the supply chain data and our work around Digital Product Passports is so interesting and integral to the future of licensing and retail – not just for businesses to be collecting and reporting that data, but for consumers to see too. This is where the storytelling element of data analytics really comes into play as well. How can we make this data digestible, visual and interesting to consumers, so they engage and want to keep re-engaging to know more about the products they’re buying?

Really interesting – and it shows how widespread the foundation of data will be across B2B and B2C channels. You spoke about dashboards and the creation of those models; what specific data science techniques or methods are you predominantly looking to roll out and employ at Fabacus, and how do you assess their performance?

Well, I like to start simple! There are tons of different machine learning models, and the right one will depend on the outcome and what you’re wanting to know. Starting with a simple model gives you a foundation to work from before fine-tuning and adding complexity: you can see how well it predicts, which gives you a baseline level of accuracy, and then you can go more complex and look at the data points you’re putting into it.
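As a rough illustration of that baseline-first approach (using placeholder data and scikit-learn, not any specific Fabacus model), the idea is simply to measure a deliberately simple model before trusting a more complex one:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Placeholder data standing in for whatever the real project provides.
X, y = make_regression(n_samples=1_000, n_features=10, noise=10.0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Step 1: a deliberately simple model gives a baseline level of accuracy to beat.
baseline = LinearRegression().fit(X_train, y_train)
baseline_mae = mean_absolute_error(y_test, baseline.predict(X_test))

# Step 2: only then add complexity, and check it genuinely improves on the baseline.
boosted = GradientBoostingRegressor(random_state=42).fit(X_train, y_train)
boosted_mae = mean_absolute_error(y_test, boosted.predict(X_test))

print(f"baseline MAE: {baseline_mae:.2f}, gradient boosting MAE: {boosted_mae:.2f}")
```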

Another thing that will have an impact on the methods and the assessment is the outcome! As before, that means speaking to all departments, collaborating both internally and with external partners, and truly understanding what they want and what they’re using the data or dashboards for, as this will shape what is added and how they’re created. That step is key.

For evaluation, there are various metrics you could use to evaluate models. The main thing I would say is to optimise and remain agile by monitoring, tweaking and retraining the model over time based on new data and new inputs, to make sure the model is actually working correctly over time. It’s the same as any area within business – looking at objectives, keeping nimble and making sure that evaluations are taking place. This will only aid more learning and development.
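To sketch what that monitor-and-retrain loop might look like in practice (the metric, threshold and class below are illustrative assumptions, not a description of Fabacus’s pipeline):

```python
from dataclasses import dataclass
from typing import Any

from sklearn.metrics import mean_absolute_error


@dataclass
class MonitoredModel:
    """Wraps any estimator with .fit()/.predict() and retrains it when its error drifts."""

    model: Any                  # e.g. a fitted scikit-learn regressor
    reference_mae: float        # error measured when the model was last (re)trained
    tolerance: float = 1.25     # retrain once error is 25% worse than the reference

    def check_and_retrain(self, X_new, y_new) -> str:
        # Evaluate the existing model on fresh data as it arrives.
        current_mae = mean_absolute_error(y_new, self.model.predict(X_new))
        if current_mae > self.reference_mae * self.tolerance:
            # Performance has drifted, so refit on the new data and reset the reference.
            self.model.fit(X_new, y_new)
            self.reference_mae = mean_absolute_error(y_new, self.model.predict(X_new))
            return "retrained"
        return "ok"
```

In practice the choice of metric, the drift threshold and the retraining policy would all come out of the conversations with stakeholders described above.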

Exactly. Thank you so much for your time and insight Marie, a really thought-provoking and interesting discussion!

To hear more from Marie or understand more about Fabacus’ data-science driven capabilities, get in touch at info@fabacus.com
