by Scott Scheirey
In between work, bombing ski runs, and taking my dog out to play, I had the opportunity to read Jesse Johnson’s recently released Leading Biotech Data Teams. His message really hit home for me, and if you’re interested in early-stage biotech research, I’d highly recommend giving it a read. Jesse’s observations that ‘perfect is the enemy of good’ and that data-driven biotechs often experience bottlenecks both triggered thoughts that led to this blog post. This is not a full summary of the report but rather a highlight of common trends I see among data teams that create friction and delay progress.
Before I offer my perspective, it will be helpful to review the three main areas Jesse outlines in the report where progress tends to stall.
  • Defining Objectives: Technically, your job is first and foremost to advance your company’s scientific research.
  • Building Collaboration: Wet-bench and data teams need to collaborate for a biotech to succeed.
  • Deploying Tooling: Collaboration matters throughout the process, and perfect is the enemy of good. You need your team’s input to deploy tooling that is actually useful to them.


The Smart Factor
Jesse refers to ‘data teams’ as any mix of individuals specializing in bioinformatics, computational biology, machine learning, data science, software, and data engineering. I find that each individual who makes up a data team is bright, holds advanced degrees, and has dealt with and overcome the trauma and hierarchy drama that often comes with graduate education in STEM. However, when everyone in the room has a different idea of what steps are needed to drive research forward, the room divides. If you’re not on “my” side, you must not be that smart after all.


Role Identification
This is a major issue I notice on many data teams. Team members enjoy learning new things, creating new things, and seeing their efforts turn into something they view as purposeful. The question is: SHOULD they be investing in the side quest? To explore this, here are some common traits data team members share:

  • Tremendous pride in building something cool.
  • Passion for learning.
  • Willingness to say “yes” (and possibly over-commit).
  • Experts in their discipline.
  • Independent and accountable (sometimes at the expense of collaboration).


I’ve encountered the side-quest dynamic many times, often around pipeline building and implementing AWS technologies. For example, a computational biologist is excited to build new pipelines from scratch but has already over-committed to their projects. There is excitement in a challenge that requires additional learning, but it is likely not the avenue that provides the most business or scientific value to the company. This can cause significant friction in teams, and it is why companies leverage technologies, platforms, and tools (e.g., Nextflow) to accelerate this process.
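To make the trade-off concrete, here’s a minimal, hypothetical Python sketch of the plumbing a from-scratch pipeline step tends to accumulate (the bwa mem call, file names, and retry count are all illustrative, not anyone’s real pipeline). Resume checks, retries, and cleanup are exactly the kind of boilerplate a workflow manager like Nextflow provides out of the box, along with parallelism and cloud execution.

```python
# Hypothetical example of a hand-rolled pipeline step: the analysis itself is one
# command, but the resume/retry/cleanup logic around it has to be written by hand.
import subprocess
from pathlib import Path


def align_sample(sample_id: str, fastq: Path, outdir: Path, retries: int = 2) -> Path:
    """Align one sample, including the boilerplate a workflow manager would normally handle."""
    outdir.mkdir(parents=True, exist_ok=True)
    result = outdir / f"{sample_id}.sam"
    if result.exists():                      # crude "resume" support
        return result
    for attempt in range(retries + 1):
        try:
            # The aligner invocation is illustrative; any command-line tool fits here.
            with open(result, "wb") as out:
                subprocess.run(
                    ["bwa", "mem", "reference.fa", str(fastq)],
                    stdout=out,
                    check=True,
                )
            return result
        except subprocess.CalledProcessError:
            result.unlink(missing_ok=True)   # drop partial output before retrying
    raise RuntimeError(f"Alignment failed for {sample_id} after {retries + 1} attempts")
```

Multiply that by every step in a multi-sample workflow and the appeal of an off-the-shelf orchestrator becomes obvious.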

Another example: people building shiny new dashboards that took entirely too long and incorporated too little input from other team members, only for the work to get canned in favor of a much less pretty dashboard that was built quickly, matched the scientific needs of the company, and reflected broader input.

Sometimes the side quests your team is on have been prompted by other scientists who ‘need this data really quick’. In more extreme scenarios, this pushes the data team to explore ways to help less code-friendly folks ‘self-serve’ on their data: building new things in house that the team hopefully likes, or paying for some sort of platform that does it to the best of its ability. Either way, they’re never perfect.


Cloud Architecture
Other scenarios we see at PTP involve AWS cloud architecture. Teams with a scientific focus spend time learning more about AWS, provisioning new resources, and spinning up instances. That can be a useful skill, but it’s not one they plan to make their main focus at the company. It does make the team more self-sufficient with compute, but with a lack of oversight it also drives up costs across the organization.
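As one illustration of that oversight gap, here’s a small, hypothetical sketch (assuming boto3 with AWS credentials already configured; the ‘Owner’ tag convention is an assumption, not a standard) that flags running EC2 instances nobody has claimed, which is often where forgotten spend hides.

```python
# Hypothetical hygiene check: list running EC2 instances with no "Owner" tag,
# a common symptom of ad hoc provisioning that nobody remembers to shut down.
import boto3


def untagged_running_instances(region: str = "us-east-1") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    orphans = []
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                if "Owner" not in tags:
                    orphans.append(instance["InstanceId"])
    return orphans


if __name__ == "__main__":
    for instance_id in untagged_running_instances():
        print(f"Running instance with no Owner tag: {instance_id}")
```

A check like this doesn’t replace a well-designed architecture, but it’s a cheap way to surface the instances that were spun up ‘really quick’ and never turned off.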

Ultimately, the architecture that gets built works, but it has points of failure and costs entirely too much. We’ve started working with companies where we were able to reduce spend by 70% by implementing an appropriate architecture.

It reminds me of my organic chemistry courses, where you’d lose points on a synthesis diagram if it could have been done in fewer, more efficient steps. The reasoning is that industry wants scientists to get results quickly and reduce spend wherever possible.

Reading Leading Biotech Data Teams certainly stirred up thoughts and emotions in me, something a lot of people in our industry share. My belief is that building a successful data team requires people to be laser-focused on their goals, to collaborate effectively with different team members, and to remember that perfect is the enemy of good when deploying tooling (and in a lot of other scenarios). Get that prototype out there, perfect it, and best of luck on the path to IND!

How can PTP help with CloudOps Services?
