Artem:
Depends on your end goal. If you wish to follow a "conventional" path of publishing papers, writing grant proposals, and moving up the ladder, it's almost necessary to join a lab at some point, the sooner the better. There are many research groups that routinely collect experimental data but lack people who could do the data analysis. Luckily, being a computational neuroscientist also means that you have the capacity to work remotely.
If you are learning for other purposes, such as personal interest or science communication, the next steps will vary drastically.
Cesar:
The answer depends on the rabbit hole you find yourself in. My general advice is to keep up to date with the latest research on your topic of interest. You can set up email alerts from Google Scholar for certain keywords. Expanding your Twitter network is also a good way to stay on the cutting edge, because people will often post good summaries of their latest publication as tweet-prints.
What are the main differences between working in academia and working in industry?
Cesar:
Two big differences come to mind:
1. In academic research, you are usually involved in the whole lifetime of a project, from cradle to grave. In industry, you are often dropped into work that has been ongoing for months or years. This requires you to quickly make sense of the work a team has already undertaken and the tools they have built so that you can contribute effectively.
2. Work in industry is much more collaborative and team-based. In academia, you own a research project and maybe pull in people for help when their expertise in a specific method is required. If no one in the lab has the required knowledge, you have to figure out how and where to learn the necessary skills. In industry, you own a small piece of the research project and are in constant communication with people working on other parts of it. This allows for a faster pace of research. Additionally, a company has people with a wide range of skills, which means you have access to guidance on topics outside your area of expertise and can pick up knowledge and skills more quickly, or bypass the need to spend time learning altogether.
Do I need a degree to get a job in computational neuroscience? Are there many jobs? |
Cesar:
Computational neuroscience is a highly interdisciplinary field. You don't need a degree in computational neuroscience to get a job doing computational neuroscience. If you have a strong mathematical or engineering background and some knowledge of biology, you are an attractive candidate. If you are currently a student, I would strongly recommend finding and pursuing an industry internship, if that's something that interests you. It will give you a leading edge as an applicant, though it's not a necessity.
There are relatively few jobs doing computational neuroscience. However, there are two reasons why a background in computational neuroscience positively impacts your job prospects:
1. The field of neurotechnology is growing, and computational neuroscience is the backbone of the industry, so the number and variety of jobs available will also increase.
2. The skills you develop in computational neuroscience are relevant to the analysis and interpretation of other physiological and sensor data. The wearables industry is at a more mature stage, and the jobs available there tackle very interesting, high-impact problems. For example, Verily recently released a study showing that they can track the progression of Parkinson's disease using wearable data.
If you could give only a single piece of advice to someone newly interested in computational neuroscience, what would it be?
Artem:
Don't just sit and passively learn about computational neuroscience. Actually DO computational neuroscience. Our brains subconsciously shy away from hard work. It is very tempting to just cruise through textbooks and papers, absorbing every bit of knowledge. You feel like you're making progress, but that progress is mostly illusory.
Without direct practice, it's impossible to advance. Go get your hands dirty. Spend hours figuring out how to do something on your own, resisting the temptation to look up the solution or copy-paste someone else's code. It's better to come up with one clumsy, inefficient script written on your own than to read dozens of papers on a similar topic.
Cesar:
Stay curious. Don't be afraid to delve into a new topic. If something piques your interest, take the time to learn it.
Authored and designed by David Smehlik, edited by Chelsea Lord, Lina Cortéz, and Sophie Valentine. |
David has worked in tech as a product designer with a background in molecular biology. Now he's looking to transition to neuroscience or neurotech in one way or another. |
Chelsea works in clinical operations at a company that is a leader in brain-computer interfaces. She studied neuroscience and is interested in the AI and machine-learning side of neurotechnology.
Lina is an electronics engineer and neurotechnology enthusiast who is highly interested in the application of brain-computer interfaces to robotics control.
Sophie is a self-starter with a passion for using technology to benefit those who need it most, particularly in neurotechnology and mental health.
In the past decade a small handful of "digital biotech" companies have drawn on ideas, technology, and talent from the software industry to build valuable data platforms for accelerating drug development. While most biotech founders say they want data and ML to be a pillar of their strategy, many have no software experience and therefore struggle to build a digital-first company. |
In my career working at several early-stage digital biotechs, I've seen recurring patterns arise and have watched how early decisions shape a company's later trajectory. Each company is unique, but there are some foundations and rules of thumb for digital biotech platforms that can be applied generally.
My goal is to provide a few concrete steps to follow, but also to highlight important cultural, technical, and organizational decisions that should be considered very early in the life of any digital biotech company. I'll also propose novel ideas that I think could be successful. |
I hope this can be helpful to others in the industry starting their own digital biotech companies. The more we can infuse software and data practices into drug discovery, the sooner we can solve the industry's problems of poor success rates and poor reproducibility, and the better we'll be at curing disease.
What is a digital biotech? |
The term "digital biotech" was popularized by Stéphane Bancel, CEO of Moderna, who described Moderna as "the first digital biotech" and laid out his ideas in a white paper and a 2017 blog post; Moderna's platform has since been the subject of a Harvard Business School case study. Stéphane recognized that mRNA was similar to software, in that it is a set of coded instructions to the cell that can be reprogrammed to produce any protein drug, and he built Moderna as a software company from the start. Since then, most industries have recognized the need for a heavy software and data focus, and most companies are scrambling to hire tech talent and implement a digital transformation. Biotech and pharmaceutical companies in particular have invested heavily in data infrastructure in recent years.
Despite the recent investment, most life sciences companies remain a decade behind the tech world with regard to data management, IT systems, and operational styles. Companies are held back somewhat by the regulated nature of our industry but mostly by a slow-moving, conservative culture inherited from academia and big pharma. In my opinion, "digital biotech" is synonymous with "digitally native" because cultural inertia and legacy systems make it impossible to retrofit modern data practices into an established company. I believe the best and only approach to true, full digitization in biotech is to start fresh with a startup. |
Digitization is much more than simply converting from paper notebooks to documents on a computer, and goes far beyond electronic LIMS and ELN or computational biology pipelines. As a few examples, digital biotech companies:
* Integrate in-house and third-party software into a unified platform with defined services, a managed data model, and customized user interfaces
* Store all production data in databases that are easily findable, accessible, interoperable, and reusable (the FAIR data principles)
* Heavily automate digital workflows via modern web-based applications |
* Provide computational teams with REST APIs or clients in a modern programming language to access data and services (see the client sketch after this list)
* Incorporate computational biology and machine learning in the loop of experimental design and analysis, and |
* Invest heavily in robotic lab automation controlled by the digital platform. |
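To make the API point concrete, here is a minimal sketch of what a thin Python client for an in-house LIMS might look like. The base URL, endpoint paths, and field names are hypothetical assumptions for illustration, not any real product's API.

```python
# Minimal sketch of a client for a hypothetical in-house LIMS REST API.
import requests

class LimsClient:
    """Thin wrapper over an assumed REST API for the company LIMS."""

    def __init__(self, base_url: str, api_token: str):
        self.base_url = base_url.rstrip("/")
        self.session = requests.Session()
        self.session.headers["Authorization"] = f"Bearer {api_token}"

    def get_samples(self, cell_line: str) -> list[dict]:
        # Assumed endpoint: GET /samples?cell_line=... returns JSON records.
        resp = self.session.get(f"{self.base_url}/samples",
                                params={"cell_line": cell_line})
        resp.raise_for_status()
        return resp.json()

    def register_result(self, sample_id: str, assay: str, value: float) -> dict:
        # Assumed endpoint: POST /results writes a structured assay readout.
        resp = self.session.post(f"{self.base_url}/results",
                                 json={"sample_id": sample_id,
                                       "assay": assay,
                                       "value": value})
        resp.raise_for_status()
        return resp.json()

# Usage, assuming a deployed LIMS at a hypothetical URL:
# lims = LimsClient("https://lims.example.com/api/v1", api_token="...")
# samples = lims.get_samples(cell_line="HEK293")
```

A stable client like this is what lets computational teams script against the platform instead of exporting spreadsheets.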
As Moderna's white paper lays out, digitization improves quality and reduces cost. Furthermore, while many companies aspire to apply artificial intelligence, few have the AI-ready digital infrastructure required to extract clean, harmonized datasets for training machine learning models at scale.
Only a handful of biotech startups are truly digitally native. These companies often resemble Silicon Valley tech startups in the proportion and influence of software engineers, their hacker culture, and their reliance on purpose-built data infrastructure. However, this breed of startup is becoming more common, and I expect it will eventually dominate most life sciences markets.
Do you need a digital platform? |
First, if your strategy is to rely on AI, you will need a custom digital infrastructure to enable it. But the good news is that just having a digital infrastructure will be such a massive competitive advantage that it may not matter how good your AI is. |
Digital biotechs only make sense for platform companies, which intend to build a novel, highly generalizable capability to produce many different products. A custom-built data infrastructure is probably not worth the investment for biotechs with one or a few assets. If you already have a lead molecule and a straightforward path to the clinic, you probably aren't, and don't need to be, a digital biotech. Likewise, companies that rely on slow, manual techniques that don't scale, such as animal models, may have more typical data management needs that can be outsourced to vendors or CROs. If your assays are unwieldy and data generation is difficult, this path is probably not for you.
Another case where a digital platform might not be worth the investment is when the R&D approaches are very mature, for example if the goal is to screen and optimize a proprietary small molecule library for a well-known target. Tools for small molecule compound registration and structure-activity relationship (SAR) analysis are mature, and you might not get much return from a custom data platform.
Digital platforms are especially good when you have a data generation engine - a handful of novel workhorse bioprocesses or assays that are scalable and can be automated. Other good reasons for a heavy digital investment are if machine learning is a central pillar of your strategy, your data are complex, your data have economies of scale and increasing returns, and you have clear opportunities for lab automation. |
Strategy |
Strategic decisions should be made very early, because these are often hard to reverse. You probably already have a strategy for your product, but if you are building a digital biotech then it's important to consider the platform strategy. |
The inherent advantage of a digital biotech is the ability to practically eliminate the marginal cost of the manual tasks of scientific work. In other words, it becomes as cheap and easy to do something a million times as it is to do it once. Once you digitize a task, that task is available to support a company of 5 or 50,000. Similarly, the promise of artificial intelligence in our current era is not superhuman ability but the ability to scale even subhuman intelligence.
It is obvious, of course, that digitization implies scalability, but it might not be obvious that the most important aspect of your platform strategy should be how to leverage this potential to scale. Your digital strategy should clearly lay out how the platform lets fewer people gather and share vastly more data, which can in turn be applied to make all of your products better.
The first thing to consider is the details of your data engine. Questions to answer include:
* What is your data generation engine? These are your handful of workhorse assays or bioprocesses that you will build your platform around. |
* How can you scale your data generation engine? Do you have an advantage in data generation that others don't? Can you build that advantage by investing in automation, talent, or IP? |
* How will your platform get better with scale? Your company should become stronger with more data, more programs in your portfolio, and more partners - whereas traditional biotech companies lose efficiency at scale due to communication barriers and data silos. |
Even if you don't use computer simulations or artificial intelligence, the role of models should not be overlooked - at the very least you have a collection of mental models. Also, all data have an inherent model in their table, column, and row structure; otherwise they're just numbers. Questions to answer regarding models include:
* What are the key aspects of your data model? These are the main fields and metadata from your data engine. How flexible and extensible will your data model be? Who will be allowed to view, modify or add to it? How easy should it be to navigate and query? What data are structured, semi-structured, or unstructured? |
* How are you building, communicating, and applying mental models? Do you have a space for sharing narratives and analyses? Do you pre-register hypotheses? For which aspects of your data engine do you want to avoid preconceptions and gather unbiased data?
* Do you have a clear machine learning strategy? Are there existing model architectures that can be trained on the data coming from your data engine? How are the data labeled and what is being predicted by the model? Who has access to the model predictions? Is ML "in the loop" of data generation? Will there be a shared codebase for models? |
Digital infrastructure |
Until about 5 years ago, most biotechs were still deciding whether to adopt cloud computing for their IT infrastructures. Now few biotech startups need to be convinced to use the cloud. However, the cloud vs "on-prem" debate is just one instance of a constant "build or buy" decision when it comes to digital services. |
A useful strategic framework for these decisions is the technology "stack": deciding at what level of the stack your company can differentiate by building or customizing. For example, at the base of the stack might be networking, which can usually follow well-defined best practices. Similarly, if you consider the servers and databases as the next level in the stack, managed services and templates from the cloud provider will be fine for most use cases.
At some level of the stack, it becomes less clear that standard tools and services will meet your needs, and you will need to make decisions about whether to use mature solutions, to buy new tools or combinations of tools through software vendors, or to hire programmers to build your own. |
I have a separate blog post that gets into the details of how to decide on specific IT tools, so this section instead outlines the high-level strategy for the data stack and how it relates to other facets of the business.
LIMS |
The Laboratory Information Management System, or LIMS, refers to the highly configurable database and data access layer customized to a company's unique data model. The LIMS provides APIs and graphical interfaces that abstract and hide typical database operations (create, read, update, delete) from the user, though we will consider the graphical "apps" separately.
A "build" decision for the LIMS will propagate to all services up the stack, since almost all other software components will need to interact with it. If you roll your own LIMS, you'll also need to build the data layer and applications, as well as think about the many data access and governance issues that are handled under the hood of commercial LIMS products. Building a home-grown LIMS is therefore the key decision point for whether you are a "digital biotech": it is a major commitment to long-term software investment but probably the key differentiator.
The benefit of building your LIMS is that you'll have much more flexibility to customize your platform and provide a seamless, well-integrated experience across your tools and services. You will be forced to hire true, full-stack software engineers and engineering managers, which will open further capabilities for incorporating custom software throughout the business. You will focus on APIs and data models that unlock the potential to scale. Your company will inevitably be digitally native. |
ELN |
The electronic lab notebook, or ELN, is often lumped together with the LIMS in vendor-provided solutions, but they can be considered separate tools. In fact, while a digital biotech generally builds its own LIMS, it is probably unnecessary to build an ELN. While the leading vendor ELN, Benchling, is creeping into the LIMS space itself, it also provides APIs that allow the notebook to be integrated with any modern LIMS.
While ELNs are currently a necessary tool for industry research and can be used independently from other systems, the concepts of ELN and LIMS are beginning to merge. At the highest level of abstraction, they are both databases: a LIMS captures structured metadata and an ELN captures unstructured notes for experiments. However, nothing prevents notebook entries from being stored in a LIMS as a long text or blob field in the record of an experiment. The ELN interface provides a user-friendly front end, but the two will probably converge into the same system eventually.
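As a minimal sketch of that idea, using SQLite for illustration (the table and column names are assumptions), a single experiments table can hold both LIMS-style structured metadata and an ELN-style free-text entry:

```python
# Sketch: notebook text stored as a field of a LIMS experiments record.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE experiments (
        experiment_id  TEXT PRIMARY KEY,
        protocol_name  TEXT,   -- structured metadata (LIMS-style)
        performed_on   TEXT,   -- ISO date
        notebook_entry TEXT    -- unstructured notes (ELN-style)
    )
""")
conn.execute(
    "INSERT INTO experiments VALUES (?, ?, ?, ?)",
    ("EXP-0001", "transfection_v2", "2023-05-01",
     "Cells ~80% confluent; swapped media lot due to backorder."),
)
# The same record serves both structured queries and free-text review.
print(conn.execute(
    "SELECT protocol_name, notebook_entry FROM experiments").fetchone())
```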
Furthermore, the ELN may change drastically if trends continue toward more flexible automation and encoding lab protocols as code. Much of the functionality of the ELN revolves around documenting and following protocols; once a protocol is automated, none of those features are necessary, and the protocol is better stored as a typical codebase in a version control system such as git. Manual notes from the experiment can be tracked alongside sensor and instrument readouts during execution of the experiment. As software "eats the lab", the ELN may be replaced by software and structured data.
Data model |
The data model is extremely important - not simply to manage the data you put into the database, but also because it is central to representing and communicating the shared mental model of your scientific strategy. The data model will be continually and iteratively updated, even before you have much data to manage.
The tables and columns of your database describe what entities and attributes of your platform need to be tracked, how parts of your scientific platform relate to each other, and ultimately which hypotheses you expect to be testing and learning from in the future. For example, if you think cell line passage number might correlate to expression of a key protein, and this readout is crucial to your product, then these should be columns in the database that can be easily joined into a single table for plotting. |
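For that passage-number example, a well-designed schema makes the hypothesis one join away from a plot. A minimal sketch with pandas, where the tables and column names are illustrative assumptions:

```python
# Sketch: if passage number and protein expression live in well-modeled
# tables, the hypothesis is one join away from a plot-ready table.
import pandas as pd

# Illustrative tables; in practice these would be pulled from the LIMS.
cultures = pd.DataFrame({
    "culture_id": ["C1", "C2", "C3"],
    "cell_line": ["lineA", "lineA", "lineA"],
    "passage_number": [4, 12, 25],
})
expression = pd.DataFrame({
    "culture_id": ["C1", "C2", "C3"],
    "protein": ["TARGET1"] * 3,
    "expression_level": [980.0, 750.0, 310.0],
})

# One join yields the exact axes of the plot you care about.
plot_df = cultures.merge(expression, on="culture_id")
print(plot_df[["passage_number", "expression_level"]])
# plot_df.plot.scatter(x="passage_number", y="expression_level")
```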
Whether you build or buy your LIMS, in the early days you'll want to subscribe to a no-code tool such as Airtable to prototype your data model with your scientists during the months while your LIMS is being built and configured. No-code tools will then allow the scientists to make edits to the data model themselves as the data begin to accumulate.
Regardless of the flexibility of your LIMS, however, the data model will become more difficult to modify in the future as schema changes require more shuffling and massaging of the data. |
Services |
Many tech companies have adopted a service-oriented architecture (SOA). Simply stated, this is the idea that a complicated software platform can be divided into independent codebases, each with its own database, middleware, and programmatic interface or API. Other parts of the platform can freely access the "service", typically through an automated request over the network. This contrasts with a "monolith" design, where the platform lives in one codebase and different components are imported as libraries. The benefits of SOA are the ability to rapidly develop a codebase independently of a larger, slow-moving monolith, and to provide a stable menu of compute and data functions that can be composed into more complex applications.
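Here is a minimal sketch of what one such service might look like, using FastAPI for illustration; the "sample inventory" entity, endpoints, and fields are assumptions, and a real service would have its own database rather than an in-memory dict.

```python
# Sketch: a single "sample inventory" service with a small, stable API.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="sample-inventory")

class Sample(BaseModel):
    sample_id: str
    cell_line: str
    passage_number: int

_db: dict[str, Sample] = {}  # stand-in for the service's own database

@app.post("/samples")
def register_sample(sample: Sample) -> Sample:
    # Other parts of the platform call this over the network; they never
    # touch this service's storage directly.
    _db[sample.sample_id] = sample
    return sample

@app.get("/samples/{sample_id}")
def get_sample(sample_id: str) -> Sample:
    return _db[sample_id]
```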
Designing the service interfaces is a crucial process for how the company will operate and scale, and depends on what data and compute functionality need to be exposed to scientists to most conveniently do their job without being overwhelmed by complexity. Careful thought to the organizational structure will also help guide the design of service boundaries and APIs. |
In a scientific startup, you may want to maintain a single, centralized data model that captures the scientific knowledge of the company. This goal can be at odds with services, which can abstract and hide the data model from the user. A balance must be maintained between encapsulating unnecessary details of the data model and making sure that the relevant data are easily accessible. Data warehouses, discussed later, also address this problem.
Data ingestion |
Digitizing and structuring biological data and associated metadata takes tremendous effort. This is partly because most instrument vendors export their data in bespoke CSV or Excel formats designed for human readability, with little thought given to making their readouts interoperable and machine-readable. Also, scientists are used to managing their data in spreadsheets, so automating their workflows requires both customized tools and a cultural shift.
First, choose one or two workhorse assays and invest heavily in automating their data. Prototype a system to access data directly from the instrument, capture or link to relevant metadata, format the results into a structured table, and insert records into a central database, all with minimal user intervention. This may only be a throwaway prototype, but it will help you tease out inevitable issues in networking, instrument interfaces, and your data model, and it will be an early introduction of digital culture to your lab.
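A first prototype can be very small. The following sketch assumes the instrument exports a CSV with "well" and "value" columns (a made-up format for illustration) and inserts structured records into a SQLite database standing in for the central database:

```python
# Sketch: minimal ingestion of a plate-reader CSV into a central database.
import csv
import sqlite3

def ingest_plate_reader_csv(path: str, run_id: str,
                            conn: sqlite3.Connection) -> None:
    """Parse an (assumed) CSV with 'well' and 'value' columns into rows."""
    with open(path, newline="") as fh:
        rows = [(run_id, r["well"], float(r["value"]))
                for r in csv.DictReader(fh)]
    conn.executemany(
        "INSERT INTO readings (run_id, well, value) VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (run_id TEXT, well TEXT, value REAL)")
# ingest_plate_reader_csv("export_2023-05-01.csv", run_id="RUN-042", conn=conn)
```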
Simultaneously, begin developing generalizable tools for data ingestion, hardening common patterns of data ingestion into apps and services that can be re-used across assays and processes. This data ingestion layer will be a valuable component of your platform that will quickly be worth the investment. One example of this is a platemap designer: if your scientists use Excel to annotate blocks of wells with metadata, a web tool to help design platemaps and convert them into structured tables can be invaluable. Even better, build a tool to capture the scientists' intentions and convert them into randomized platemaps for liquid handling automation. |
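As a sketch of the platemap idea, the following converts a wide, human-oriented 96-well layout (rows A-H, columns 1-12) into a tidy long-format table that can be joined against instrument readouts; the layout and annotations are illustrative assumptions:

```python
# Sketch: melt a wide, human-oriented 96-well platemap into a tidy table.
import pandas as pd

# Wide layout as a scientist might annotate it: drugA in columns 1-6 and
# drugB in columns 7-12 across all eight rows (illustrative).
platemap = pd.DataFrame(
    {col: ["drugA" if col <= 6 else "drugB"] * 8 for col in range(1, 13)},
    index=list("ABCDEFGH"),
)

tidy = (platemap
        .rename_axis(index="plate_row", columns="plate_col")
        .stack()
        .rename("treatment")
        .reset_index())
tidy["well"] = tidy["plate_row"] + tidy["plate_col"].astype(str)
print(tidy.head())  # columns: plate_row, plate_col, treatment, well
```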
Data lakes |
Your central database should be reserved for "small" and "medium" sized, easily structured data, summarized in a way that is human-interpretable. Large, low-level, unstructured data such as images, next-generation sequencing, flow cytometry, mass spectrometry, and other datatypes are best stored in a data lake.
The simplest form of a data lake is a shared drive or S3 bucket that has an associated table in your database containing filepaths, metadata, and summarized values for each file. For example, you may have a table in your database for flow cytometry experiments that captures the metadata. In the data lake pattern, this table would also contain filepaths to the raw FCS files, so that the data can easily be found and pulled into dedicated analysis pipelines. |
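A minimal sketch of that pattern, with illustrative table, field, and bucket names: each raw file gets a database row holding its lake path plus a few summarized, human-interpretable values:

```python
# Sketch: index a raw file in the lake with metadata and summary values.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE flow_runs (
        run_id     TEXT PRIMARY KEY,
        sample_id  TEXT,
        pct_viable REAL,   -- summarized, human-interpretable value
        fcs_path   TEXT    -- pointer into the lake (e.g. an S3 URI)
    )
""")
conn.execute(
    "INSERT INTO flow_runs VALUES (?, ?, ?, ?)",
    ("FLOW-017", "S-0042", 92.5,
     "s3://example-data-lake/flow/2023/05/FLOW-017.fcs"),
)
# Analysts query the database first, then pull the raw FCS file only
# when a dedicated analysis pipeline needs it.
```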
Data warehouses |
Your LIMS is essential for tracking the entities of your lab, but this transactional paradigm for databases is not always the best fit for analytics use cases such as data visualization and machine learning. The crux of the problem is that the best data model for transactions is highly "normalized." Normalized databases have complex schemas with many tables that need to be joined in order to connect data from related entities. |
In contrast, analysis and ML are best suited for a small number of denormalized tables with simple schemas. Duplication and database anomalies are not as much of a problem, but speed of data aggregation operations is crucial. For this reason you will probably want a separate data warehouse, with columnar storage to speed up common analytics operations. This is a separate schema, often a separate database, which is loaded periodically by automated ETL pipelines. |
In the tech world, the utility of a data warehouse is to aggregate data from many databases across the company. But even in the digital biotech with a single, centralized database, the separation of concerns provided by the data warehouse has advantages. It allows you to design the data model for analytics, rather than transactional, use cases. |
In designing your data warehouse schema, think about the simplest scatterplots or barplots that non-programmers across the company might want to see, then make sure your schema has tables with the axes of those plots as columns. Eventually you may want a more flexible, slightly normalized data model, with a query builder to help non-technical users perform joins.
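A toy sketch of that ETL step, using SQLite and made-up tables for illustration (a real warehouse would typically be a separate columnar database loaded by scheduled pipelines): several normalized LIMS tables are joined into one denormalized, plot-ready table:

```python
# Sketch: a toy ETL step that denormalizes LIMS tables for analytics.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    -- Normalized, transactional tables (LIMS-style).
    CREATE TABLE samples (sample_id TEXT PRIMARY KEY, cell_line TEXT);
    CREATE TABLE results (sample_id TEXT, assay TEXT, value REAL);
    INSERT INTO samples VALUES ('S1', 'lineA'), ('S2', 'lineB');
    INSERT INTO results VALUES ('S1', 'titer', 1.2), ('S2', 'titer', 3.4);

    -- Denormalized warehouse table: the axes of the simple plots people
    -- want (e.g. cell_line vs value) are now plain columns.
    CREATE TABLE warehouse_readouts AS
        SELECT s.sample_id, s.cell_line, r.assay, r.value
        FROM samples s JOIN results r USING (sample_id);
""")
print(db.execute("SELECT * FROM warehouse_readouts").fetchall())
```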
Org structure |
Too often, the organizational structure emerges as some accident of history - who was hired when, and with what title - or by following the traditional "functional matrix" design of big pharma. While any structure is better than no structure, the org chart should be intentionally designed to match your culture, business model, and eventual product. |
One of Amazon's strengths is a strong set of principles around org design. One important concept is that of the "single-threaded leader" (STL): a leader who owns a single initiative and nothing else. In practice, the STL concept often results in small, nimble teams.
The ideal org structure for a digital biotech has probably not been invented, and may not exist. Instead, the best structure depends entirely on the specifics of your company and its current state. Therefore, the optimal org design is one that evolves constantly to match your needs. Unfortunately, org design tends to be static, so you should instill culture and mechanisms to combat this inertia and promote periodic reorganization. In the early days it might make sense to shuffle groups, team leaders, and goals as often as quarterly, but even a mature company should probably undergo a reorg at least every few years to stay competitive.
Hiring |