I just finished attending a workshop called WINGS (Workshop in Next Generation Sequencing) at UNC-Charlotte. The first day consisted mostly of fantastic research talks, including one that was as much about career and life as about science.
Colleen Doherty is a relatively new PI at NC State, and she went through 10 things to keep in mind when designing an experiment or embarking on a career. One of the first things she brought up was having a good question. It’s something you hear from a lot of scientists, especially when you’re considering a big, expensive next-gen sequencing experiment. Garbage in, garbage out is basically what it comes down to.
Her broadest question is ‘How do plants make decisions?’, which is certainly a good question and one that basically encompasses all of developmental biology: how does a cell determine what it’s supposed to do based on both internal and external cues, even ones that might be conflicting? Of course, broad questions are important in science, but specific, testable ones are arguably more so, since those are the basis of experimentation.
Something scientists talk about is that a lot of our experiments, even well-designed ones (everything works on paper), don’t actually work as expected, or outright fail. Working out a constellation of experiments that leads to funding takes time. She talked about how incremental learning is not a bad thing, especially in computer science, where the worst that can happen is that the code doesn’t work; if it doesn’t, go back and fix it. It takes time, but the financial cost of the computing power isn’t usually prohibitive (assuming you’re using someone else’s dataset).
Experiments seem to be getting bigger and more expensive, particularly in biology, and justifying them, especially to the taxpayers who fund the research, is all the more important. The size and expense of a basic research experiment now almost demands that it pay off in some translation, and not just as a spin-off (like the World Wide Web out of CERN) but as a near-immediate payoff from the results of the experiments themselves. At the workshop, all of the talks featured big experiments where people were quite clever about their questions and designs, and the data were incredible. However, as in any talk, people leave out the parts that don’t work, the harder and messier parts of the story that always exist.
It can be hard to figure out just what questions are worth asking.
Many of the breakthroughs in science come after years of blind alleys, experimental failures, near misses, and figuring out just what question is actually worth asking. When starting work in a system, there’s at least one question that seems to make sense, but it often morphs into something else altogether by the end of the experimental process. The results in a scientific paper are often not in the order in which they were actually obtained.
There’s a synthesis that happens, which is why writing practice is so important.
And that’s part of why science is so great. We can account for everything we know, but not for the things we don’t, and there are always things we don’t know and things we can’t control for due to cost, a lack of technology or methods, or even just plain human obliviousness/stupidity (scientists have all the cognitive biases other humans have).
PIs are a select group of people because they have collectively reached that ‘good question’ point. Of course, they have to keep asking good questions, and I think that’s where the benefit of what I call ‘the PI network’ becomes a continued success engine. It’s a concentration of knowledge among people who constantly review each other’s grants, talk to one another at conferences, and actually vote for professional society leadership positions (as a postdoc, I don’t vote for ASPB officers; I simply don’t feel I have a real voice in that selection). One of the last points Dr. Doherty made was that
‘Your ideas are as good as anyone else’s’.
Not all of us will be so lucky as to hit upon a great question, even though nearly everyone has moments of brilliance (keep putting ideas out into the arena!). In an era of tight budgets and bigger experiments, learning by trial and error is much less of an option; small-scale preliminary results are important, but there’s still a steep learning curve and the knowledge that it may not work. With a premium being placed on speed rather than care in getting results published (not to mention the bias toward only ever publishing positive results), I fear that asking good questions, learning incrementally, being OK with taking time to think, and just letting ideas breathe are going by the wayside. Big data are a fantastic opportunity and resource for scientists, but generating new data is expensive; even if sequencing is cheaper per base pair than ever, replicates, sample processing, and different treatments and controls add up quickly.
I’m glad I attended WINGS 2014. At the very least, it’s gotten me thinking more about the technologies and analyses of larger datasets, and just how careful and clever you have to be to ask good questions and generate interesting results. These new technologies are like the introduction of computers to biology, and learning to use new technologies is, of course, always important in science. I hope wet lab/bench science remains relevant, though I often joke that, with the growth of computational biology, everything anyone does 20 years from now will be in silico.