It required another blog post. I won't over-blog it, as it is open access and, according to my parents (my metric for how overly complicated I've made things), far clearer than some of my other papers. This one is about the importance of validating finite element analyses (see FEA for "dummies"), but it will also touch on the joys of trying to publish negative results (i.e. when experiments don't match computer models).

A quick background for those who don't want to read the previous post: finite element analysis (FEA) is a method for analysing how complex structures deform under loads by simplifying them into a series of finite interconnected units (be they bricks, tetrahedra or even triangles: the elements), each given material properties appropriate for the structure (e.g. if it is a steel beam, the elements are given the structural properties of steel). The method is known to work incredibly well on man-made objects, and it is indeed the engineering tool used for everything from designing cars (and crashing them virtually) and planes to bridges and buildings. For basically anything an engineer might build, there is probably a finite element model out there somewhere.

You may see where I am going with this, then: the method works with varying degrees of success on biological structures when it comes to replicating strain magnitudes and orientations. Most recent work on mammals (monkeys, pigs) and reptiles (particularly alligators) manages to replicate strain patterns across the models very closely, but to date few studies have looked at birds. Birds are important because they have very mobile skulls (with loads of extra little joints compared to most mammal and reptile skulls) and, in a palaeontological context, because they are the nearest living relatives of dinosaurs (being descended from them). Many studies have looked at how dinosaur skulls perform under feeding loads, but what does that really mean if we don't know how accurate the models are even on their living relatives?
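To make that recipe concrete, here is a minimal sketch of the finite element idea in one dimension, assuming the simplest possible structure: a steel bar fixed at one end and pulled at the other, split into two-node elements. All the numbers are illustrative and nothing here comes from the paper; real skull models use 3D elements and far messier geometry and material properties.

```python
import numpy as np

# Illustrative values only (not from the paper): a 1 m steel bar,
# fixed at the left end and pulled axially at the right end.
E = 200e9    # Young's modulus of steel (Pa)
A = 1e-4     # cross-sectional area (m^2)
L = 1.0      # bar length (m)
F = 1e4      # axial load at the free end (N)
n_elem = 10  # number of finite elements

le = L / n_elem   # length of each element
k = E * A / le    # axial stiffness of one two-node element

# Assemble the global stiffness matrix from identical element matrices.
K = np.zeros((n_elem + 1, n_elem + 1))
for e in range(n_elem):
    K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])

# Load vector: a point load at the last node.
f = np.zeros(n_elem + 1)
f[-1] = F

# Node 0 is fixed, so solve K u = f on the remaining degrees of freedom.
u = np.zeros(n_elem + 1)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

# Strain in each element is its relative change in length;
# analytically this is F / (E * A) = 5e-4 everywhere along the bar.
strain = np.diff(u) / le
print(strain)
```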
So, building on the previous limited work on ostrich mandibles (Rayfield 2011) and finch beaks (Soons et al. 2012a,b,c), and in preparation for trying to understand ornithomimosaur (ostrich-mimic dinosaur) skull function, I started work on validating an ostrich cranium (n.b. the skull is the cranium plus the jaws). We had some frozen ostrich skulls from an ostrich farm in the UK, and I used several over the course of the project: one for a practice dissection, one for a practice experiment, one for the actual experiment/validation, and one more for material property testing. The one used for the validation was sent frozen to the Hull York Medical School for CT scanning prior to any work, so we had a full digital copy and could use it to make the computer models.
As you can see from the two images showing the ostrich models, losing gauge 6 is a shame, as it sits in one of the high-strain regions. This becomes important when considering strain magnitudes (effectively the change in shape, i.e. deformation), which don't particularly match:

From Cuff 2014. Ostrich cortical bone and muscle models showing strain patterns.
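For the validation itself, the comparison boils down to something like the sketch below: the FE-predicted strain at each gauge site set against the experimentally measured one, where a well-matched model gives per-gauge errors near zero and a predicted-vs-measured slope near 1. The gauge values are invented for illustration; they are not the paper's data.

```python
import numpy as np

# Hypothetical gauge readings (microstrain) -- invented for illustration,
# not the values reported in the paper.
measured  = np.array([120.0, 340.0,  95.0, 210.0, 480.0])  # strain gauges
predicted = np.array([180.0, 250.0, 150.0, 160.0, 700.0])  # FE model

# Per-gauge percentage error in strain magnitude.
pct_error = 100.0 * (predicted - measured) / measured

# Least-squares fit of predicted against measured: a well-validated model
# should give a slope close to 1 and an intercept close to 0.
slope, intercept = np.polyfit(measured, predicted, 1)

print(pct_error)          # roughly +50%, -26%, +58%, -24%, +46%
print(slope, intercept)
```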
These results are particularly interesting because similar methods have worked on mammals and alligators, producing models that closely match the experiments. Why the results are so far off in our models is unknown, and something that needs further investigation. It may come down to how we modelled the materials of the cranium, to the joints in the skull (which are far more difficult to model than the approach we took), to our modelled tendons being worse than in previous attempts, or to a myriad of other factors that I've not discussed here or in the paper. However, the data in the paper are all interesting, and this is the first attempted validation of a full bird cranium. As a spin-off issue, the paper showed me how difficult it is to publish negative results. Negative results are where a study shows no match between models and experiments or, in the case of medical science, where a medicine is no better than a placebo. These results are really poorly represented in publishing as they don't make sexy stories, which leads to the potential for experiments that don't work being repeated again and again through time:

From: http://theupturnedmicroscope.com/comic/negative-data/
My paper went through a round of major corrections at one of the "traditional" journals before being rejected when we added more data showing that the model doesn't match. We then sent it to PeerJ (a new open access journal more welcoming to all result types), which put it through a round of major revisions before accepting it. Most of the biggest problems stemmed from reviewers believing our results were wrong through some fault in the methodology and telling us to do more experiments (I accept that some of the corrections were things we needed to clarify, tidy, or explain further). This is problematic because 1) the specimen quickly dries out during testing, so it would mean completely redoing the entire thing, which took me almost a year, and 2) it perpetuates the trend of not publishing negative results. If the method doesn't work, why shouldn't we tell people it doesn't work, so they don't try it again, or so they can come up with modifications that might improve it? I believe that if our results had been very close, with no issues, the paper would have been published rapidly in the "traditional" journal and not taken 2.5 years. It is something I would love to test, but the ethics of sending out papers for review with the same methods but differing results is a bit dubious and would require some thought. If anyone has any ideas, or a willingness to get involved in this, please let me know.
References
Cuff AR. 2014. Functional mechanics of ornithomimosaurs. PhD thesis, University of Bristol.
Rayfield EJ. 2011. Strain in the ostrich mandible during simulated pecking and validation of specimen-specific finite element models. Journal of Anatomy 218:47-58.
Soons J, Herrel A, Aerts P, Dirckx J. 2012a. Determination and validation of the elastic moduli of small and complex biological samples: bone and keratin in bird beaks. Journal of the Royal Society Interface 9:1381-1388.
Soons J, Herrel A, Genbrugge A, Adriaens D, Aerts P, Dirckx J. 2012b. Multi-layered bird beaks: a finite-element approach towards the role of keratin in stress dissipation. Journal of the Royal Society Interface 9:1787-1796.
Soons J, Lava P, Debruyne D, Dirckx J. 2012c. Full-field optical deformation measurement in biomechanics: digital speckle pattern interferometry and 3D digital image correlation applied to bird beaks. Journal of the Mechanical Behavior of Biomedical Materials 14:186-191.