
Brittany Wenger knows how to diagnose cancer online. Not bad for a gi-irl.

But girls still suck at science, right?

The grand prize in this year’s Google Science Fair went to Brittany Wenger, 17, who wrote a “global neural network cloud service” app–basically, a cloud-based brain with spectacular pattern-recognition capabilities that “learns” as more data is provided–to help doctors diagnose breast cancer. Using data from fine needle aspirates–traditionally one of the least invasive but least precise diagnostic procedures–Wenger’s Cloud4Cancer correctly identifies 99 percent of malignant tumors.
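For readers curious what a “neural network that learns from data” looks like in practice, here’s a minimal sketch using scikit-learn’s bundled Wisconsin breast cancer dataset, which, fittingly, is also computed from fine needle aspirates. This is illustrative only, under assumed settings, and not Wenger’s Cloud4Cancer code:

```python
# Illustrative sketch only -- not Wenger's Cloud4Cancer. scikit-learn's
# bundled Wisconsin dataset is likewise computed from fine needle aspirates.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import recall_score

X, y = load_breast_cancer(return_X_y=True)   # class 0 = malignant, 1 = benign
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# A small feed-forward network; inputs are scaled so training converges.
net = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=0))
net.fit(X_tr, y_tr)

# "Sensitivity to malignancy": of the truly malignant samples, how many
# did the network catch? (pos_label=0 because malignant is coded as 0.)
print("sensitivity:", recall_score(y_te, net.predict(X_te), pos_label=0))
```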

I once did a science fair project on which mouthwash was most effective at killing germs over time. I found that Listerine was pretty much an atomic bomb for your mouth, and after that you might as well use saline. I think. It was a while ago.

Over 6,800 trials of 681 test samples, the custom neural network achieved predictive success of 97.4% with 99.1% sensitivity to malignancy — substantially better than the evaluated commercial products. Out of the commercial products, two experienced consistent success while the third experienced erratic success. The sensitivity to malignancy for the custom network was 5% higher than the best commercial network’s sensitivity. This experiment demonstrates that modern neural networks can handle outliers and work with unmodified datasets to identify patterns. In addition, when all data is used for training, the custom network achieves 100% success with only 4 inconclusive samples, proving the network is more effective with more samples. Additionally, 7.6 million trials were run using different training sample sizes to demonstrate that sensitivity and predictive success improve as the network receives more training samples.

English translation? In early testing, Wenger’s app beat out three commercial apps currently in use and appears to get even more accurate as sample sizes increase. Wenger says she thinks her app “might be hospital ready” and would “love to get different data from doctors.” So… doctors, you’re up.
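That “gets more accurate with more data” claim is the classic learning-curve effect, and it’s easy to sketch with the same stand-in scikit-learn dataset (again, an illustration under assumed settings, not her 7.6-million-trial experiment):

```python
# Sketch of the "more training samples -> better" claim on the stand-in
# dataset. learning_curve refits the model on growing training slices.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import learning_curve
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import make_scorer, recall_score

X, y = load_breast_cancer(return_X_y=True)            # class 0 = malignant
sensitivity = make_scorer(recall_score, pos_label=0)  # sensitivity to malignancy

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=0))

sizes, _, test_scores = learning_curve(
    model, X, y, scoring=sensitivity,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

for n, s in zip(sizes, test_scores.mean(axis=1)):
    print(f"{n:4d} training samples -> mean sensitivity {s:.3f}")
```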


12 thoughts on “Brittany Wenger knows how to diagnose cancer online. Not bad for a gi-irl.”

  1. Very, very cool and good for her. Does it say anywhere what her rate of false positives was? I didn’t quite understand that aspect of the reported test data.

  2. This is red hot: teenage app developer pwns cytotechs. LMAO. Brittany, science Olympics should exist, and don’t. Cheering 4U.

  3. Jadey: if you go to the google site, there’s a slideshow. Slide #17 gives the following breakdown:

                            Actual Outcome
                         Positive   Negative
    Tested Positive           222         15
    Tested Inconclusive        14         11
    Tested Negative             2        417

    Which is apparently a 93.67% positive predictive rate and 99.52% negative predictive rate. (Presumably that means 6.33% false positives and .48% false negatives.) Which makes sense – the slide show stresses that the model was “weighted towards malignancy” – that is, to err on the side of saying it’s malignant when it wasn’t.
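    For anyone who wants to check the arithmetic, those rates fall straight out of the table. A quick sketch in Python (the counts are from slide 17; the code itself is just mine):

    ```python
    # Rates from the slide-17 confusion matrix, with inconclusive results
    # set aside (which is how the slideshow appears to compute them).
    tp, fp = 222, 15    # tested positive: truly malignant / truly benign
    fn, tn = 2, 417     # tested negative: truly malignant / truly benign

    print(f"PPV         {tp / (tp + fp):.2%}")   # 93.67%
    print(f"NPV         {tn / (tn + fn):.2%}")   # 99.52%
    print(f"sensitivity {tp / (tp + fn):.2%}")   # 99.11%
    print(f"specificity {tn / (tn + fp):.2%}")   # 96.53%
    ```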

  4. Yeah, I would expect a serious false positive rate. At the same time, as long as it is pitched as a first pass to be confirmed, that’s still exceedingly useful. It’s great stuff.

  5. @ Shauna

    Ah, thanks – I didn’t realize that the slideshow contained so much more detailed information. The specificity rate (96.53%) was exactly what I was looking for, and it’s just as impressive as the sensitivity rate! I actually wish more sources were reporting both – a sensitive but non-specific test is about as useless as an insensitive one.

    Yes, it makes more sense to gear a testing model like this toward reducing false negatives rather than false positives, given the relative consequences of failing to get treatment for a malignant tumour versus getting unnecessary treatment for a benign one, but I’m very happy to see that she achieved such excellent results on both scores!
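    The “weighted towards malignancy” idea is easy to picture in code: lower the probability cutoff for calling a sample malignant, trading extra false positives for fewer false negatives. A toy sketch on scikit-learn’s stand-in breast cancer dataset (not her model or her data):

    ```python
    # Toy illustration of weighting a classifier towards malignancy:
    # lower the decision threshold instead of using the default 0.5.
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)        # class 0 = malignant
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, random_state=0, stratify=y)

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    clf.fit(X_tr, y_tr)

    p_malignant = clf.predict_proba(X_te)[:, 0]       # column 0 = class 0
    flagged = p_malignant > 0.2                       # well below the usual 0.5

    false_negatives = ((y_te == 0) & ~flagged).sum()
    false_positives = ((y_te == 1) & flagged).sum()
    print(f"false negatives: {false_negatives}, false positives: {false_positives}")
    ```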

  6. LC –
    With pattern-recognition software like this, you could conceivably train it on multiple populations, using only data from the groups relevant to a given patient. Definitely very dependent on being given enough data from each group, though.
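    In code terms, that could be as simple as keeping one fitted copy of the model per population and routing each patient to the matching one. A hypothetical sketch (every name here is made up for illustration, not part of any real system):

    ```python
    # Hypothetical per-population training: fit one copy of the same
    # base model on each group's data, then route patients by group.
    from sklearn.base import clone

    def train_per_population(base_model, datasets):
        """datasets maps population name -> (X, y); returns fitted models."""
        return {pop: clone(base_model).fit(X, y)
                for pop, (X, y) in datasets.items()}

    def predict_for_patient(models, population, features):
        # Use the model trained on the patient's own group.
        return models[population].predict([features])[0]
    ```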

  7. I’d love to hear about the background here. Who gave her the educational tools? I’m not diminishing what she’s done, by any means, but it’s clear she’s been getting incredible support somewhere – she obviously didn’t pull the data out of her ass – and I’d love to hear that story, too.

  8. pkle – absolutely. That’s part of what makes this so interesting: it should be fairly tunable. From what has been described, though, it strikes me that it will swing toward being a better screen for ruling out negatives no matter how it’s tuned.

    And I’m with samanthab, I’d like to see where she got all the support from.

  9. It looks like she goes to a fairly high-end prep school, so she’s probably had the benefit of a lot of support and attention. Which is not to in any way diminish her accomplishments, but rather to note how much can be accomplished with the benefit of support and attention.

  10. Which is not to in any way diminish her accomplishments, but rather to note how much can be accomplished with the benefit of support and attention.

    Quoted for truth, (and I suspect what both samanthab and I were getting at).

  11. Not to rain on anyone’s parade, but the usefulness of this software is pretty dubious. I’m a pathologist who looks at fine needle aspirates…the program that she wrote requires that the slides be evaluated by a person and then the criteria be entered into the program by hand. The criteria are well-known features of malignancy (nothing new here). Now, if it was based on image analysis with slide scanning by a computer, it would be useful. This is NOT hospital ready.

    Not that I could write a program like that now, or when I was 17…
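    To make that concrete, the input side of such a program looks something like this – a person reads the slide and types in scores by hand, and only then does a classifier run. (Feature names borrowed from the public Wisconsin FNA dataset; the snippet is illustrative, not her code.)

    ```python
    # Illustrative only: hand-entered cytology scores, as the comment above
    # describes. Feature names follow the public Wisconsin FNA dataset.
    sample = {
        "clump_thickness": 5,            # each scored 1-10 by a human reader
        "uniformity_of_cell_size": 4,
        "uniformity_of_cell_shape": 4,
        "marginal_adhesion": 5,
        "single_epithelial_cell_size": 7,
        "bare_nuclei": 10,
        "bland_chromatin": 3,
        "normal_nucleoli": 2,
        "mitoses": 1,
    }
    features = [sample[k] for k in sample]   # ordered as entered (Python 3.7+)
    # prediction = net.predict([features])   # "net" as in the earlier sketch
    ```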

Comments are currently closed.