Wednesday, April 30, 2014

"...every one of them is slowly going mad."

I found this blog entry about how much programming sucks through Lifehacker today. If you were to ask me what the top problem with programming is, I'd say something to the effect of "programs are written for computers rather than for people" or "code is like a hammer you have to read" and so on. (You didn't ask me about programming but I'm going to tell you anyway.) I'd suspect that most of what makes code "bad" comes down to the inherent differences between computers and human beings. Humans tend to freak out without context while computers don't need reasons to get things done. Humans see value in hard work, so they'll hammer away at the same problem for weeks or years; a computer will do the same, but we don't see any value in its long-term labor unless there really isn't a faster way to do it.

So, I think I interpreted the blog post differently than Lifehacker did. Their idea was "The good news, if you're learning to code, is you don't have to worry (too much) if your code is bad." The impression I get is that you just can't write good code unless you're also doing good writing. That means you're simultaneously writing for an audience with perfect comprehension and no consciousness (computers, that is) and for human beings with plenty of consciousness but far from perfect comprehension. That doesn't sound fun at all. The solution most programmers use is to not even bother.

A few choice quotes from the post echo these sentiments:

Not a single living person knows how everything in your five-year-old MacBook actually works. Why do we tell you to turn it off and on again? Because we don't have the slightest clue what's wrong with it, and it's really easy to induce coma in computers and have their built-in team of automatic doctors try to figure it out for us. The only reason coders' computers work better than non-coders' computers is coders know computers are schizophrenic little children with auto-immune diseases and we don't beat them when they're bad.

The human brain isn't particularly good at basic logic and now there's a whole career in doing nothing but really, really complex logic. Vast chains of abstract conditions and requirements have to be picked through to discover things like missing commas. Doing this all day leaves you in a state of mild aphasia as you look at people's faces while they're speaking and you don't know they've finished because there's no semicolon. You immerse yourself in a world of total meaninglessness where all that matters is a little series of numbers went into a giant labyrinth of symbols and a different series of numbers or a picture of a kitten came out the other end.

It's a good post. It's very cathartic.

Monday, April 28, 2014

Today's paper is: 
Legrain, P. & Rain, J.-C. Twenty years of protein interaction studies for biological function deciphering. Journal of Proteomics (2014). doi:10.1016/j.jprot.2014.03.038.

I keep changing the reference format. This really shouldn't bother anyone but me.

The take-home message: There's a lot of protein interaction data out there! Contrary to popular belief, most of it isn't just false positives. Rather, most of this data reveals actual biological complexity. Proteins may just interact with more binding partners than we originally thought.

A few awkward points:
  • Both of the authors are experts in the field but are also employed by Hybrigenics, a company providing protein interaction screening services. They don't advertise the company's services specifically, so I suppose it isn't really a conflict of interest. Such things still concern me, though.
  • The title sounds odd. Couldn't it have been "Twenty years of biological function deciphering by protein interaction studies"? I think it's the word "deciphering", mostly.
  • The entire review sounds a bit strange, actually. I assume it's a result of English as a second language. It does make some conclusions hard to understand, e.g. "Almost 2000 different proteins were analyzed over six time points, covering four orders of magnitude in terms of protein abundance. In those papers, the aim of the purification process is no more the isolation of a protein complex but just a way to zoom in a specific part of the proteome", which sounds dismissive.
  • More specific references to recent interactome-dependent functional studies would have been nice. As they mention, there have been thousands, but it continues to be an active area of research. It looks like the authors just forgot to add some references in other sections, e.g., "...in different cellular contexts (for review on affinity-purification coupled to mass spectrometry, see)".
These guys made an early protein interactome of Helicobacter pylori so I can't complain too much.

Thursday, April 24, 2014

I never seem to remember the name of the film Mafioso. It's the one with the Italian guy who gets shipped to NYC by the mob. It comes up in conversation every so often but its title is just so generic that it's hard to recall.

Wednesday, April 23, 2014

Here's a neat little story about frozen frogs. The basic idea is this: wood frogs freeze solid over the winter, though that's not a problem since they can keep ice out of their cells (otherwise, all those ice crystals can really mess up those frog tissues). They do it by using glucose as a cryoprotectant and a specialized glycolipid as an antifreeze.

The paper in the Journal of Experimental Biology is here.

Tuesday, April 22, 2014

Beats!

Music for today - like many on Youtube, this guy does very high-quality song covers. The difference here is that he covers electronic-heavy songs that would be impossible for anyone to cover without a finely-trained ear, perfect timing, and just the right batch of equipment. I usually prefer covers to move in entirely new directions (the lady and I have been musing about what a female-fronted cover of December, 1963 would sound like*) but these are impressive on a more technical level. Here are a few.

A cover of Lovely Bloodflow by Baths. That whole album is wonderful.

A cover of Windowlicker by Aphex Twin. He doesn't quite capture the original's classically weird atmosphere, but that may be impossible.

A cover of Lost and Found by Amon Tobin. This is the live version, for reference.

Courtesy of this Metafilter post.

*A good cover. There are some on Youtube but this is the best one I could find, if that tells you anything.

Monday, April 21, 2014

Hello there! You may have noticed this blog transitioning to more of an electronic portfolio. This means that it will be less of a collection of musings and more of a collection of musings with a good bit of Context included. The rambling journal entries are still a primary focus. Now, however, you can learn about me in the "About" tab up there. I've included my personal mission and Curriculum Vitae. There's also a tab for Research: this page describes my current area of study. More material is on the way.

Thursday, April 17, 2014

On networks of the social variety

It's a strange time when I'm feeling better about Facebook and more uncertain about more specific social networks. In theory, the Book of Faces really is the internet's Walmart: it's a place to get all your social interaction needs as long as your needs aren't too specific. It's great for small talk but actually quite inefficient for any kind of extensive dialogue (to see what I mean, try picturing your next Facebook comment thread as a real-life conversation. It gets pretty difficult in those 10+ comment rigmaroles). The same goes for Twitter.* Even so, there's something to be said for small talk. It's better than no talk at all.

The constant level of background chatter is also preferable to noise with a perceived sense of urgency. In real-life terms, I'm talking about ringing phones, ambulance sirens, or other warning klaxons. These are notifications designed to get your attention; they signal immediacy and the need for rapid action. That phone isn't going to ring all day and that ambulance may need the road you're on. A bit of that urgency shows up in email and social networking messages. In each case, we see a tally indicating how many new requests for our attention there are today. Most of those cases aren't even as urgent as a ringing phone. We can ignore most of what happens on Facebook without any cost.

I've been trying out Researchgate over the past few weeks. It's one of the smaller, more specific social networks I alluded to above. It came strongly recommended by folks in my personal development class. Researchgate is essentially a social network specifically for scientists. Each user has a profile as with other sites, but the focus is less on their biographic details and more on their scientific accomplishments (read: publications). The site is well-designed and leverages a more natural approach to networking than sites like Linkedin do. Connection suggestions are frequent but appear to be weighted in favor of real-world contexts (i.e., colleagues in your department, people in your field, or authors you've cited) rather than a largely generic network structure.

The mildly irritating part about Researchgate is the question-and-answer portion. Much like Stackoverflow, it's a forum for technical questions sorted by method or scientific topic. Unlike Stackoverflow (or even Reddit, for that matter), the moderation is minimal. Many questions are hardly questions at all. I can excuse the lack of English proficiency in many cases but it renders many questions impossible to interpret. In other cases, the phrasing is nearly perfect but the question asker fails to provide anything but the most basic details. The worst part may be the sense of urgency: these questions give the impression that entire careers are on the line. A wrong answer, employed in earnest, may waste weeks or even months of valuable research time. I know it isn't my fault if people don't know how to ask helpful questions or how to take advice from internet strangers.** Social networks are enough of a potential waste of time as it is.


*I'd suggest that Twitter is the Starbucks of social interaction. It's ubiquitous, frequently noisy, and redolent of fashionable credibility, yet even the people who use it often don't seem to devote much mindspace to it. It's usually pleasant yet generally forgettable.

**This is the way.


Additional note to self: don't forget that Biostars exists. It's Stackoverflow for bioinformatics.

Tuesday, April 15, 2014

Today I found out about the grid.arrange function for R (it's in the gridExtra package). It's one of those things which would have been useful to know about years ago. I just found it on a Stackoverflow answer today. The function just takes plots and squishes them together into a single canvas. It's easy, too: just enter
library(gridExtra)
grid.arrange(p1, p2, nrow=2)
and you have everything in one place. The results look great with ggplot's faceting, too!
The extensive package library is absolutely one of the nicest things about R.

Monday, April 14, 2014

Despite the absolutely ideal weather this past weekend*, I spent a small chunk of time beginning to learn Ruby. The programming language itself seems fun and friendly; it's billed as a conceptual hybrid of Perl and Python with the object-oriented nature of something like Java. There are a few reasons why I'd like to learn the language:
  1. Learning new programming languages enhances understanding of the others. My Perl and Python can always be improved.
  2. I've seen a few bioinformatics-related job postings lately which specifically mention the desirability of Ruby experience. They're usually talking about Ruby on Rails and its use in web development. 
  3. It's either that or go back to learning Haskell. As far as learning a new language goes, most programmers seem to take Ruby seriously, at least.
I do have a few concerns.
  1. Is Ruby (or Rails) becoming obsolete? There's clearly room for a great diversity of frameworks in the scope of web application development. I'm also not qualified to answer such a question. Programming is just one of those areas where a newly-learned language or skill can quickly become a dusty relic. Every framework might as well be Esperanto in the long run: they all have their strengths, even if the only strength is facilitating communication between the right people.
  2. Will I ever really need to know Ruby (or Rails)? This question is even more difficult to answer, but I'll give it a try: "probably not." There will always be someone with better code than mine, and it may be in a different language or an entirely different framework (for all I know, scientific applications will most frequently run on a mobile OS a decade from now).
Learning for fun seems like the best plan of attack. I can do it at my own pace without worrying about how applicable it is to Doing Bioinformatics. I'll just have the background to know it when I see it. 


*Here's a shot of the grounds at the University of Virginia.
Brooks Hall.

Friday, April 11, 2014

I've been enjoying the short "Futures" science fiction stories in Nature since they started. They tend to be concise, punchy, taut, and perfectly suited for the science-savvy audience. The most recent entry, "How Kameron Layas rode out the crash", is an ideal example. It breezes through some post-cyberpunk hallmarks without getting bogged down in anything but the most critical details, yet hints at a fully-fleshed periphery. It might even make a nice novella.

The author is Rahul Kanakia. The guy studied economics at Stanford but now writes SF. It's a natural progression but not a common one. He's also just a tad older than I am, a surefire trigger for the "what are you doing with your life when this guy has already written a bunch of novels" effect. This is alleviated somewhat by his great blog. It's very analytical and unpretentious. If I ever decide to give up science for science fiction, I'll have to follow some of his examples. He's not big on advice, though.

Wednesday, April 09, 2014

This is the Donnelly Centre for Cellular and Biomolecular Research at the University of Toronto.

That's a nice-looking building. It was built in 2006 and really captures a particularly minimalist glass-and-color-burst aesthetic (the building reminds me of a larger, more technical Pompidou Center at least, and that building has been around since 1977). I'm curious about how energy-efficient all that glass is.

I think I just like labs with plenty of window space. It may be that the natural light cycles provide a constant reminder of the power of omnipresent natural processes. Or, it may be symbolic of the open exchange and transparent review characteristic of an ideal scientific process. The colors are nice, too. They're always a great way to find your way around a building and orient yourself.* Little details like that can really make the process of science much more enjoyable and efficient.


*In the absence of colorblindness, of course. Perhaps each area of a building could have a corresponding color and pattern.

Tuesday, April 08, 2014

I felt very caffeinated today. For me, the line between "fairly awake" and "very caffeine-stimulated" mostly appears to be a function of when I drink coffee rather than of how much I drink. Late afternoon seems like the time when caffeine is most effective (or maybe a little too effective, depending upon the level of focus the day requires). A few recent papers and at least one science blogger explain this effect in terms of cortisol levels and the circadian rhythm. See also: the concept of the zeitgeber.

I'm also reducing the level of anonymity for this blog, as you may have already noticed. This trend will continue. The person writing this is the same person who does everything else that person does every day.

Monday, April 07, 2014

Distant roses and yoga poses

Today I read about Sarcopoterium spinosum, a species of flowering plant common to the Mediterranean and Middle East. It's not an especially interesting plant - at least not to me. I'm not entirely certain why I focused on it. Perhaps it was the idea that this plant is quite common in some areas yet poorly studied. Some folks seem to think it's a potential source for an antidiabetic agent (but that's in the Journal of Ethnopharmacology; I neither have access to this journal nor can I read the linked abstract without raising an eyebrow to the point of strain).

Unrelated: I've been trying to take moments to be present lately, i.e., stopping to breathe and focus. I'm skeptical of most things which would look ideal on pamphlets for yoga classes but hey, meditating is really nice, if much-maligned. Most notably, it pays to notice what's going on around you or perhaps even because of you. It's nice to notice when you're happy, too. Somehow that often appears to be treated as a separate idea (noticing you're happy, that is).

Dang, that guy is quotable.

Friday, April 04, 2014

I was trying to batch-rotate a whole bunch of photos today in Ubuntu 12.04 (it's 2014; maybe it's time to try out Trusty Tahr) but my usual solutions just weren't cutting it. Shotwell doesn't even have batch options, as far as I can tell. I've used digiKam before for batch editing but it doesn't even appear to import files anymore. Luckily, I found the Nautilus Image Converter. It can rotate and resize images right from the file manager, just like in, um, most recent versions of Windows. Anyway, problem solved.
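
If the GUI tools ever let me down again, a few lines of Python with the PIL/Pillow imaging library would do the same job. This is just a sketch, not what I actually ran; the folder path and rotation angle are placeholders:

import glob
from PIL import Image  # Python Imaging Library (the Pillow fork works the same way)

def rotate_all(folder, degrees=90):
    # Rotate every JPEG in the folder by `degrees`, overwriting the originals.
    for path in glob.glob(folder + "/*.jpg"):
        img = Image.open(path)
        # expand=True keeps the whole frame after rotation instead of cropping the corners
        img.rotate(degrees, expand=True).save(path)

rotate_all("/home/me/Pictures/to_rotate")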

Thursday, April 03, 2014

Music for today is Solyaris - Achieving Sky True through Vertigo Vortexes.

Anonymity, the Web, and whether people still use the term "the Web" in conversation

My Wikipedia username used to be a fiction. I changed it recently: it was time to cease being anonymous in that forum.* I didn't do it because of other Wikipedia users. If anything, I've always been a bit worried about the prospect that today's editor, blindingly furious with me for my edits to their favorite page but completely incapable of actually communicating with anyone about it, will be tomorrow's supervisor. Academics can be tricky that way. By that same token, however, I changed my Wikipedia name so that people begin to see my name attached to their favorite subjects.

I spend a fair amount of time writing about microbiology, whether it's online or for some kind of eventual peer-reviewed research**. The chances still seem pretty slim that academics will ever take Wikipedia seriously (and that's fine! It's just an encyclopedia, right?). Many of these researchers are unfortunately intent on citing Wikipedia in their articles. It's still pretty neat to think that the first words someone may read after googling the name of some bacterial species may be mine. Er, partially mine.

Maybe that whole shared-content aspect is the real stumbling block. Academics love the peer review process because it lets People Who Know What They're Doing evaluate People Who Are Doing Something Similar Yet Different. The process works just fine when it's partially anonymous; most people are kind in their everyday interactions but have a license to be critical when their identity is hidden. Pseudo-anonymity doesn't provide the same kind of protective shroud. It's really more of a dissociation effect in which a distinct aspect of one's personality is brought to the forefront.

I'm certainly not an expert in the field of anonymity. It doesn't matter because this is a place where I can write about things I may know very little about. I can do that more easily because my Real Name doesn't have to be responsible for it.  


*Not this one, though! If you're reading this, you already know who I am. It's the internet equivalent of wearing a false mustache.***

**That's all online now, too. I've done much more writing under a pseudonym than under my Real Name if you don't count classwork.

***The meatspace equivalent of wearing a false mustache on the internet is probably something like "breathing" or "eating lunch" or "sinking into a creative morass."

Wednesday, April 02, 2014

Today's paper is: 

The shortest-possible summary: Mycobacteria like M. tuberculosis have cell walls but they have nice, thick lipid membranes, too. All these layers add up to protection from antimicrobial compounds: the bacteria won't get killed if antimicrobials can't even get inside them.

Tuesday, April 01, 2014

Today, I read the following paper: 

This one is really just about a fairly simple method to assay protein-protein interactions. It's an advertisement: the method was developed commercially by ChromoTek GmbH and is sold as a kit. The basic idea is that proteins are expressed in Baby Hamster Kidney* cells as GFP and RFP fusions. The GFP-fusion baits also have LacI, so they'll end up bound to a lac operator array in the nucleus no matter what happens. The RFP-fusion preys should just float around the cell unless they interact with the baits, at which point they'll also be present at the same spot as the GFP. The process is reversible, so an interaction can be observed in real-time as it is disrupted, e.g., by some variety of specific protein-binding inhibitor.

I probably won't use this method much myself, but it's quite simple and looks like it could be convenient for screening potentially therapeutic small molecules or peptides. It would be nice to see a paper published about the method's applications by groups other than ChromoTek employees.


*There's something about the word "baby" that makes it difficult to take seriously. This is an established cell line and has been in use since the 60's.

Monday, March 31, 2014

Today, I learned not to mistake Pyrex tubes for polycarbonate ones when using them in a centrifuge spinning at 12,000 x g. The polycarbonate ones are designed to not leak under these kinds of conditions. The Pyrex ones leak everywhere, in the way that liquid-containing vessels do when they disintegrate.

Within the category of "internet things I will be inordinately distracted by but that will probably give hypochondriacs a case of the skin-crawlies": ProMed Health Map. I didn't realize it even existed until recently. It's a list of recent disease outbreaks plus a few poisonings and such. It's fun.

Friday, March 28, 2014

The unbearable brightness

Here are two quick, strange observations about colored light and biology:

1. A recent study in PNAS claims that exposure to orange light could have an impact on cognitive function. The MRI results certainly appear significant but I'm not familiar enough with the field to know if they're reliable. Caveats: Their sample size was 16 people, the effects of the particular light were noticeable more than an hour post-exposure, and potential participant effects (e.g., how sleepy they felt) were all self-reported, though the researchers state these reports were consistent. I'm curious to see if the results can be replicated with a different sample of volunteers. (The paper by Chellappa et al. is here.)

2. C. elegans glows blue when it dies. Don't take my word for it. Take the word of Coburn et al. from their 2013 PLoS Biology paper, Anthranilate Fluorescence Marks a Calcium-Propagated Necrotic Wave That Promotes Organismal Death in C. elegans: "We report that organismal death is accompanied by a burst of intense blue fluorescence, generated within intestinal cells by the necrotic cell death pathway." It turns out that, at least in C. elegans, organismal death looks like a wave of necrosis as a cascade of self-destruction propagates cell death. The short story: cells burst, pH increases, things that wouldn't normally be fluorescent suddenly are. The death-glow may have been observed in yeast, too (Coburn et al. cite this 2007 paper by Liang et al., but I couldn't find any explicit mention of blue fluorescence in it, just yellow and red).

Thursday, March 27, 2014

Gen-eralizations

I'm really not fond of the term "Millennial." I didn't like the terms "Gen Y" or "Gen X" or even the whole "baby boomer" classification for the same reason: all these terms are shorthand ways to generalize entire generations of people.

It's possible that the terms were never originally intended to be pejorative. Having a shorthand way to refer to people born during certain periods of history not only places them in time, but also allows us to infer some cultural context. If someone's identified as a baby boomer, their childhood likely didn't include Super Soakers or AOL chat rooms. They're statistically likely to have different opinions from more recently-born folks about a number of topics. We can, of course, raise similar arguments about different racial or ethnic groups. There are objective differences in cultural context between groups, even when we ignore the obvious physical differences (e.g., baby boomers are older than Millennials, just like Native Americans have different skin tones than people with recent European ancestors).

The problems arise when the generalizations become interpretations. Any generalization of human beings is inherently dehumanizing. It's an unavoidable element of living in a modern society, though serious discussion of the relationship between personal identity and broader ethical concerns only really began in the mid-1600s. That's really a distinct topic -- even so, it illustrates the difficulties involved in reconciling individual identity with the identities of the Many. We can draw conclusions about what one person says or does and what groups of people generally say and do, but using the group as a model for the individual is nigh-impossible when there is high variance among group members.

Really, I'm just tired of glossy stories about Millennials. I'm tired of TIME Magazine's offerings, even when they're chock-full of Joel Stein's usual everyman sarcasm. I'm tired of Atlantic pieces about how self-centered these Millennials are supposed to be (spoiler: youth is a great time for narcissism).* I'm tired of Slate articles which may be unintentional satire, as this quote may reveal:
A generation ago, my college peers and I would buy a pint of ice cream and down a shot of peach schnapps (or two) to process a breakup. Now some college students feel suicidal after the breakup of a four-month relationship. Either ice cream no longer has the same magical healing properties, or the ability to address hardships is lacking in many members of this generation.
Isn't hindsight wonderful? I'd love to be able to labor under the pretense that This Generation Is Broken, but I suspect that people in general just can't handle hardships well. At best, they can imagine a fantastical, pain-free past.

I don't even identify as a Millennial. Why would I? It's certainly not going to do me any favors.

*I won't even get started on the monthly NYT articles about That Thing Gen Y Does. A slightly more acerbic Slate article than the above one may have to suffice.

Monday, March 24, 2014

The most minimal bacteria

Today I learned about the existence of the bacteriome, a specialized organ found in some insect species which is just chock-full of endosymbiotic bacteria. Most animals play host to bacteria, but the critical part here is the endosymbiotic nature: these symbionts must live and reproduce within host cells. As a result, many insect endosymbionts are quite odd in genetic terms and have tiny genomes. They can only grow to a certain population size, too, as they're limited by the space available within those host cells.

One such example of the resulting genetic oddities is found in Hodgkinia symbionts from cicada bacteriomes. A report by McCutcheon et al. in 2009 showed how Hodgkinia cicadicola appears to have re-coded its UGA codons to code for tryptophan rather than the usual stop codon. This specific re-coding has been observed before, but only in very low-GC-content species, of which Hodgkinia is not one (its GC content is more than 58 percent). This symbiont also has a crazy-small genome at 144 kb. That was the smallest bacterial genome yet sequenced, but Nasuia deltocephalinicola, another insect endosymbiont, has it beat by 22 kb.

Sunday, March 23, 2014

Self-aware weddings

I first read David Marusek's The Wedding Album in 1999 in an issue of Asimov's Science Fiction. As the title implies, Marusek's novella concerns weddings and marriage, two concepts as distant to me fifteen years ago as the advent of single-serving coffee pods (I would have scoffed at the idea at the time, anyway. They're awfully wasteful, right?). It's now the future and I've had a wedding photographer booked for months. We also have the ability to read entire e-books of compiled post-cyberpunk* without digging through musty old issues of Asimov's.**

The Wedding Album begins with the premise that people don't just save memories in photographs and video: they map them at the atomic level and use those maps to create virtual constructs of memorable moments on a whim. It's like a holodeck, but for notable yet trivial times like graduation ceremonies and birthdays. The killer app*** for this technology is its ability to create near-perfect AI constructs of any mapped humans. The AIs, despite their inherent humanity, are treated worse than slaves: they're as disposable as email and their re-creation could be moments away. Problems arise when folks finally have to confront the inevitable questions about whether AIs are really self-aware and whether that makes them human. Unlike in many stories, political and logistic issues also have to be addressed. Who gets to decide who's self-aware and who's just an organized bunch of electrons? If self-aware beings have the right to The Pursuit Of Happiness, where do they get to pursue it?

It's possible that The Wedding Album, due to its broad scope and refusal to adhere to any single post-singularity viewpoint, exudes a nearly pure cyberpunk ethos. Cyberpunk was never really about streetwise hackers and all-neon-everything any more than Tolkienesque fantasy is about forest-wise elves and rune-engraved weaponry. Cyberpunk was always about the transformative power of technology. Sometimes, it's even about how technology transforms itself (a purely metaphorical situation, at least until self-aware AIs pop up). Post-cyberpunk is cyberpunk for an age when we're both painfully aware and blissfully ignorant of technology's multitudinous effects on humanity. To that extent, The Wedding Album also sits squarely within post-cyberpunk. We may not have the ability to make perfect AI copies of ourselves yet, but we certainly have online identities and they are better reflections of ourselves than any physical mirror.

William Gibson, widely considered one of cyberpunk's Founding Fathers and credited with the term "cyberspace", began setting his works in increasingly imminent futures more than a decade ago. His novels always had a near-future atmosphere but didn't age well as the prefix cyber- grew tired. In a 2012 interview, Gibson described it as feeling "...haunted by a feeling that the world itself was so weird and so rich in cognitive dissonance, for me, that I had lost the capacity to measure just how weird it was." Speculation can be a gamble, but works like The Wedding Album may be just strange enough to show us how weird we've always been.****

*Rewired: The Post-Cyberpunk Anthology, edited by James Patrick Kelly and John Kessel. Kelly's quite-literally-fantastic Wildlife has always stuck with me as one of the most resonant and touching bits of transhumanist science fiction of the last few decades. He suggests in Rewired that post-cyberpunk (or in his potentially-misleading acronym, PCP) is the natural evolution of a genre so cutting-edge it ended up sliced to ribbons.

**All my issues were recycled ages ago. They were probably musty from the beginning.

***I think the term "killer app" predates adoption of the term "app" (that is, when used to refer to any piece of software) so perhaps the true "killer app" in the information age is the ongoing concept of discrete pieces of software in an era increasingly dependent upon interdependent code, APIs, and so on. I mean, my TV has Apps. It really doesn't need them but I suppose there's an intended illusion of control that way.

****That's not to say that it's anywhere near the strangest fiction you'll ever read, or even that it's genuinely odd. It's just weird enough to provide some tantalizing reflections of What Happens Now.

Wednesday, March 19, 2014

This MIT Tech Review article about face verification is interesting, if only because face comparisons are one of those things that people are supposed to be ideally suited for. I'm not sure that the Facebook project presented in the article is really as good as they claim (that's a function of the training data, of course, and may not be much like real world face-matching conditions since it's, um, a Book of Faces) but it sounds like a decent start.

I'm back from Texas! Here are a few photos.
This is part of the Cibolo Creek. It's not usually this low.

The bathrooms at the Little Gretel Restaurant. It's ostensibly Czech-themed but you wouldn't know it from the facilities.

Part of the San Antonio Botanical Garden or a Park With Dinosaurs. Exact-ish location: 29.458046, -98.457146.

Can you find the cat in this lumber yard?

Friday, March 14, 2014

I'm in Texas at the moment, visiting my parents. It's not quite as warm down here as I would have liked but it's nice and sunny and I'm getting caught up on my reading. There are also many antiques (see below: an 80-dollar cookie jar in feline form).

The cookies. Place them within me. My emptiness will preserve them.

Some of what I've been reading is rehashed cyberpunk SF. It's an interesting subgenre for its supposed outcast status and heavy reliance on particular ideas and aesthetics, almost like the noir detective stories of speculative fiction, and for its stubborn refusal to admit that it may be the most prescient subgenre of fiction in general. (Except for the whole VR thing. Oculus Rift notwithstanding, virtual reality just feels like a perpetually obsolete concept, though that may be lingering cyberpunk influence again. Big black goggles can make anyone look like an amateur datajack.) Further thoughts forthcoming.

Monday, March 10, 2014

Here are three bioinformatics tools for protein structure prediction, presented without context or comparison and primarily so I don't forget they exist:

  1. HHpred. Handy, fast, and the output is data-rich. 
  2. RaptorX. Will do structure alignments and binding predictions, too. Servers may be sluggish sometimes.
  3. I-TASSER. They've added some neat functional prediction modules (primarily based on existing structures and GO terms, I think) since I've last used the server. 
Today's paper was:

It's a quick review of sRNAs in H. pylori, a species once thought to use very little RNA-based regulation (AKA riboregulation). The presumed lack was based on observations that H. pylori lacks the RNA-binding protein Hfq, generally thought to be a requirement for riboregulation. Turns out that's not the case. H. pylori may regulate plenty of cellular activities using sRNA, including the stress response and flagellum biogenesis.


Saturday, March 08, 2014

A chamber, made

So, Antichamber. I finished it recently. Spoilers may follow. It's not a monumental challenge, but I'm fairly sure it's not intended to be one. It also hasn't lost much of its spark for me despite an original release date of more than a year ago. Games get spoiled so easily these days. Portal, this game's obvious inspiration in both design and technical matters (despite not sharing the same engine, but that's just a technicality in the grand scheme of things), certainly didn't have such a stable shelf life. That may just be due to cake and lies.

Yes, Antichamber. I won't describe the details and I'll assume you've either tried it out already, are planning to play it, or will never do so. It's a minimal game with an ambient background. The visuals are predominantly white or a few colors at a time. The soundtrack is a collection of chirping birds, rushing wind, and distant thunder. The goals are straightforward: go where you haven't been, open closed doors, collect new equipment and learn all the neat things you can do with it. Every stage is summarized in a tidy aphorism, the presumably obvious lesson gently restated in a single sentence and a napkin-sketch cartoon.

Antichamber is ostensibly about life. That's an ambitious goal and probably a bit too much for any one creative work to concern itself with. Instead, I think it provides an excellent example of minimal gaming. It's certainly not minimalist gaming, though that's a popular and valuable trend in and of itself.

Rather, Antichamber deconstructs modern gaming into some of its most dominant elements: confusion, movement, and surrealism. All three of these elements are particularly well suited to gaming as they benefit from interaction. A film can confuse us, make us feel like we're moving, or even present impossible images (indeed, every film is made of impossible images, or it would be theater). Only an interactive experience can make us feel like the world isn't responding to us in the way it's supposed to. It's also the only way to simulate control over one's own movement and to do it in a way that's realistic enough to be convincing and responsive but unreal enough to retain a patina of fantasy and unlimited possibilities.

The paths in Antichamber are circuitous but eventually require pursuit of a floating, dark, ghostly mass. The mass sounds like strangers congregating in an art museum lobby* and looks like a glitch.** By the end of the game, it's fully within the player's control, ready for release until just the last few moments. Before that point, the ghost does many of the same things the player does as far as opening doors and wandering around an illogical maze goes. We could almost recursively view this black mass as the game's player or even just the concept of a player. Players make games what they are, but they'll all have different experiences in the process.



*Or, um, maybe an antechamber. Anterooms are pretty critical to congregating and to the hanging of coats, though neither happens in Antichamber.

**It's kind of like this recent art project.

Friday, March 07, 2014

A brief, hopeful rant about Powerpoint

The "tools of the trade" are little more than iconography for most trades. In science, no matter the field, one of those tools is usually Powerpoint. It doesn't look as good on a flag as a hammer or sickle or even a cutlass* but it's still the reliable sidearm of any scientist with conclusions to present and an audience to hear about them. Despite -- or perhaps because of -- its popularity, Powerpoint is widely abused. It's less of a tool and more of a drug. Many scientists just can't get through a presentation without loading up a stack of slides and letting 'em fly.

Let's break the cycle, folks. Some physicists are abandoning the slides in favor of whiteboards and direct conversation. Even better, we can avoid the bullet-point habit entirely and adapt to specific audiences. Presentations should be about expanding upon ideas, not glossing them over! I can't claim to be a presenting expert and certainly haven't had a long career. Even so, I can hope for numerous chances to present my research outside the bounds of Microsoft's glowing rectangles.


*I once saw a version of the University of Virginia logo specially prepared for its Department of Biology. The sabres had been replaced by micropipettes, the actual tools of the molecular biology trade.

Thursday, March 06, 2014

It's tough to stay focused. There's always something requiring more attention than it's ever going to get, and in the odd times when it gets nearly the required amount, it's probably due to the sacrifice of some limited resource. Time is a limiting factor, certainly. Motivation is another one. If the end result of a project looks dim and distant, I can only imagine how it will look when it's within reach.

I suppose there's a lot to be said for having a concrete idea of presumed context. I've often found this to be an irritant in fiction: the author provides just enough detail for me to get a good mental image of the characters and their context, but just when it's relevant, new details emerge. This isn't along the lines of "she saw the reflection of her blue eyes in the pooling rainwater" when all along I had thought the character had brown eyes. It's more like having the rainwater appear in the first place when I had understood the weather to be clear. Is this the fault of my own assumptions, or can I reasonably expect authors to provide me with enough detail to minimize assumptions?

The same issue comes up when writing scientific manuscripts. That's what I'm doing now. It's a collaborative job, so at any point a fellow author may add or remove details and data. The collaboration is helpful when everyone works toward the same goal, but that goal often remains blurry, a granite crag looming in the distant hills.

Wednesday, March 05, 2014

Dysfunctional uncles and luminous oncology

I've been trying to remember a particular film all afternoon and was totally stumped about it. I knew it was from the 90's and involved someone dying of cancer, but beyond that, all I could remember was finding it inordinately sad -- a real heartstring-gripping jerker-of-tears. I remembered that and the keywords "Jewish packrat uncle." Those three words were enough for Google to identify the film: 1995's Unstrung Heroes. This was a validating result as I had already tried searching for "Unsung Heroes", though this is actually a North Korean spy film series and an unrelated short mockumentary about superheroes.

Andie MacDowell looks awfully cheery on that movie cover for somebody dying of cancer. Franz Lidz, whose memoir was the distant base material for Unstrung Heroes, later wrote about the subject, finding that "...the terminal sappiness of cancer movies is probably beyond remedy."

Monday, March 03, 2014

I had a weekend full of mind-flexing! It's enough for a whole bunch of meditative blog-postery so that is what it will be. Just not yet.

In short, though: the lady's university had a departmental conference which she had helped to organize. I presented a general overview of my work at said conference; this is especially notable as I am a student of the Life Sciences and this was a conference on German language and literature. It was, however, interdisciplinary by design. It succeeded quite well and I have some fresh ideas to think about. There was an especially nice talk about repetition in music. See the following example, Steve Reich's It's Gonna Rain:

I've also been trying out a few artsy games: Antichamber and Starseed Pilgrim. They're elderly in internet years but young n' fresh to me. Thoughts later.

Wednesday, February 26, 2014

I don't normally feel unnerved about the omnipresent social-networking morass persistently enveloping the modern internet. It's clearly a social phenomenon and an admittedly interesting one at that. Even so, I can't help but feel a tad creeped out when, say, I edit a company's new Wikipedia page and said company's founder looks up my Linkedin profile a few hours later. He doesn't even leave any snide comments or anything; he just looks at it, a feature seemingly unique to that particular network. It's not easy to find my LI profile from my WP one without some Internet Detective work so it really just comes down to a big "why?".

Why do I care who feels like being strange on, god forbid, the internet?
Why do they go and do it anyway?

(Of course, the Wikipedia Internet Detective also felt like looking up one of my email addresses and sending me a passive-aggressive nastygram. Maybe a better question is "why do people get offended so quickly on the internet?")

Here's my glitch for the day.



Monday, February 24, 2014

We talked about the Peak-end rule in the development class this evening. It's an interesting heuristic, if perhaps an oversimplification. We discussed it in the context of giving presentations but I'm curious about how it applies to other presentation-like experiences or even whether a group of people tends to define the "peak" of a shared experience in an identical manner.

Thar's gels in them thar hills

Text mining is a rather unkind metaphor. In the life sciences, it refers to how we can sort through all the published data out there and extract broad conclusions from the aggregate. This does make the rhetorical assumption that most of that text is some kind of stone to be blasted away until the truth emerges, gleaming in the sun.

In practice, things are never that simple, of course. I think I've mentioned here before how some of the major science publishers have only just begun opening their archives to text mining. There's also the issue of images: current software can usually handle text without serious issues but extracting meaningful data from figures is conceptually problematic. Scientists just aren't consistent when it comes to presenting their data. That's a good thing, really, as they often have to focus on different aspects of their results; nature (or Nature, for that matter) just isn't always as consistent as we'd like it to be.

A paper posted to arXiv recently by Kuhn et al. proposes one strategy for extracting data from images in scientific papers. They focused on gel images. These kinds of figures are great because they're generally just photos with sets of horizontal bands in them. The placement of the bands indicates the relative size of the molecules in them, so as long as a size standard is present, there's one bit of easy data already. The tricky part is knowing what's actually present on the gel. Even when we know that, the gel images are often too data-rich to tease apart every apparent result. As the authors say, "...the text rarely mentions all these relations in an explicit way, and the image is therefore the only accessible source."

These folks used a straightforward approach to break down gels into usable data - check out the paper for details. They're fond of the optical character recognition in MS Office 2003 for handling text. The gel segment recognition needed a machine learning-based approach and random-forest classifiers. Assigning relations to those gel images is much trickier so the authors had to use the ol' Human Touch in their code.
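
For a rough sense of what that random-forest step looks like in practice, here's a toy scikit-learn sketch. It is not the authors' pipeline; the features (region width, height, aspect ratio, mean intensity) and the training labels are entirely made up:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Pretend each candidate image region has already been reduced to four numbers:
# [width, height, aspect ratio, mean pixel intensity]; label 1 = gel band, 0 = not.
rng = np.random.RandomState(0)
X_train = rng.rand(200, 4)
y_train = (X_train[:, 3] < 0.5).astype(int)  # a fake rule standing in for real annotations

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# New, unlabeled regions then get a band / not-band call plus a probability.
X_new = rng.rand(5, 4)
print(clf.predict(X_new))
print(clf.predict_proba(X_new))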

So how well does it work? Not terribly well yet, partly because it's still incomplete. The issue of inconsistent labeling remains; the authors' approach works passably as long as figures are neatly labeled with gene or protein names. This kind of approach and others like it may eventually mean authors have to consider image mining when designing figures. They could, of course, just have fun with the image mining and write out all their labels by hand.

Friday, February 21, 2014

Diners, drive-ins, and drift

A Guy and his favorite condiment.

I recently wrote a silly little Python script which would randomly generate names of the kinds of meals one would only see in Guy Fieri's vivid, flame-scorched culinary imaginationscape. I'm not mocking the guy per se. He's clearly working with a winning formula even if it does take the lowest-common-denominator and divide it by zero. That is to say, the results fit expected parameters but aren't terribly rational.

But yes, the script. Here's some sample output:
Then we can start with the
Guy-talian Slammin' Bad Boy
Southern Honey Dijon Dumplings
Green Pepper Sashimi Kabobs
Pesto Baltimore Veggie Roast
Southern Johnny Garlic's Bourguignon
Bourbon Pesto Shrimp
and the
Lime Pepperoni Mashed Potatoes
then we can finish up with the
Queso Smothered Tostado Po'Boy

All the starting adjectives and nouns (that's where the meat is, this isn't any fancy code) are from Guy's own recipes, the menu at his restaurant, Applebee's, Hooters, or a competitor of these. They generally sound palatable with the possible exception of those mashed potatoes up there. That's not really the fun part, though. The fun part is that I wrote this for Python 3 though it still runs in Python 2. The output gets a bit garbled:
Then we can start with the
 Bakea  Sriracha
 Baked Potatoin'
 Kabobs Crazy
 Friestalarmesan-Crusted
 Bourguignonan
 Rings Pepperlo
and the
 HalibutSix-Cheese
then we can finish up with the
 Frittersanch

What we end up with is the logical extension of maximally-flavored American comfort-food: Guy's Impressionist Cuisine.  
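
For anyone curious about the shape of the thing, here's a minimal sketch of that kind of generator. It isn't the original script; the word lists are abbreviated stand-ins pulled from the output above and the function name is invented:

import random

ADJECTIVES = ["Slammin'", "Smothered", "Guy-talian", "Bourbon", "Pesto"]
STAPLES = ["Po'Boy", "Kabobs", "Shrimp", "Mashed Potatoes", "Bourguignon"]

def meal_name():
    # Glue one or two random adjectives onto a random staple.
    parts = random.sample(ADJECTIVES, random.randint(1, 2)) + [random.choice(STAPLES)]
    return " ".join(parts)

for _ in range(3):
    print(meal_name())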

The artist for today is Kate MccGwire. She makes feathery sculptures which look like they either want to eat you or want you to eat them.

Wednesday, February 19, 2014

Music for Programming

This mix series is wonderful. Ingredients, as per their aesthetic description:
Drones
Noise
Fuzz
Field recordings
Vagueness (Hypnagogia)
Textures without rhythm
Minor complex chords
Early music (Baroque, lute, harpsichord)
Very few drums or vocals
Synth arpeggios
Awesome
Walls of reverb

It's pretty decent background sound for programming, yes, but it's even better for an evening walk through the noisome late-February air.

Very Much Like Me


The unrelated background music for today.

Looks like it's time for another personal assessment. This is one of a series of questionnaires and inventories prepared as part of a program called Authentic Happiness. Yes, that's a name with some tall claims. The program is courtesy of the Positive Psychology Center at the University of Pennsylvania so it may have some decent, scientifically-sound psychology chops.* It's also accessible without any monetary cost.

I took the VIA Survey of Character Strengths. (I think it's a longer version of the VIA Me! survey, which I have taken previously.) It's a series of 240 statements, evaluated in much the same way as the previous strengths surveys I've described (that is, a 5-point scale from Very Much Like Me to Very Much Unlike Me). Some of the statements feel rather transparent (e.g., "I am a spiritual person.") as it's clear how they translate to results (if you say you're spiritual, the survey won't disagree). This may be by design; perhaps the best indicator of a spiritual person is that they consider themselves spiritual.

Here are my results:
Your Top Strength
Creativity, ingenuity, and originality
Thinking of new ways to do things is a crucial part of who you are. You are never content with doing something the conventional way if a better way is possible.
Your Second Strength
Judgment, critical thinking, and open-mindedness
Thinking things through and examining them from all sides are important aspects of who you are. You do not jump to conclusions, and you rely only on solid evidence to make your decisions. You are able to change your mind.
Your Third Strength
Appreciation of beauty and excellence
You notice and appreciate beauty, excellence, and/or skilled performance in all domains of life, from nature to art to mathematics to science to everyday experience.
Your Fourth Strength
Humor and playfulness
You like to laugh and tease. Bringing smiles to other people is important to you. You try to see the light side of all situations.
Your Fifth Strength
Curiosity and interest in the world
You are curious about everything. You are always asking questions, and you find all subjects and topics fascinating. You like exploration and discovery.

So, not too different from the short version of the inventory. As part of my personal development class, we've been assigned the task of surveying other folks about what strengths they think we display. I will have to see if there is a correlation.

*I tend to take just about any psychological theory or finding with a Volvo-sized grain of salt. To be fair, I don't have any formal education or experience with the actual study of psychology. My blanket judgement is largely unfounded. I've just read enough poorly-supported, statistically-unsound psychology claims to be wary.

Tuesday, February 18, 2014

Here's a dumb GNOME Classic interface issue that I always seem to run into: desktop elements (e.g., launcher shortcuts) aren't movable or removable by the mouse alone.* Right-clicking them brings up context menus but some options remain hidden. Holding the Super key + Alt and then right-clicking makes those options show up. This forum thread was helpful.

*This may just be a change since Ubuntu 12.04. I'm just glad to have an interface other than Unity available.

Monday, February 17, 2014

Fields of gold

The only truly wasted parts of my day are those spent on hold, waiting to speak to customer service. You'd think, at the very least, that there would be numerous alternatives to this phone-queuing. You would be absolutely right because you are aware of the internet. You may have even sent an Instant Message at some point! Barring any of that potential experience, though, I'm sure there are better phone-based alternatives. Could I just have a call back when a human is willing and able to speak to me rather than having to wait while the entire Sting catalog floats through my senses at 8 kHz?

On a directly related note, I learned today that some Cisco call-processing software had some pretty nice default hold music. It got a mention on This American Life a few weeks back. I didn't hear that episode but the little tune is preferable to, er, just about anything else. It makes me nostalgic for all that great mid-90's tracker music. Even kinda terrible MODs would be just fine.

Saturday, February 15, 2014

Nothing but flow-ers

I had a fair amount of trouble getting into anything like a flow state this week. It's not normally so difficult: I tend to stay focused on anything repetitive or anything involving researching a topic. The former situation isn't really ideal as I don't actually like those kinds of tasks. I just have to treat them as a kind of meditative exercise to even want to do them at all. The second situation is more enjoyable. It's really nice to sit back and query the total of human knowledge about a subject, whether it's the function of a particular protein complex or the discovery of phosphorus. This is the kind of flow state I can get into at work and at home.
Unfortunately, I find it very easy to get knocked out of a flow state. The most trivial email can do it in seconds. This week presented all manner of distractions and the inclement weather became just one of them. 

Friday, February 14, 2014

Messages in the ether

Just finished up two things yesterday: Greg Bear's Hull Zero Three and my Twitter account. The former took much less time; it's a quick read in a genre of science fiction known for its long-winded exposition and overwhelmingly massive scales. The novel benefits from a brisk pace despite leaving its plot obscured until the last quarter.* The Twitter account, however, was hampered by the network's breakneck pace. I never really felt compelled to post anything but the most sarcastic thought morsels there, at least partially because those were the kinds of tweets I actually found funny. The rest was largely dispensable.

Twitter just doesn't seem like an efficient way to convey any kind of idea except for the most urgent or expendable ones.


*I give it 3 out of 4 possible Oort clouds. I'm not going to explain the plot or setting or even note when it was written. I just didn't find it that notable.

Wednesday, February 12, 2014

A drug for when myosin starts actin up

Here's the slightly overstated science headline of the day: "Protein folding: Bringing dead proteins back to life". This is actually an overview of a larger scientific paper in one of my favorite newer life science publications, eLife.

The hyperbole isn't entirely undeserved as this is some pretty neat stuff. Myosin, famous for its role in moving things about inside cells, appears to gain efficiency and stability when a small molecule called EMD 57033 is around. This small molecule isn't a new discovery and has been studied for at least twenty years. What's new is its potential role as a pharmacological chaperone. Cells produce classes of proteins known as chaperones to keep proteins from misbehaving, especially when these proteins are being assembled and often after stressful circumstances (e.g., heat shock). EMD 57033 isn't naturally occurring in cells but it may serve a similar function as a drug.

Tuesday, February 11, 2014

Transmutation


Urine luck.
Today's artwork is The Alchemist Discovering Phosphorus, also known as The Alchymist, in Search of the Philosopher's Stone, Discovers Phosphorus, and prays for the successful Conclusion of his operation, as was the custom of the Ancient Chymical Astrologers. It is by English painter Joseph Wright, also known as Wright of Derby. The painting may or may not be about the historical discovery of phosphorus, a process which involved the reduction of urine, a fluid rich in phosphates.

Monday, February 10, 2014

William Gibson Describes the Attractions at Colonial Williamsburg.

The "Great Authors" bits at Something Awful are easily some of the best content-morsels to ever emerge from the site. The shtick itself is very McSweeney's but it really toes that line between parody and detachment. (All of this is a good, enjoyable thing.)

I'm actually rather surprised that Something Awful keeps on truckin' otherwise.

Friday, February 07, 2014

Today I learned about laser guide stars.

I don't think I misjudged my planned career path and I don't think I'd ever want to be an astronomer. That being said, space is neat!

Thursday, February 06, 2014

Social animals

This editorial in Nature makes some very definitive claims about what should and shouldn't be done about addiction research. That isn't really my field - I don't work with any organism capable of addiction as we understand it - but the steadfast stance is a worrisome one. Without quoting anything, here's the TL;DR: "Animal-rights protesters claim addiction is a social problem and use that to support how we shouldn't use animals in addiction research. Scientists have shown addiction is a disease so we need to keep using animals in addiction research."

The stance is myopic at best and logically erratic at worst. The dichotomy of "disease vs. social problem" really doesn't do anything toward understanding or treating addiction. Neither does banning animal research, whether wholesale or specifically in the context of addiction research. Equating animal research directly with physical disease paradigms is little more than a knee-jerk reaction to the similarly over-reactive policies promoted by the animal-rights folks. There is unquestionably a social component to addiction, and it should not be ignored in favor of a hard-line stance on animal model use.

On a related note, this history of opioid addiction treatment is interesting.


Tuesday, February 04, 2014

Emotional Quotients

It's Tuesday, so that must mean it's time for another personal assessment! (This is not specifically due to Tuesday but rather because Tuesday is a discrete period of time.)

I'm taking the Talentsmart Emotional Intelligence Appraisal. It's billed as the "#1 measure of emotional intelligence (EQ)." You'd think that would mean something about how many emotional intelligence tests there are out there or even why people keep framing intelligence tests in the context of IQ, but there are in fact similar tests out there, including this one by a group at UC Berkeley and this one used by Yale. The Talentsmart test is intended to be bundled with the book Emotional Intelligence 2.0 by Bradberry and Greaves.* 

Judging by the introductory demographics questions, this test has been designed with fairly traditional career ladders in mind; one of the questions is "What type of score did you receive on your last job performance evaluation (performance review)?" The test itself is very short, including only 20 questions or so about how often you are able to do things like handle stress or read the mood of a room. I'm always suspicious whenever an assessment tries to draw conclusions based on limited data. In this case, the assessment appears designed to simply identify emotional or social areas and skillsets in which you may feel lacking. This suggests that the assessment may not be misleading, but it may not provide much insight, either.

The results confirmed my suspicions and are copied below. Scores are from 0 to 100, though anything below 59 is "a concern you must address." Scores in the 70s and 80s indicate areas which can be improved, while higher scores than that are definite strengths which should be employed often. I don't have to worry about that last category. My social competence is evidently lower than my personal competence, which is to be expected, but I'm having trouble gleaning any useful material from these scores since I know they're based on just a few questions. Self-reporting is bad enough as it is, but with this assessment they practically just asked "are you socially aware?" and then mirrored the provided answer.
 
Your Overall Emotional Intelligence Score: 75

Personal Competence: 80
The collective power of your self-awareness and self-management skills. It's how you use emotional intelligence in situations that are more about you privately.
Self-Awareness
75
Your ability to accurately perceive your emotions and stay aware of them as they happen. This includes keeping on top of how you tend to respond to specific situations and certain people.
Self-Management
85
Your ability to use awareness of your emotions to stay flexible and positively direct your behavior. This means managing your emotional reactions to all situations and people.
Social Competence: 69
The combination of your social awareness and relationship management skills. It's more about how you are with other people.
Social Awareness
67
Your ability to accurately pick up on emotions in other people and get what is really going on. This often means understanding what other people are thinking and feeling, even if you don't feel the same way.
Relationship Management
71
Your ability to use awareness of your emotions and the emotions of others to manage interactions successfully. Letting emotional awareness guide clear communication and effective handling of conflict.

Verdict: Little insight to be gained. I'll give the accompanying book a fair shake eventually.

*The course gave us a copy of this book. I've only read one page out of the middle of it so far. The page told me I should stop taking notes at meetings lest I miss out on the emotional states of others.  Still a bit split about that advice.

Monday, February 03, 2014

New toys!

The lab got one of these guys today. I'm excited to try it out and even more excited to have a piece of new lab equipment that actually works and isn't a fridge or a water bath.

Unrelated music for tonight is the following.