How to increase your serotonin levels without drugs.

Serotonin: a blood-borne ("serum") chemical that constricts blood vessels (increases their "tonus")

Serotonin is important for mood regulation. Low levels of serotonin have been associated with depression, irritability, and other badnesses. Many antidepressant drugs work by increasing serotonin levels in the brain, and many recreational drugs do the same (which often leads to the post-high comedown once the serotonin has been used up). The rationale, therefore, is that if you can keep your brain serotonin at a healthy level, you're less likely to suffer from depression.

In a paper from 2007, Simon Young summarises ways in which we can increase our serotonin levels without drugs. In brief, these are:

(References are in the Young 2007 paper below)

1) Be happy! OK, this is slightly circular, but the point is that doing things you enjoy, such as being social, may be a pretty direct way to boost serotonin. In addition, Young suggests that positive moods (and the related increase in serotonin production) can be induced, either through therapy or self-induction. As Young puts it, the relationship between mood and serotonin production may be two-way.

2) Look at Bright Lights. Light is a powerful controller of our natural daily hormone cycles, and it seems that it also has a big influence on serotonin production. This may explain why many suffer from Seasonal Affective Disorder (SAD) in the low-light winter months (less light = less serotonin). Therefore, try not to spend the morning daylight hours under a blanket in a dark room looking at your phone (like I do).

3) Exercise. There is some debate over whether exercise really induces the production of more serotonin in the brain, but even if it doesn't, exercise has been shown to improve mood (see point 1) in healthy people and in sufferers of mild depression. And your heart and waistline will thank you! Apparently exercising to exhaustion causes the biggest increase in serotonin (which would explain those self-righteous happy joggers).

4) Diet. Your body needs an amino acid called tryptophan to make serotonin. There is some evidence that eating purified tryptophan may increase serotonin levels in the brain, but it's not clear whether you can get the same benefit from naturally occurring sources of tryptophan. Tryptophan is not a very abundant amino acid, and it has to compete with other amino acids for access to the brain. Therefore, if you eat a high-protein diet (proteins are just chains of amino acids stuck together), you may actually be reducing the proportion of tryptophan in your blood. It's possible that eating foods with a high tryptophan/protein ratio might help, but I think it's fair to say the evidence is still inconclusive on this one. In any case, it probably doesn't hurt to make sure you are eating enough tryptophan, as diet is the only way your body can get it (which is why it is called an "essential" amino acid). There is a handy table on Wikipedia ranking foods by tryptophan/protein ratio (milk, sesame seeds, sunflower seeds, soybeans, spirulina and cheese come out on top).
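If you like playing with numbers, here's a tiny sketch of the ratio idea. The figures below are rough placeholders for illustration only (not nutritional data), but the calculation itself is just tryptophan divided by total protein:

```python
# Toy sketch: rank foods by tryptophan-to-protein ratio.
# NOTE: the numbers below are rough illustrative placeholders,
# not real nutritional data - look up proper values before relying on them.

foods = {
    # food: (tryptophan_g_per_100g, protein_g_per_100g)
    "milk (whole)":    (0.08, 3.3),
    "sesame seeds":    (0.37, 17.7),
    "sunflower seeds": (0.30, 21.0),
    "cheddar cheese":  (0.32, 25.0),
    "chicken breast":  (0.34, 31.0),
}

def tryptophan_ratio(trp_g, protein_g):
    """Return tryptophan as a fraction of total protein."""
    return trp_g / protein_g

ranked = sorted(foods.items(),
                key=lambda kv: tryptophan_ratio(*kv[1]),
                reverse=True)

for name, (trp, protein) in ranked:
    print(f"{name:18s} trp/protein = {tryptophan_ratio(trp, protein):.3f}")
```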

I'm off to eat cheese and stare at the sun.

Take care of your brain!

 



References:
Young 2007 paper: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2077351/#__ffn_sectitle

Finally: All you ever need to know about serotonin (and more): http://physrev.physiology.org/content/physrev/72/1/165.full.pdf

Including this gem:  "Serotonergic neurons are exceptional in their regenerative capacity in the adult brain." There is still hope! :)

Increased neural activity when waiting for a visual stimulus - what does it mean?

The following musing was inspired by watching Sabine Kastner's keynote at the OHBM conference 2015:

Brain regions within the visual processing streams have receptive fields: when a visual stimulus is presented in a region's preferred spatial location (its receptive field), that region increases its activity above baseline. Furthermore, when a subject is visually cued to attend to a given spatial location, and then made to wait until a target appears in that same location, these brain regions show elevated activity during the delay period, which gradually wanes back to baseline before being activated again when the target appears.

What function does this elevated activity serve? The task (waiting for the target) requires that the subject’s brain be primed to attend and respond to stimulation in that visual location. Presumably the elevated activity is the mechanism by which this priming process is achieved. But if that is the case, what happens after the elevated activity has waned back to baseline levels? Priming is still observable behaviourally, even though the elevated activity is gone. 

The first question should possibly be: how does elevated activity lead to priming of an RF location? Perhaps by keeping the receptive field neurons firing, the link between RF regions and higher-order brain regions can be maintained. It seems important to note that the purpose of priming a receptive field location is not simply to speed the feed-forward processing of the target, but perhaps primarily to orient higher-order cognitive capacities to that spatial location in anticipation of a future stimulus. It is probably fair to assume that the same higher-order regions communicate with multiple receptive field neurons, so that the same faculties can be directed to any part of the visual field. In this sense, the priming of receptive field neurons could be seen as a beacon: guiding top-down signals and processes to the currently most important spatial location. The elevated activity in RF regions could thus be the brain's way of keeping higher-order regions on point, and aimed at the correct spatial location, so that when the target is presented the system is already oriented towards that location.

If this ‘beacon’ property is indeed true, and achieved by repeated neural firing, what happens when the elevated activity wanes back to baseline? Again, the spatial location is still primed even after the activity has waned. It seems to me that a similar ‘beacon’ would still be necessary, however perhaps it is achieved through a different mechanism.

Perhaps the initial elevated firing serves two purposes: 1) to act as an immediate priming beacon for higher-order regions, and 2) to set up a longer-term priming beacon which doesn't require continuous (and metabolically costly) firing. Thus, the initial elevated firing might also trigger some form of rapid and transient long-term potentiation (LTP) between the RF and higher-order brain regions. If so, this LTP could allow a spatial RF location to remain primed even after the elevated activity has waned.

In other words, there may be two neural mechanisms at play. Immediately following the cue, repeated re-entrant activation may keep the gates open, either by keeping the neurons depolarised or by keeping the RF region in communication with higher-order regions. This is a costly but immediate form of priming. Meanwhile, the same re-entrant activity may also be setting up a transient form of LTP, lasting seconds or minutes, which keeps that RF location primed while allowing the costly elevated activity to return to baseline.

Thus during the early phase, priming is achieved by repeated activation of RF and higher-order neurons. However, over time the mechanism is shifted such that LTP takes over as the priming mechanism, and the same priming effect is achieved through molecular potentiation.

If such a system does exist, it would be surprising if the two types of priming could not be distinguished neurally, even if not behaviourally.
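To make the hand-waving slightly more concrete, here is a purely speculative toy model of the two proposed mechanisms. Every parameter in it is invented for illustration; the only point is that total priming can stay high even while the costly elevated-firing component decays away:

```python
import math

# Purely illustrative toy model of the two hypothesised priming mechanisms.
# All time constants and weights are invented for the sake of the sketch.

TAU_FIRING = 2.0    # seconds: elevated re-entrant firing decays quickly
TAU_LTP    = 60.0   # seconds: transient LTP-like trace decays slowly
LTP_BUILD  = 0.8    # fraction of priming eventually carried by the LTP trace

def priming(t):
    """Total priming at time t (seconds) after the attentional cue."""
    firing = math.exp(-t / TAU_FIRING)                              # costly, immediate component
    ltp = LTP_BUILD * (1 - math.exp(-t / TAU_FIRING)) * math.exp(-t / TAU_LTP)
    return firing + ltp

for t in [0, 1, 2, 5, 10, 30, 60]:
    print(f"t = {t:3d} s  priming = {priming(t):.2f}")
```

In this sketch the elevated-firing term has all but vanished after a few seconds, yet the location remains substantially primed thanks to the LTP-like trace.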

 

 

 

Surviving on inference

Listening to the Freakonomics podcast this morning I heard this gem, by Rory Sutherland (behavioural psychologist and advertising consultant):

"We have evolved to live in a world not of perfect information, but in a world where information is often incomplete. We have to survive on inference, and we have to draw those inferences from wherever we can find [them]"

How very true! The brain, along with the higher-order cognitive processes it allows, can surely be seen as an inference generator, allowing us to predict how the world will behave before it does so, thus maximising our chances of survival based on the 'incomplete information' available.

The Analog Brain: Why You Can't Download Your Mind Onto a Computer - Part III

This post is a continuation from this previous post.


The weather is a classic example of a chaotic system.  Try as we might, we just cannot predict the weather well.  Even with computers the size of villages we haven’t been able to produce a model that mimics the real weather beyond a few days’ time.  Why is that? 

For the answer, we have to turn to Edward Lorenz: mathematician, meteorologist and father of chaos (theory).  Lorenz tried to predict the weather by creating a computer model that could calculate how meteorological variables such as temperature, wind-speed, air-pressure, etc. would interact over time.  The idea was that if he could represent each of those variables mathematically, and calculate the effect each would have on the others, there was no reason why he couldn't calculate the future behaviour of the weather.  Just as long as he could get his model right.  To this end, Lorenz took extraordinarily accurate measurements of the weather, so that at the starting point his model and the real weather were perfectly matched.  Once everything was in place, Lorenz hit 'Enter', let the model run, and compared the model's predictions to the real weather conditions.

Initially, the two systems mirrored each other quite well.  However, within a few days the model started behaving erratically, and would no longer give him a reliable prediction.  Confused, Lorenz went back and checked his data.  The variables he put into his model were exactly the same as the weather measurements, and his model matched the real weather perfectly at the point at which he pressed 'Go'.  So how could the two systems possibly diverge if their starting conditions were the same?

It dawned on Lorenz that his model was only a perfect match to a certain level of accuracy.  The measurement he had taken of temperature, for example, was accurate only to a limited number of decimal places. So whereas in his model the starting temperature might be set to, say, 22.00000000000°C and not 21.99999999999°C, in reality the actual temperature could lie somewhere between the two.  This is what’s known as a rounding error, and Lorenz’s model was full of them.

Like the hairs on our snooker table, these tiny rounding errors have very little effect over short periods of time, but after a while they add up to produce drastically different behaviours.  This is the famous 'Butterfly effect', where a tiny change in one variable (a butterfly flaps its wings in Hong Kong) can lead to huge differences (a hurricane in New York instead of sunshine) given enough time.  Because these tiny errors compound so dramatically, a digital model of a chaotic system will only behave exactly like the original if its starting parameters are exactly the same as the original's, totally free from rounding errors.
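To see just how unforgiving this is, here is a small sketch (my own illustration, not Lorenz's original weather model) that integrates the standard Lorenz '63 equations twice, with starting conditions that differ by one part in a billion - a stand-in for a rounding error:

```python
# Minimal sketch: two runs of the standard Lorenz '63 system whose starting
# conditions differ by 1e-9. Simple Euler integration, so the numbers are
# illustrative rather than precise.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)             # "measured" starting conditions
b = (1.0 + 1e-9, 1.0, 1.0)      # the same, plus a tiny rounding error

for step in range(1, 5001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        diff = abs(a[0] - b[0])
        print(f"t = {step * 0.01:5.1f}  |x_a - x_b| = {diff:.6f}")
```

For the first stretch the two runs are practically indistinguishable, but within a few dozen time units they bear no resemblance to each other.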

Which brings us back to the digital brain…


Five steps to surviving the Post-PhD Career Precipice

Picture this horror scene: You've spent >20 years in education becoming a beacon of knowledge. Everywhere you go people duly drop their jaws at your cranial awesomeness. But just as you are about to be crowned with a PhD, you emerge into the world to find no job, no salary, no means and no pedestal for your dreams.

Sadly, that is the reality for a lot of students, which means that most of us spend our last months as students scouring the Internet for work instead of finalising our research. That has been my reality for the last 6+ months, and I was recently saved, more or less unscathed, by the powers that be.

I am very happy to announce that I have been awarded the very prestigious Sir Henry Wellcome Postdoctoral Fellowship from the Wellcome Trust (read more about this here). I absolutely could not have achieved this without a long list of helpers at Imperial College and the Clinical Sciences Centre, to whom I owe a lot.

And so, as a small way to pay my large debt, here are some tips on How to Survive the Post-PhD Career Precipice for those of you who want to stay in academia.

1) Don’t limit yourself – apply for lots of things.

There are many sources of funding which you can apply for to survive 'The End': personal fellowships, project grants, and postdoctoral positions.

Personal fellowships are the hardest to get, and the best for your CV as they really encourage your personal and independent development. You will typically need to have at least one first author publication from your PhD to be considered. I applied for three (Wellcome Trust, Alzheimer’s Trust and The Fulbright Commission) but there are other field-specific ones. You will have to write a project proposal, find sponsors for your project, and complete an epic form that will make your thesis seem like a romantic novel. It is often recommended that you find two institutions to sponsor you, as this will broaden your training considerably (I stalked a Professor at Harvard a full year before my end date to get his support). Deadlines for Fellowships start early, so do your research at least 9 months before ‘The End’ and plan accordingly.

Project grants – these involve getting a big-wig (e.g. your supervisor) to submit a grant proposal with you as a named researcher. Or rather, this will probably involve you writing a grant proposal and naming your supervisor as the principal investigator! But either way, if it allows you to survive ‘The End’, it’ll be worth it. I thankfully avoided this option but it would’ve been my last resort.

Post-doc positions – these are often competitive but, given that you will have a PhD from Imperial, so are you! Check jobs.ac.uk for postings. In this setup, you will be working on someone else's project, so it is important that you are interested in the project. In most cases, the principal applicant will want you to learn new techniques, come up with new questions and develop as a researcher, so you won't (hopefully) just be someone's lackey. It is best to wait until closer to your end date (1-3 months) before applying for these, as most positions will want you to start relatively soon. I applied to three different postdoctoral positions and, even when I was rejected, found the interview practice very useful. Be aware that sometimes there is already someone (usually internal) in mind for the job, so don't feel disheartened if you shone at the interview but were still rejected.

2) Get help!

You will be amazed at how much your university wants you to succeed! Overall, I had about seven mock interviews with different academics at Imperial. Each one taught me something new and made me more confident in my proposal. Imperial College, for example, is very good at supporting and coaching its students, so do seek the help. Contact your administrator who will be glad to arrange a mock interview for you. It is also useful to interview with academics from different fields, as they will have new insights into why your proposal is flawed.

3) Do the Hustle

Don’t waste the opportunity by scoring an own goal. Do the preparation! Whether it’s for a fellowship or postdoc position, you have to put the time into it and research the position and project. This will show your interviewers that you really want the job. In preparation for my Wellcome interview, I spent two weeks revising full-time (in between mock interviews) as if it was an undergraduate exam. I told myself that I had to go in there knowing everything, or at least having an answer to anything that they might ask me. This is crucial, and is within your control, so just do it!

4) The Pitch

You’ve probably watched The Apprentice. This over-confident alpha-leader breed of human is what you must become to win over the interviewers (don’t worry, you can go back to normal shortly afterwards). They want to see that you command your field, that you have a clear direction in your career, and that you are coherent. This last point is the most important. Make sure you can explain your research in a simple, engaging, but exciting way, and the battle is half won. To do this, you need to practice your pitch again and again and again.

5) Good luck!

Sometimes it is just luck, so don’t be disheartened and keep trying.

 

The cover for the 'Project DSB' album has been chosen!

Thanks a lot to everybody who has voted for their favourite album cover - the voting was extremely close, with only one vote separating the top two:

Results

But in the end there can only be one winner: Hands! And here it is:

Image

Image by Wild Air Photography

Thanks for your votes - Project DSB will be released next Monday (27th Jan 2014). The first song, 'Grain of Sand' can be heard here. Until then!

The Analog Brain: Why You Can't Download Your Mind Onto a Computer - Part II

This post is a continuation from this previous post.


Simple Models

In the curious case of the brain-download it is not enough to create a computer model that is similar to a human brain.  For a successful download of your brain we need to recreate, to a high level of accuracy, 'Your' exact brain.  Your unique and wonderful brain, with all its imperfections and hard-won connections which you've trained and pruned your whole life.  The computer model has to be exactly the same as your biological brain in terms of how it behaves, otherwise it simply doesn't count.  A half-baked, inaccurate model would not really be 'You', or at least would cease to behave as 'You' would after a short amount of time. And who wants to live forever as an approximation of their former self?

The issue then is not whether we will ever have the technology to reproduce your brain digitally, but whether a digital copy of your brain could ever be good enough to be indistinguishable from your real brain. We might almost dismiss the notion on principle because a computer model is, by necessity, a summary of a real scenario.  Since a summary could never contain the same amount of information as the original (otherwise it would not be a summary), the premise is necessarily false.  However it would be foolish to suggest that a model cannot meaningfully represent reality.  After all, we use models all the time.  The map on your phone, for example, is a digital model of your surrounding geography.  The important information, the roads and place names, are contained in the map, but everything else is ignored. The textures and smells of the real world aren’t needed for the map to be useful, so they are ignored. This reduces the amount of information in the model while preserving its behaviour. Theoretically we should also be able to reduce the amount of information we need to represent in our computer-brain-model while still preserving your brain’s behaviour. However, the behaviour in this case is vastly more complex, meaning that the more information we leave out, the higher the chance of producing a bad model.

Models of relatively simple systems can afford to be superficial.  A simulation of a snooker game, for example, doesn’t need to take into account the effect every hair on the snooker table has on the moving ball.  It can afford to ‘zoom out’ and represent the net contribution of all the hairs with a summary variable, such as 'Friction'.  Because the system is simple, our model will be able to make accurate predictions of the ball’s trajectory despite summarising reality to a great extent, and ignoring all the tiny variations between the individual hairs on the table.  Despite this blunt approach, our model will still be a true representation of the real system.

In reality, of course, those little hairs do have an effect on the snooker ball’s trajectory, but their contribution is so infinitesimally small that within the confines of our snooker table the effect is completely unnoticeable. If we had a long enough snooker table though, and a perpetually moving ball, these tiny effects would eventually become very relevant.  In other words, on an infinite table if we plucked a single hair from the ball’s path and re-struck the ball in the exact same way, the new trajectory would noticeably diverge from the original given enough time.

Modelling simple systems is relatively easy because we can afford to be inaccurate.  That is why you can play snooker right now on your phone for 59p if you wish. For complex systems like the brain, however, the situation is very different.  Complex systems are determined by lots of different factors, each of which might be influenced by many others.  The number of possible interactions means that complex systems are very hard to model accurately, because an error in any one of these factors will lead to bigger and bigger errors with every interaction. Like a virulent sneeze on a train, the single error infects all the factors it comes into contact with, and eventually blights the whole model.
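As a rough illustration of the difference (my own toy example, not from the original post), compare a crude friction model of a rolling ball, where a tiny error in the starting speed stays tiny, with the logistic map, a textbook chaotic system where the same tiny error is amplified at every step:

```python
# Illustrative comparison (toy numbers throughout): a tiny error in the
# starting conditions barely matters in a simple friction model, but is
# rapidly amplified by a chaotic system (here the textbook logistic map).

ERROR = 1e-9  # our "rounding error"

# 1) Simple system: a snooker ball decelerating under constant friction.
def ball_distance(v0, friction=0.5, dt=0.01, steps=1000):
    v, d = v0, 0.0
    for _ in range(steps):
        d += v * dt
        v = max(0.0, v - friction * dt)
    return d

print("friction model error:",
      abs(ball_distance(2.0) - ball_distance(2.0 + ERROR)))

# 2) Chaotic system: the logistic map x -> r * x * (1 - x) with r = 4.
def logistic(x0, r=4.0, steps=50):
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

print("logistic map error: ",
      abs(logistic(0.3) - logistic(0.3 + ERROR)))
```

The friction model's error stays around a billionth of a metre; the logistic map's error grows to the same order of magnitude as the quantity itself within a few dozen iterations.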

And speaking of colds, we all know first-hand how futile it is to try to model complex systems.  In fact, there is one particular chaotic system that we talk about pretty much every day…


Introducing: Project-DSB

Two summers ago I packed my parents' car with a shedload of guitars and one Mr. Ant Law, and we drove across the UK to Edinburgh. There we spent a week recording music in the fine company of drummer Rami Sherrington, bassist Kevin Glasgow, and producer Garry Boyle.  Those sessions became known as:

Image

Project DSB is a jazz-rock album, a labour of love spanning many musical styles and influences. We unleashed odd time-signatures and unconventional harmony, and whip-cracked those unruly beasts into sonically gratifying themes. On the whole, we think we got the balance just right. We hope you'll enjoy hearing it as much as we enjoyed making it.

The first song from the album, 'Grain of Sand' is now available here.  Please help us share it and send us your feedback over on facebook or twitter.

The Analog Brain: Why You Can’t Download Your Mind Onto a Computer - Part I

brain download

Leading thinkers such as neuroscientist David Eagleman and philosopher Nick Bostrom believe that one day, you will be able to download your mind onto a computer.  A sophisticated brain scanner will record all the connections in your brain and a computer will then recreate them all digitally.  The digital ‘brain’ will then begin to behave exactly like your real brain, which means it will essentially become you, and allow you to live beyond the death of your body in an eternal 'transhuman' existence.

If that sounds too good/bad to be true, that’s because it probably is.  Replicating the trillions of dynamic connections that exist in your brain digitally would be a truly miraculous feat, and one which is definitely beyond us at this time.  Nonetheless, we should never underestimate the future’s potential to wildly exceed our expectations. With our technology and scientific methods steadily improving, one day we will surely have the capacity to create a computer model with comparable complexity to a human brain.  Progress towards this has already begun, with the 2.5-million-neuron SPAUN brain model recently being created, and the Human Connectome Project working diligently to map all the connections of a single human brain.

However, even if we were able to map and model all the connections of a brain, translating this into a personality-download is a whole different ball-game.  Whether or not identity is embodied (i.e. attached to a particular body) will keep philosophers occupied even while the digital-human-brained robots take over civilization.  The problem is that the minute we recreate our minds in another location, that duplicate mind will begin to have its own experiences and perspective, and will therefore necessarily have a different identity to the original.

This problem is perhaps insurmountable.  But for the sake of argument, and because it would be very cool, let's assume that if we created a perfect digital replica of someone's brain we would have transferred their identity to a computer.  After all, that in itself would still be a mind-bending feat, even if both resulting individuals remained convinced their parallel twin was an impostor.  In this situation, could we ever be confident that the digital version was faithful to the original brain?

In this ‘Analog Brain’ series I will argue that it is impossible to perfectly replicate a brain digitally. And the problem lies in the chaotic nature of complex systems and tiny unassuming things called rounding errors.


My first first author paper on auditory attention networks

Separable auditory and visual attention networks

Earlier this year I published my first first-author paper in the scientific journal NeuroImage.

(n.b. Being first author of a scientific paper is a big deal as it shows you did most of the work!)

Although our ears are bombarded with different sounds, our brains are very good at picking apart this soundscape and selecting relevant auditory objects for us to perceive.  This selection and filtering process is what we mean by attention, and it is crucial for us to be able to navigate our rich sensory environments without overloading our feeble minds.

In this paper we showed that the regions of the brain that let us select sounds from this soundscape are different to those involved in selecting objects from our visual field. We had 20 people listen to busy background sounds (e.g. the sounds of a busy pub) that were full of distractors, and asked them to listen out for a specific target sound: a series of tones that made a simple melody.  Another 20 participants viewed busy natural scenes (e.g. commuters walking down Oxford Street) and looked out for a target shape: a red rectangle that could appear in two possible locations on the screen.

We scanned our participants using MRI during these tasks and studied the neural activity that happened while subjects were paying attention to the sounds and videos.  We found that the connection between neurons in the middle frontal gyrus (MFG; which is important for the inhibition of a number of behaviours) and the posterior middle temporal gyrus (MTG; which is part of the extended auditory association areas) seems to be important for the selection of auditory objects.  In contrast, for visual selection we saw activity in the superior parietal lobe (SPL; which is important for spatial navigation and awareness) and the frontal eye fields (FEF; which are crucial for controlling eye movements).

In addition, we showed that there is a common area of overlap between the two sensory modalities.  The MFG was activated for both visual and auditory selection.  This suggests that the MFG is important for coordinating which sensory modalities are being attended to, as it is able to connect to both visual and auditory attention areas simultaneously.

I was recently interviewed for this work by Faculti Media, and you can see the video below. Enjoy!

 

CSC Scientific Image Competition 2012

Neural Hot Spots

This is my entry for the 2012 Clinical Sciences Centre scientific image competition:

'The human brain contains several 'hubs': cortical hotspots where complex neural signals are found (top).  These complex signals are the result of communication with the brain's functional networks (bottom images).  Multiple functional signals overlap at the cortical hotspots, meaning they could be a potential site for the dynamic integration of the information exchanged within each neural network.'

In the end I called it 'Neural Hot Spots', but 'Jesus-Brain' would've no doubt been more appropriate! The data is currently being written up for publication (with a somewhat more sedate version of this image).

Eliza & the Great Spaghetti Monster

This is my article which was shortlisted for the Max Perutz Science Writing Award 2012 – an annual competition to encourage MRC-funded scientists to communicate their research to a wider audience. You can also read the winning article, published in The Metro, here.

Rodrigo Braga at the Max Perutz Award Ceremony

The human brain is the most complex object in the known universe. With it we have built entire civilisations and harnessed the power of nature. Yet despite their amazing complexity, all brains begin life as a tiny bundle of cells that divide, migrate and miraculously wire themselves up into the thinking machines that make us who we are. The fact that it happens at all is almost as astounding as the finished product itself, but it doesn't always work out as Mother Nature intended.

Tucked up in her crib at the Neonatal Imaging Centre of Hammersmith Hospital, newborn baby Eliza is sleeping through another magnetic resonance imaging (MRI) scan. Around her head, the scanner machinery wails and screams with high-pitched ululations, but she sleeps peacefully, ears protected by tiny muffs. Eliza was born prematurely and her doctors are making sure that her little brain is growing normally. In her short 10 week life, she has been inside the scanner more times than most of us ever will. But today is different. Today we are using a new technique called Diffusion Tensor Imaging (DTI) to help unravel the mysteries of brain development. And that is a huge task.

The human brain contains 1,000 trillion connections between 86 billion neurons (neurons are what we really mean when we say 'brain cells'). Each neuron has a long thin arm called an 'axon' that it uses to send messages to other neurons that could be on opposite ends of the brain. Connecting them all means criss-crossing the brain with axons.

To give you a sense of the resulting confusion, imagine a planet (let's call it 'Braintopia') that is packed with ten times more people than planet Earth. Imagine that every Braintopian has to make regular long distance calls to an overbearing mother on the other side of the planet. On Earth this would be easy, but Braintopians haven't discovered mobile phones or landlines yet. Instead, all they have are those cup and string phones that children play with here on Earth. Each Braintopian carries their own paper cup, and trails along a string that stretches around the globe to mum. Simple!

It might seem absurd, but this is actually how neurons communicate, through a direct physical connection. In order that you can wriggle your toes, a daring axon made the journey from the top of your brain to the bottom of your spinal cord to pass the message on to your legs. Now if a single Braintopian trailing a string like an umbilical cord sounds ridiculous, picture the mess that a whole city-full of them would make, strings tangling through the streets like a Great Spaghetti Monster. Or worse, imagine the chaos of an entire planet-full of intercontinental strings. The resulting ball of yarn would be monolithic!

The brain has a similar connection problem, but it maintains order by packing the axons heading in the same direction together into thick fibres called 'white matter tracts'. Recent research suggests that the normal development of white matter is an important indicator that a baby's brain is healthy. If a white matter tract doesn't develop properly, the brain regions connected by that tract cannot communicate with each other. This can lead to serious physical and learning disabilities. If doctors could assess a baby's white matter early on, they could check the connections are healthy and in place, and give special attention to the infants that need it. But doing this when the brain is sealed inside a baby's head is extremely challenging. Luckily, this is where DTI comes in.

Back in Hammersmith, Eliza's scan is almost done. The DTI procedure uses the MRI scanner's powerful magnets to spin the atoms in Eliza's brain on the spot, like pirouetting ballerinas. Atoms spin frantically anyway, but when placed inside a magnet they align their spin with the direction of the magnetic field. And so the ballet begins. In this synchronised dance, each atomic twirl sends out a tiny radio signal that the scanner uses to work out where the atom is. From this, we can find atoms that are attached to water molecules and trace them as they float around Eliza's brain. The brain is 70% water, and white matter tracts act like miniature hosepipes, channeling water along them. By following the movement of water we can therefore visualise exactly where the white matter tracts lie. Using this principle we have created a white matter atlas for babies, to help doctors recognise abnormal brain development.

Eliza continues to sleep while the scanner diligently chugs away. This short 20 minute scan will produce a beautiful map of her own Braintopia without hurting her in any way. By comparing Eliza's map to our atlas, doctors can tell if her fibres are healthy, and give her the best possible start in life.
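For the technically curious, the core idea behind the DTI measurement can be sketched in a few lines. This is a toy illustration with a made-up tensor, not the pipeline we actually used: each voxel's water diffusion is summarised as a symmetric 3x3 tensor, and the eigenvector with the largest eigenvalue points along the likely fibre direction.

```python
import numpy as np

# Toy sketch of the core DTI idea (not the actual clinical pipeline):
# each voxel's water diffusion is summarised by a symmetric 3x3 tensor,
# and the eigenvector with the largest eigenvalue points along the
# direction water moves most freely, i.e. the likely fibre direction.

# Hypothetical tensor for a voxel where diffusion is strongest along x.
D = np.array([
    [1.7, 0.1, 0.0],
    [0.1, 0.3, 0.0],
    [0.0, 0.0, 0.3],
])  # units: roughly 10^-3 mm^2/s, a typical order of magnitude for brain tissue

eigenvalues, eigenvectors = np.linalg.eigh(D)
principal_direction = eigenvectors[:, np.argmax(eigenvalues)]

# Fractional anisotropy (FA): 0 = diffusion equal in all directions,
# 1 = diffusion along a single axis, as in a tightly packed tract.
md = eigenvalues.mean()
fa = np.sqrt(1.5 * np.sum((eigenvalues - md) ** 2) / np.sum(eigenvalues ** 2))

print("principal fibre direction:", np.round(principal_direction, 3))
print("fractional anisotropy:    ", round(float(fa), 3))
```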

A little night music - Roche Continents 2012

Roche Continents 2012

In each of the last six years, Roche Pharmaceuticals (one of the world's biggest drug companies) has invited 100 students from across Europe to Salzburg, Austria, to take part in a festival celebrating 'Youth! Arts! Science!'.  The theme of the gathering is simple and noble: to explore the common ground between the arts and the sciences, namely the need to be creative and innovative.

This year I was fortunate to be one of those students.  At a princely 29 years of age it's arguable whether I qualify for 'Youth!', but I guess 2 out of 3 is not bad.  And so, I packed my bag and my finest evening garb, and hopped on a plane to the birthplace of Herr Mozart.  I had applied for the festival thinking it would be a nice opportunity to visit a new country, and knowing that the bursary would look good on my CV.  Yet as I arrived at my home for the next 6 days, the modern and comfortable Tourismusschule Klessheim on the outskirts of Salzburg, I remained completely clueless as to what was in store for my fellow 'Youths!' and me.  I was also apprehensive that the whole week would turn into a Clockwork Orange-style indoctrination exercise into the wonders of Hoffmann-La Roche Ltd.  But thankfully I couldn't have been more wrong.

In truth, the very first talk we heard was on how 'Big Pharma' companies are not so big and evil, with the main point being that the earnings of Roche are a fraction of those of companies like Wal-Mart.  This of course says nothing about the 'evil' part, but the speaker exuded an enthusiasm that left me in no doubt that she felt the benefit of Roche's drug discoveries went far beyond the profits that came with them.  Nonetheless, the talk was welcome, as we all wanted to know a little more about our generous hosts.

And how generous they were! For the next 6 days each of the 100 students was treated to unparalleled levels of pampering.  There were gourmet meals every day with limitless wine and champagne even in the early hours of the morning, boxes of Austrian Mozartkugeln chocolates greeting us in our rooms, and front-row seats to the most popular shows at the renowned Salzburg Festival of Music and Drama (which Roche is a prominent sponsor of).  At these shows, we were waited on with canapés and (more) champagne, were given private audiences with the conductors and performers who were fresh from the stage, and rubbed shoulders with the organisers of the festival.  All the while a professional photographer fluttered around taking snapshots of us all dressed in our Sunday best.  The illusion of celebrity was impeccable.

But the remarkable thing about the festival, the thing that still makes me look back on that week with fondness a month on, was not the luxury, nor the repeated morning seminars where we discussed the importance of being creative (in my view a fruitless exercise akin to making endless to-do lists without actually doing anything).  The glaring charm of the event, by far, was the people I met while there.  It was immediately clear from day one that each of the students had been hand-picked for their achievements and self-evident passion for music or science.  The abundance of talent that had been gathered was obvious; one night, at a moment's notice, an oboe and violin were produced, and a Mozart quartet was performed impromptu, just for fun.  A quick chat with a spectacled neighbour would lead to enlightening discussions of the latest scientific discoveries and advances. The organisers were genuinely excited to be looking after this small sea of bright faces, and this enthusiasm was truly infectious (even for those of a more sceptical British nature).  The result was that throughout the week every face you turned to was smiling and engaging: interesting, and interested in exchanging ideas.  It was a surreal experience that honestly made me wish that all the people I met day to day were as open-minded and friendly as this little group.  Given that everyone there had been selected based on similar interests, perhaps it's not surprising that everyone got along. But I have no doubt that some life-long friendships were made that week.

And being invited to be a part of it all, to be a peer of such illustrious company, was infinitely more rewarding, and more humbling, than the glamour of celebrity treatment ever could be.

If you are interested in attending Roche Continents, visit: http://www.roche-continents.net