The Analog Brain: Why You Can't Download Your Mind Onto a Computer - Part III

This post is a continuation from this previous post.


The weather is a classic example of a chaotic system.  Try as we might, we just cannot predict the weather well.  Even with computers the size of villages we haven’t been able to produce a model that mimics the real weather beyond a few days’ time.  Why is that? 

For the answer, we have to turn to Edward Lorenz: mathematician, meteorologist and father of chaos theory. Lorenz tried to predict the weather by creating a computer model that could calculate how meteorological variables such as temperature, wind speed and air pressure would interact over time. The idea was that if he could represent each of those variables mathematically, and calculate the effect each would have on the others, there was no reason why he couldn't calculate the future behaviour of the weather, just as long as he could get his model right. To this end, Lorenz took extraordinarily accurate measurements of the weather, so that at the starting point his model and the real weather were perfectly matched. Once everything was in place, Lorenz hit 'Enter', let the model run, and compared the model's predictions to the real weather conditions.

Initially, the two systems mirrored each other quite well. However, within a few days the model started behaving erratically, and would no longer give him a reliable prediction. Confused, Lorenz went back and checked his data. The variables he had put into his model were exactly the same as the weather measurements, and his model matched the real weather perfectly at the point at which he pressed 'Go'. So how could the two systems possibly diverge if their starting conditions were the same?

It dawned on Lorenz that his model was only a perfect match to a certain level of accuracy.  The measurement he had taken of temperature, for example, was accurate only to a limited number of decimal places. So whereas in his model the starting temperature might be set to, say, 22.00000000000°C and not 21.99999999999°C, in reality the actual temperature could lie somewhere between the two.  This is what’s known as a rounding error, and Lorenz’s model was full of them.
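Rounding errors aren't exotic: any computer storing numbers in binary floating point makes them constantly. Here is a quick Python illustration (the temperature values are invented for the example, echoing the ones above):

```python
# A digital computer stores numbers with finite precision, so every
# measurement it holds is implicitly rounded.
temp_model  = 22.00000000000   # the value the model stores
temp_actual = 21.99999999999   # what the thermometer "really" read

# The model starts with an error of roughly one part in a trillion:
error = temp_model - temp_actual
print(error)  # ~1e-11

# Even simple decimals can't be represented exactly in binary:
print(0.1 + 0.2 == 0.3)  # False
print(f"{0.1:.20f}")     # 0.10000000000000000555...
```

So even a model that looks like a perfect match is only a match down to some number of decimal places; below that, the computer and reality quietly disagree.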

Like the hairs on our snooker table, these tiny rounding errors have very little effect over short periods of time, but after a while they compound to produce drastically different behaviours. This is the famous 'Butterfly effect', where a tiny change in one variable (a butterfly flaps its wings in Hong Kong) can lead to huge differences (a hurricane in New York instead of sunshine) given enough time. Because these tiny errors grow catastrophically, a digital model of a chaotic system will only behave exactly like the original if its starting parameters are exactly the same as the original, totally free from rounding errors.
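You can watch this amplification happen in miniature with a few lines of Python. The sketch below uses the logistic map, a textbook chaotic system (not Lorenz's actual weather model), and starts two runs differing by just one part in a trillion:

```python
# Two runs of the logistic map, a textbook chaotic system, started
# with a difference of one part in a trillion (a "rounding error").
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.400000000000, 0.400000000001  # differ by 1e-12

for step in range(1, 61):
    a, b = logistic(a), logistic(b)
    if step % 15 == 0:
        print(f"step {step:2d}: difference = {abs(a - b):.6f}")

# The gap grows roughly exponentially: invisible at first, then as
# large as the values themselves within a few dozen iterations.
```

After a few dozen steps the two runs are as different as two unrelated simulations. In a non-chaotic system the same tiny error would simply have stayed tiny.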

Which brings us back to the digital brain…


Five steps to surviving the Post-PhD Career Precipice

Picture this horror scene: You’ve spent >20 years in education becoming a beacon of knowledge. Everywhere you go people duly drop their jaws at your cranial awesomeness. But just as you are about to be crowned with a PhD, you emerge into the world to find no job, no salary, no means and no pedestal for your dreams.

Sadly, that is the reality for a lot of students, which means that most of us spend our last months as students scouring the Internet for work instead of finalising our research. That has been my reality for the last 6+ months, and I was recently saved by the powers that be, unscathed.

I am very happy to announce that I have been awarded the very prestigious Sir Henry Wellcome Postdoctoral Fellowship from the Wellcome Trust (read more about this here). I absolutely could not have achieved this without a long list of helpers at Imperial College and the Clinical Sciences Centre, to whom I owe a lot.

And so, as a small way to pay my large debt, here are some tips on How to Survive the Post-PhD Career Precipice for those of you who want to stay in academia.

1) Don’t limit yourself – apply for lots of things.

There are many sources of funding to which you can apply to survive ‘The End’: personal fellowships, project grants, and postdoctoral positions.

Personal fellowships are the hardest to get, and the best for your CV as they really encourage your personal and independent development. You will typically need to have at least one first author publication from your PhD to be considered. I applied for three (Wellcome Trust, Alzheimer’s Trust and The Fulbright Commission) but there are other field-specific ones. You will have to write a project proposal, find sponsors for your project, and complete an epic form that will make your thesis seem like a romantic novel. It is often recommended that you find two institutions to sponsor you, as this will broaden your training considerably (I stalked a Professor at Harvard a full year before my end date to get his support). Deadlines for Fellowships start early, so do your research at least 9 months before ‘The End’ and plan accordingly.

Project grants – these involve getting a big-wig (e.g. your supervisor) to submit a grant proposal with you as a named researcher. Or rather, this will probably involve you writing a grant proposal and naming your supervisor as the principal investigator! But either way, if it allows you to survive ‘The End’, it’ll be worth it. I thankfully avoided this option but it would’ve been my last resort.

Post-doc positions – these are often competitive but, given that you will have a PhD from Imperial, so are you! Check jobs.ac.uk for postings. In this setup, you will be working on someone else’s project, so it is important that you are interested in the project. In most cases, the principal applicant will want you to learn new techniques, come up with new questions and develop as a researcher, so you won’t (hopefully) just be someone’s lackey. It is best to wait until closer to your end date (1-3 months) before applying for these, as most positions will want you to start relatively soon. I applied to three different postdoctoral positions and even when I was rejected found the interview practice very useful. Be aware that sometimes there is already someone (usually internal) in mind for the job, so don’t feel disheartened if you shone at the interview but were still rejected.

2) Get help!

You will be amazed at how much your university wants you to succeed! Imperial College, for example, is very good at supporting and coaching its students, so do seek out the help. Overall, I had about seven mock interviews with different academics at Imperial, and each one taught me something new and made me more confident in my proposal. Contact your administrator, who will be glad to arrange a mock interview for you. It is also useful to interview with academics from different fields, as they will have new insights into why your proposal is flawed.

3) Do the Hustle

Don’t waste the opportunity by scoring an own goal. Do the preparation! Whether it’s for a fellowship or postdoc position, you have to put the time into it and research the position and project. This will show your interviewers that you really want the job. In preparation for my Wellcome interview, I spent two weeks revising full-time (in between mock interviews) as if it was an undergraduate exam. I told myself that I had to go in there knowing everything, or at least having an answer to anything that they might ask me. This is crucial, and is within your control, so just do it!

4) The Pitch

You’ve probably watched The Apprentice. This over-confident alpha-leader breed of human is what you must become to win over the interviewers (don’t worry, you can go back to normal shortly afterwards). They want to see that you command your field, that you have a clear direction in your career, and that you are coherent. This last point is the most important. Make sure you can explain your research in a simple, engaging and exciting way, and the battle is half won. To do this, you need to practise your pitch again and again and again.

5) Good luck!

Sometimes it is just luck, so don’t be disheartened and keep trying.

 

The Analog Brain: Why You Can't Download Your Mind Onto a Computer - Part II

This post is a continuation from this previous post.


Simple Models

In the curious case of the brain download it is not enough to create a computer model that is similar to a human brain. For a successful download of your brain we need to recreate, to a high level of accuracy, ‘Your’ exact brain. Your unique and wonderful brain, with all its imperfections and hard-won connections, which you’ve trained and pruned your whole life. The computer model has to behave exactly the same as your biological brain, otherwise it simply doesn’t count. A half-baked, inaccurate model would not really be ‘You’, or at least would cease to behave as ‘You’ would after a short amount of time. And who wants to live forever as an approximation of their former self?

The issue then is not whether we will ever have the technology to reproduce your brain digitally, but whether a digital copy of your brain could ever be good enough to be indistinguishable from your real brain. We might almost dismiss the notion on principle, because a computer model is, by necessity, a summary of a real scenario. Since a summary can never contain the same amount of information as the original (otherwise it would not be a summary), the premise is necessarily false. However, it would be foolish to suggest that a model cannot meaningfully represent reality. After all, we use models all the time. The map on your phone, for example, is a digital model of your surrounding geography. The important information, the roads and place names, is contained in the map; the textures and smells of the real world aren’t needed for the map to be useful, so they are ignored. This reduces the amount of information in the model while preserving its behaviour. Theoretically we should also be able to reduce the amount of information we need to represent in our computer-brain-model while still preserving your brain’s behaviour. However, the behaviour in this case is vastly more complex, meaning that the more information we leave out, the higher the chance of producing a bad model.

Models of relatively simple systems can afford to be superficial.  A simulation of a snooker game, for example, doesn’t need to take into account the effect every hair on the snooker table has on the moving ball.  It can afford to ‘zoom out’ and represent the net contribution of all the hairs with a summary variable, such as 'Friction'.  Because the system is simple, our model will be able to make accurate predictions of the ball’s trajectory despite summarising reality to a great extent, and ignoring all the tiny variations between the individual hairs on the table.  Despite this blunt approach, our model will still be a true representation of the real system.
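To make the contrast concrete, here is a toy Python sketch of that ‘zoomed out’ snooker model (the numbers are invented for illustration, not real snooker physics). Because the system is simple and dissipative, a tiny rounding error in the summary ‘Friction’ variable barely changes the prediction:

```python
# A crude snooker model: the ball decelerates under a single summary
# "friction" value, ignoring every individual hair on the cloth.
# (Illustrative numbers only - not real snooker physics.)
def stopping_distance(v0, friction, dt=0.001):
    """Distance travelled before a ball with initial speed v0 stops."""
    v, x = v0, 0.0
    while v > 0:
        x += v * dt          # move at the current speed
        v -= friction * dt   # constant deceleration from friction
    return x

exact   = stopping_distance(2.0, friction=0.80)
rounded = stopping_distance(2.0, friction=0.80000001)  # tiny rounding error

# In this simple, non-chaotic system the tiny error stays tiny:
print(abs(exact - rounded))  # far less than a millimetre
```

This is exactly why the simple model gets away with summarising: its errors stay roughly the same size over time instead of compounding with every interaction.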

In reality, of course, those little hairs do have an effect on the snooker ball’s trajectory, but their contribution is so infinitesimally small that within the confines of our snooker table the effect is completely unnoticeable. If we had a long enough snooker table though, and a perpetually moving ball, these tiny effects would eventually become very relevant.  In other words, on an infinite table if we plucked a single hair from the ball’s path and re-struck the ball in the exact same way, the new trajectory would noticeably diverge from the original given enough time.

Modelling simple systems is relatively easy because we can afford to be inaccurate. That is why you can play snooker on your phone right now for 59p if you wish. For complex systems like the brain, however, the situation is very different. Complex systems are determined by lots of different factors, each of which might be influenced by many others. The number of possible interactions means that complex systems are very hard to model accurately, because an error in any one of these factors will lead to bigger and bigger errors with every interaction. Like a virulent sneeze on a train, a single error infects every factor it comes into contact with, and eventually blights the whole model.

And speaking of colds, we all know first-hand how futile it is to try to model complex systems. In fact, there is one particular chaotic system that we talk about pretty much every day…


The Analog Brain: Why You Can’t Download Your Mind Onto a Computer - Part I


Leading thinkers such as neuroscientist David Eagleman and philosopher Nick Bostrom believe that one day, you will be able to download your mind onto a computer.  A sophisticated brain scanner will record all the connections in your brain and a computer will then recreate them all digitally.  The digital ‘brain’ will then begin to behave exactly like your real brain, which means it will essentially become you, and allow you to live beyond the death of your body in an eternal 'transhuman' existence.

If that sounds too good/bad to be true, that’s because it probably is.  Replicating the trillions of dynamic connections that exist in your brain digitally would be a truly miraculous feat, and one which is definitely beyond us at this time.  Nonetheless, we should never underestimate the future’s potential to wildly exceed our expectations. With our technology and scientific methods steadily improving, one day we will surely have the capacity to create a computer model with comparable complexity to a human brain.  Progress towards this has already begun, with the 2.5-million-neuron SPAUN brain model recently being created, and the Human Connectome Project working diligently to map all the connections of a single human brain.

However, even if we were able to map and model all the connections of a brain, translating this into a personality download is a whole different ball game. Whether or not identity is embodied (i.e. attached to a particular body) will keep philosophers occupied even while the digital-human-brained robots take over civilisation. The problem is that the minute we recreate our minds in another location, that duplicate mind will begin to have its own experiences and perspective, and will therefore necessarily have a different identity to the original.

This problem is perhaps insurmountable. But for the sake of argument, and because it would be very cool, let’s assume that if we created a perfect digital replica of someone’s brain we will have transferred their identity to a computer. After all, that in itself would still be a mind-bending feat, even if both resulting individuals remained convinced their parallel twin was an impostor. In this situation, could we ever be confident that the digital version was faithful to the original brain?

In this ‘Analog Brain’ series I will argue that it is impossible to perfectly replicate a brain digitally. And the problem lies in the chaotic nature of complex systems and tiny unassuming things called rounding errors.


My first first author paper on auditory attention networks

Separable auditory and visual attention networks

Earlier this year I published my first first author paper in the scientific journal NeuroImage.

(n.b. Being first author of a scientific paper is a big deal as it shows you did most of the work!)

Although our ears are bombarded with different sounds, our brains are very good at picking apart this soundscape and selecting relevant auditory objects for us to perceive.  This selection and filtering process is what we mean by attention, and it is crucial for us to be able to navigate our rich sensory environments without overloading our feeble minds.

In this paper we showed that the regions of the brain that let us select sounds from this soundscape are different to those involved in selecting objects from our visual field. We had 20 people listen to busy background sounds (e.g. the sounds of a busy pub) that were full of distractors, and made them listen out for a specific target sound: a series of tones which made a simple melody. In another 20 participants, we made them view busy natural scenes (e.g. commuters walking down Oxford Street) and had them look out for a target shape: a red rectangle that could appear in two possible locations on the screen.

We scanned our participants using MRI during these tasks and studied the neural activity that happened while subjects were paying attention to the sounds and videos.  We found that the connection between neurons in the middle frontal gyrus (MFG; which is important for the inhibition of a number of behaviours) and the posterior middle temporal gyrus (MTG; which is part of the extended auditory association areas) seems to be important for the selection of auditory objects.  In contrast, for visual selection we saw activity in the superior parietal lobe (SPL; which is important for spatial navigation and awareness) and the frontal eye fields (FEF; which are crucial for controlling eye movements).

In addition, we showed that there is a common area of overlap between the two sensory modalities.  The MFG was activated for both visual and auditory selection.  This suggests that the MFG is important for coordinating which sensory modalities are being attended to, as it is able to connect to both visual and auditory attention areas simultaneously.

I was recently interviewed for this work by Faculti Media, and you can see the video below. Enjoy!

 

CSC Scientific Image Competition 2012

Neural Hot Spots

This is my entry for the 2012 Clinical Sciences Centre scientific image competition:

'The human brain contains several 'hubs': cortical hotspots where complex neural signals are found (top). These complex signals are the result of communication with the brain's functional networks (bottom images). Multiple functional signals overlap at the cortical hotspots, meaning they could be a potential site for the dynamic integration of the information exchanged within each neural network.'

In the end I called it 'Neural Hot Spots', but 'Jesus-Brain' would've no doubt been more appropriate! The data is currently being written up for publication (with a somewhat more sedate version of this image).

Eliza & the Great Spaghetti Monster

This is my article which was shortlisted for the Max Perutz Science Writing Award 2012 – an annual competition to encourage MRC-funded scientists to communicate their research to a wider audience. You can also read the winning article, published in The Metro, here.

Rodrigo Braga at the Max Perutz Award Ceremony

The human brain is the most complex object in the known universe. With it we have built entire civilisations and harnessed the power of nature. Yet despite their amazing complexity, all brains begin life as a tiny bundle of cells that divide, migrate and miraculously wire themselves up into the thinking machines that make us who we are. The fact that it happens at all is almost as astounding as the finished product itself, but it doesn't always work out as Mother Nature intended.

Tucked up in her crib at the Neonatal Imaging Centre of Hammersmith Hospital, newborn baby Eliza is sleeping through another magnetic resonance imaging (MRI) scan. Around her head, the scanner machinery wails and screams with high-pitched ululations, but she sleeps peacefully, ears protected by tiny muffs. Eliza was born prematurely and her doctors are making sure that her little brain is growing normally. In her short 10-week life, she has been inside the scanner more times than most of us ever will. But today is different. Today we are using a new technique called Diffusion Tensor Imaging (DTI) to help unravel the mysteries of brain development. And that is a huge task.

The human brain contains 1,000 trillion connections between 86 billion neurons (neurons are what we really mean when we say 'brain cells'). Each neuron has a long thin arm called an 'axon' that it uses to send messages to other neurons that could be at opposite ends of the brain. Connecting them all means criss-crossing the brain with axons.

To give you a sense of the resulting confusion, imagine a planet (let's call it 'Braintopia') that is packed with ten times more people than planet Earth. Imagine that every Braintopian has to make regular long distance calls to an overbearing mother on the other side of the planet. On Earth this would be easy, but Braintopians haven't discovered mobile phones or landlines yet. Instead, all they have are those cup and string phones that children play with here on Earth. Each Braintopian carries their own paper cup, and trails along a string that stretches around the globe to mum. Simple!

It might seem absurd, but this is actually how neurons communicate, through a direct physical connection. In order that you can wriggle your toes, a daring axon made the journey from the top of your brain to the bottom of your spinal cord to pass the message on to your legs. Now if a single Braintopian trailing a string like an umbilical cord sounds ridiculous, picture the mess that a whole city-full of them would make, strings tangling through the streets like a Great Spaghetti Monster. Or worse, imagine the chaos of an entire planet-full of intercontinental strings. The resulting ball of yarn would be monolithic!

The brain has a similar connection problem, but it maintains order by packing the axons heading in the same direction together into thick fibres called 'white matter tracts'. Recent research suggests that the normal development of white matter is an important indicator that a baby's brain is healthy. If a white matter tract doesn't develop properly, the brain regions connected by that tract cannot communicate with each other. This can lead to serious physical and learning disabilities. If doctors could assess a baby's white matter early on, they could check the connections are healthy and in place, and give special attention to the infants that need it. But doing this when the brain is sealed inside a baby's head is extremely challenging. Luckily, this is where DTI comes in.

Back in Hammersmith, Eliza's scan is almost done. The DTI procedure uses the MRI scanner's powerful magnets to spin the atoms in Eliza's brain on the spot, like pirouetting ballerinas. Atoms spin frantically anyway, but when placed inside a magnet they align their spin with the direction of the magnetic field. And so the ballet begins. In this synchronised dance, each atomic twirl sends out a tiny radio signal that the scanner uses to work out where the atom is. From this, we can find atoms that are attached to water molecules and trace them as they float around Eliza's brain. The brain is 70% water, and white matter tracts act like miniature hosepipes, channeling water along them. By following the movement of water we can therefore visualise exactly where the white matter tracts lie. Using this principle we have created a white matter atlas for babies, to help doctors recognise abnormal brain development.

Eliza continues to sleep while the scanner diligently chugs away. This short 20 minute scan will produce a beautiful map of her own Braintopia without hurting her in any way. By comparing Eliza's map to our atlas, doctors can tell if her fibres are healthy, and give her the best possible start in life.