Scientists have found that sleeping for an hour or more extra a night can dramatically improve an individual’s alertness and reduce their sensitivity to pain.
In fact, say the researchers, getting nearly 10 hours a night – rather than the recommended 8 – is more effective at reducing pain than taking the drug codeine.
The study used 18 healthy, pain-free volunteers who were randomly assigned to either four nights of their normal sleep pattern or four nights of 10 hours in bed.
The American researchers measured daytime sleepiness using the multiple sleep latency test – a standard method used by doctors to diagnose sleep problems in which brain waves, eye movement, heart rate and muscle tone are measured.
Pain sensitivity was assessed using a heat source.
Results showed the extended sleep group slept 1.8 hours more per night than those on a regular sleeping pattern. This was associated with increased daytime alertness and significantly less pain sensitivity.
Those getting more sleep were able to keep their finger on a heat source for 25% longer, indicating reduced pain sensitivity.
The findings, published in the journal Sleep, also revealed that this reduction in pain sensitivity was greater than the effect of 60 mg of codeine found in a previous study.
The results, combined with data from previous research, suggest increased pain sensitivity in tired people is the result of their underlying sleepiness.
Dr. Timothy Roehrs, an expert in sleep disorders and their treatment based at the Henry Ford Hospital in Detroit, said: “Our results suggest the importance of adequate sleep in various chronic pain conditions or in preparation for elective surgical procedures.
“We were surprised by the magnitude of the reduction in pain sensitivity, when compared to the reduction produced by taking codeine.”
US scientist Prof. Philip Low is to unveil details of work on the brain patterns of Prof. Stephen Hawking which he says could help safeguard the physicist’s ability to communicate.
Prof. Low said he eventually hoped to allow Prof. Hawking to “write” words with his brain, as an alternative to his current speech system, which interprets cheek muscle movements.
He said the innovation would avert the risk of locked-in syndrome.
Intel, meanwhile, is working on an alternative system.
Prof. Hawking was diagnosed with motor neuron disease in 1963. In the 1980s he was able to use slight thumb movements to move a computer cursor to write sentences.
His condition later worsened, and he had to switch to a system that detects movements in his right cheek through an infrared sensor, attached to his glasses, which measures changes in light.
Because the nerves in his face continue to deteriorate, his rate of speech has slowed to about one word a minute, prompting him to look for an alternative.
The fear is that Prof. Hawking could ultimately lose the ability to communicate by body movement, leaving his brain effectively “locked in” his body.
In 2011, he allowed Prof. Low to scan his brain using the iBrain device developed by the Silicon Valley-based start-up Neurovigil.
Prof. Hawking will not attend the consciousness conference in his home town of Cambridge, where Prof. Low intends to discuss his findings.
A spokesman said: “Professor Hawking is always interested in supporting research into new technologies to help him communicate.”
The iBrain is a headset that records brain waves through EEG (electroencephalography) readings – electrical activity recorded from the user’s scalp.
Prof. Low said he had designed computer software that could analyze the data and detect high-frequency signals previously thought to be lost because of the skull.
“An analogy would be that as you walk away from a concert hall where there’s music from a range of instruments,” he said.
“As you go further away you will stop hearing high frequency elements like the violin and viola, but still hear the trombone and the cello. Well, the further you are away from the brain the more you lose the high frequency patterns.
“What we have done is found them and teased them back using the algorithm so they can be used.”
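Prof. Low’s algorithm itself has not been published, but the general idea he describes – isolating a faint high-frequency band from a scalp EEG trace – can be illustrated with a standard band-pass filter. The Python sketch below is a minimal illustration only; the 256 Hz sampling rate and the 70–110 Hz band are assumptions, not parameters from his work.

```python
# Illustrative sketch only; Prof. Low's actual algorithm is unpublished.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256  # assumed EEG sampling rate in Hz

def highfreq_component(eeg, low_hz=70.0, high_hz=110.0, fs=FS):
    """Band-pass filter a 1-D EEG trace to keep only its faint
    high-frequency band, which the skull strongly attenuates."""
    nyq = fs / 2.0
    b, a = butter(4, [low_hz / nyq, high_hz / nyq], btype="band")
    return filtfilt(b, a, eeg)

# Example: a weak 80 Hz burst buried under much larger slow activity.
t = np.arange(0, 2.0, 1.0 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.05 * np.sin(2 * np.pi * 80 * t)
recovered = highfreq_component(eeg)  # the 80 Hz component, isolated
```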
Prof. Low said that when Prof. Hawking thought about moving his limbs, this produced a signal that could be detected once the algorithm had been applied to the EEG data.
He said this could act as an “on-off switch” and produce speech if a bridge were built to a system similar to the one already used for cheek detection.
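As a rough illustration of the “on-off switch” idea, the hypothetical sketch below treats an imagined movement as detected when the signal power in a trial window exceeds a resting baseline. The function name and the threshold factor are invented for illustration, not drawn from the project.

```python
import numpy as np

def is_switch_on(trial_window, baseline_window, factor=2.0):
    """Treat an imagined movement as 'on' when the trial window's mean
    signal power exceeds the resting baseline by `factor`. The threshold
    is illustrative, not a published parameter."""
    trial_power = np.mean(np.square(trial_window))
    baseline_power = np.mean(np.square(baseline_window))
    return trial_power > factor * baseline_power
```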
Prof. Low said further work was needed to see whether his equipment could distinguish different types of thought – such as imagining moving a left hand and a right leg.
If so, he said, Prof. Hawking could use different combinations to create different kinds of virtual gesture, speeding up the rate at which he could select words.
To establish whether this is the case, Prof. Low plans trials with other patients in the US.
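One conventional way such trials might test whether two imagined movements are distinguishable – not necessarily Prof. Low’s method – is to extract a feature vector from each trial and cross-validate a simple classifier. The sketch below uses random stand-in data and scikit-learn’s linear discriminant analysis purely to show the shape of that analysis.

```python
# Stand-in data: in a real trial, X would hold band-power features per
# EEG channel and y the imagined movement on each trial.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels = 120, 16
X = rng.normal(size=(n_trials, n_channels))  # placeholder features
y = rng.integers(0, 2, size=n_trials)        # 0 = left hand, 1 = right leg

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print("accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
# Accuracy reliably above chance (0.5) would suggest the two imagined
# movements produce separable EEG patterns.
```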
The US chipmaker Intel announced in January that it had also started work on a new communication system for Prof. Hawking, after he asked the firm’s co-founder, Gordon Moore, whether it could help him.
It is attempting to develop new 3D facial gesture recognition software to speed up the rate at which Prof. Hawking can write.
“These gestures will control a new user interface that takes advantage of the multi-gesture vocabulary and advances in word prediction technologies,” a spokeswoman said.
“We are working closely with Professor Hawking to understand his needs and design the system accordingly.”
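Intel has not published details of its interface, but the word prediction it mentions can be sketched in its simplest form: ranking candidate completions of a typed prefix by how often each word appears in a reference corpus. The toy corpus below is invented for illustration.

```python
from collections import Counter

# Toy corpus standing in for a user's writing history.
corpus = "the theory of the universe is that the universe expands".split()
freq = Counter(corpus)

def predict(prefix, k=3):
    """Return the k most frequent known words starting with `prefix`."""
    matches = [(w, c) for w, c in freq.items() if w.startswith(prefix)]
    return [w for w, _ in sorted(matches, key=lambda wc: -wc[1])[:k]]

print(predict("th"))  # -> ['the', 'theory', 'that']
```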
Scientists at the University of California, Berkeley, have demonstrated a striking method for reconstructing words based on the brain waves of patients thinking of those words.
The method, reported in PLoS Biology, relies on gathering electrical signals directly from patients’ brains.
Based on signals from listening patients, a computer model was used to reconstruct the sounds of words that patients were thinking of.
The technique may in future help comatose and locked-in patients communicate.
Several approaches have in recent years suggested that scientists are closing in on methods to tap into our very thoughts.
In a 2011 study, participants with electrodes in direct brain contact were able to move a cursor on a screen by simply thinking of vowel sounds.
A technique called functional magnetic resonance imaging (fMRI), which tracks blood flow in the brain, has shown promise for identifying which words or ideas someone may be thinking about.
By studying patterns of blood flow related to particular images, Jack Gallant’s group at the University of California, Berkeley, showed in September that such patterns could be used to guess images being thought of – recreating “movies in the mind”.
Now, Brian Pasley of the University of California, Berkeley, and a team of colleagues have taken that “stimulus reconstruction” work one step further.
“This is inspired by a lot of Jack’s work,” Dr. Pasley said. “One question was… how far can we get in the auditory system by taking a very similar modelling approach?”
The team focused on an area of the brain called the superior temporal gyrus, or STG.
This broad region is not just part of the hearing apparatus but one of the “higher-order” brain regions that help us make linguistic sense of the sounds we hear.
The team monitored the STG brain waves of 15 patients who were undergoing surgery for epilepsy or tumours, while playing audio of a number of different speakers reciting words and sentences.
The trick is disentangling the chaos of electrical signals that the audio brought about in the patients’ STG regions.
To do that, the team employed a computer model that helped map out which parts of the brain were firing at what rate, when different frequencies of sound were played.
With the help of that model, when patients were presented with words to think about, the team was able to guess which word the participants had chosen.
The scientists were even able to reconstruct some of the words, turning the brain waves they saw back into sound on the basis of what the computer model suggested those waves meant.
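The study’s actual model is more elaborate, but the core of this kind of stimulus reconstruction can be sketched as a regularized linear map from electrode activity to the sound’s spectrogram, which can then be applied to new brain data to predict the spectrogram of words the model never heard. The shapes and random data below are placeholders, not the study’s recordings.

```python
# Hedged sketch of linear stimulus reconstruction, with placeholder data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_times, n_electrodes, n_freq_bins = 1000, 64, 32
neural = rng.normal(size=(n_times, n_electrodes))      # STG activity per time step
spectrogram = rng.normal(size=(n_times, n_freq_bins))  # sound energy per frequency bin

# Fit the neural-to-spectrogram map on one part of the data...
model = Ridge(alpha=1.0).fit(neural[:800], spectrogram[:800])
# ...then predict spectrogram frames for brain activity it has not seen.
reconstructed = model.predict(neural[800:])
# Turning predicted spectrogram frames back into audible sound would need
# a further inversion step (e.g. the Griffin-Lim algorithm).
```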
“There’s a two-pronged nature of this work – one is the basic science of how the brain does things,” said Robert Knight of UC Berkeley, senior author of the study.
“From a prosthetic view, people who have speech disorders… could possibly have a prosthetic device when they can’t speak but they can imagine what they want to say,” Prof. Knight explained.
“The patients are giving us this data, so it’d be nice if we gave something back to them eventually.”
The authors caution that the thought-translation approach will need to be vastly improved before such prosthetics become a reality.
But the benefits of such devices could be transformative, said Mindy McCumber, a speech therapist at Florida Hospital in Orlando.
“As a therapist, I can see potential implications for the restoration of communication for a wide range of disorders,” she said.
“The development of direct neuro-control over virtual or physical devices would revolutionise ‘augmentative and alternative communication’, and improve quality of life immensely for those who suffer from impaired communication skills or means.”