Werner Karl Heisenberg (1901-1976)

Werner Heisenberg was a German theoretical physicist who made foundational contributions to quantum theory. He is best known for developing the matrix mechanics formulation of quantum mechanics in 1925 and for formulating the uncertainty principle in 1927, although he also made important contributions to nuclear physics, quantum field theory and particle physics. He was awarded the 1932 Nobel Prize in Physics “for the creation of quantum mechanics”.
Werner Karl Heisenberg was born in Würzburg, Germany, on 5 December 1901, the son of a secondary-school teacher of classical languages. He studied physics and mathematics from 1920 to 1923 at Munich and Göttingen under such illustrious teachers as Arnold Sommerfeld, Wilhelm Wien, Max Born, James Franck and David Hilbert. Sommerfeld in particular encouraged Heisenberg’s interest in atomic physics and introduced him to Niels Bohr’s work on quantum physics.

From 1924 to 1927, Heisenberg lectured at the University of Göttingen, and conducted research with Niels Bohr at the University of Copenhagen. It was during this time that the young Heisenberg developed the “matrix mechanics” formulation of quantum mechanics (in collaboration with Max Born and Pascual Jordan). Matrix mechanics was the first complete and correct definition of quantum mechanics, and it extended the Bohr model of atoms by describing how the quantum jumps occur and by interpreting the physical properties of particles as matrices that evolve over time.
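
As a concrete illustration (in modern notation rather than Heisenberg’s original 1925 formalism), observables such as position X and momentum P are represented by matrices satisfying the canonical commutation relation, and an observable A evolves in time according to the Heisenberg equation of motion:

$$[X, P] = i\hbar, \qquad \frac{dA}{dt} = \frac{i}{\hbar}[H, A] + \frac{\partial A}{\partial t},$$

where H is the Hamiltonian (energy) matrix of the system.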

It was also in Copenhagen that Heisenberg developed his famous uncertainty principle, which he first described in a letter to Wolfgang Pauli in 1927. The uncertainty principle (Heisenberg actually used the word “Ungenauigkeit”, or “imprecision”) states that the values of certain pairs of variables cannot both be known with complete precision, not so much due to the limitations of the researcher’s ability to measure them, but rather due to the very nature of the system itself. For example, if a particle is forced to take on a specific, precise position, then the particle’s speed or momentum cannot be precisely defined (and vice versa).
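
In the modern formulation (due to Kennard rather than to Heisenberg’s original semi-quantitative argument), the position-momentum uncertainty relation is usually written in terms of standard deviations as

$$\sigma_x \, \sigma_p \ge \frac{\hbar}{2},$$

where ħ is the reduced Planck constant: making σx small necessarily makes σp large, and vice versa.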

In 1927, Heisenberg became professor of theoretical physics and head of the department of physics at the University of Leipzig, and his students included several who went on to distinguish themselves internationally in theoretical physics. In early 1929, he and Pauli submitted the first of two papers laying the foundation for relativistic quantum field theory. He was awarded the 1932 Nobel Prize in Physics “for the creation of quantum mechanics, the application of which has, inter alia, led to the discovery of the allotropic forms of hydrogen”, although he always believed that the prize should have been shared with his collaborators on matrix mechanics, Max Born and Pascual Jordan.

The “Deutsche Physik” movement of the early 1930s was strongly anti-Semitic and biased against theoretical physics, especially quantum mechanics and the theory of relativity. When Adolf Hitler became Chancellor of Germany in 1933, some of the leading German theoretical physicists, including Arnold Sommerfeld, Max Planck and Heisenberg himself, found themselves attacked and ostracized, particularly by the movement’s two most prominent supporters, the Nobel laureates Philipp Lenard and Johannes Stark. No less a personage than the head of the SS, Heinrich Himmler, became involved in the so-called “Heisenberg Affair”, calling him a “white Jew” who should be made to “disappear”. But Heisenberg fought back with an editorial and a letter to Himmler, and the matter was eventually resolved.
Heisenberg met Elisabeth Schumacher at a private music recital in 1937 (he enjoyed classical music and was himself an accomplished pianist) and the two were married soon after. Their twins, Maria and Wolfgang, were born in 1938, followed by five more children over the next 12 years: Barbara, Christine, Jochen, Martin and Verena.
In 1939, Heisenberg travelled to the United States to visit Samuel Abraham Goudsmit at the University of Michigan, but refused an invitation to emigrate to the United States. Back in Germany, in 1939, shortly after the discovery of nuclear fission, Heisenberg became one of the principal scientists leading research and development in the German nuclear energy project, known as the “Uranium Club”, and he travelled to German-occupied Copenhagen in 1941 to lecture and discuss nuclear research and theoretical physics with Niels Bohr. In 1942, he was asked by the Nazi administration to direct the Uranium Club’s research more toward developing nuclear weapons and, when Heisenberg prevaricated, authority over and regulation of the project were changed.

In 1943, Heisenberg was appointed to the Chair for Theoretical Physics at the University of Berlin (today’s Humboldt University) and was elected to the Prussian Academy of Sciences. But, as Allied bombing of Berlin increased, he moved his family to their rural retreat in Urfeld, and later joined them there. In May 1945, at the end of the war, Heisenberg was picked up by Operation Alsos (along with nine other prominent German scientists working in the nuclear field) and was incarcerated for a time in England under Operation Epsilon.

On his release in 1946, he settled in Göttingen, where he worked as director of the Max Planck Institute for Physics until 1958, and then of the expanded Max Planck Institute for Physics and Astrophysics in Munich until 1970. Throughout this period, he continued to lecture across the world and to publish papers, including works on superconductivity, turbulence and cosmic-ray showers, as well as being appointed to various councils, commissions and associations, and receiving numerous honours and awards.
Heisenberg died of cancer of the kidneys and gall bladder at his Munich home on 1 February 1976, aged 74.

Cooperation of U.S., Russian scientists helped avoid nuclear catastrophe at Cold War’s end, Stanford scholar says

BY CLIFTON B. PARKER, Stanford University

When the Soviet Union collapsed in 1991, the worry in the West was what would happen to that country’s thousands of nuclear weapons. Would “loose” nukes fall into the hands of terrorists, rogue states, criminals – and plunge the world into a nuclear nightmare?


Fortunately, scientists and technical experts in both the U.S. and the former Soviet Union rolled up their sleeves to manage and contain the nuclear problem in the dissolving Communist country.

One of the leaders in this relationship was Stanford engineering professor Siegfried Hecker, who served as a director of the Los Alamos National Laboratory before coming to Stanford as a senior fellow at the Center for International Security and Cooperation. He is a world-renowned expert in plutonium science, global threat reduction and nuclear security.

Hecker cited one 1992 meeting with Russian scientists in Moscow who were clearly concerned about the risks. In his new book, Doomed to Cooperate: How American and Russian scientists joined forces to avert some of the greatest post-Cold War nuclear dangers, Hecker quoted one Russian expert as saying, “We now need to be concerned about terrorism.”

Earning both scientific and political trust was key, said Hecker, also a senior fellow at Stanford’s Freeman Spogli Institute for International Studies. The Russians were proud of their scientific accomplishments and highly competent in the nuclear business – and they sought to show this to the American scientists, who became very confident in their Russian counterparts’ technical capabilities as they learned more about their nuclear complex and toured the labs.

Economic collapse, political turmoil

But the nuclear experts faced an immense problem. The Soviets had about 39,000 nuclear weapons in their country and in Eastern Europe and about 1.5 million kilograms of plutonium and highly enriched uranium (the fuel for nuclear bombs), Hecker said. Consider that the bomb the U.S. dropped on the Japanese city of Nagasaki in 1945 used only about six kilograms of plutonium, he added. Meanwhile, the U.S. had about 25,000 nuclear weapons in the early 1990s.

Hecker and the rest of the Americans were deeply concerned about the one million-plus Russians who worked in nuclear facilities. Many faced severe financial pressure in an imploding society and thus constituted a huge potential security risk.

“The challenge that Russia faced with its economy collapsing was enormous,” he said in an interview.

The Russian scientists, Hecker said, were motivated to act responsibly because they realized the awful destruction that a single nuclear bomb could wreak. Hecker noted that one Russian scientist told him, “We arrived in the nuclear century all in one boat, and a movement by anyone will affect everyone.” “Therefore, you know, we were doomed to work together, to cooperate,” Hecker added.

All of this depended on the two governments involved easing nuclear tensions while allowing the scientists to collaborate. In short order, the scientists developed mutual respect and trust to address the loose nukes scenario.

The George H.W. Bush administration launched nuclear initiatives to put the Russian government at ease. For example, it took the nuclear weapons off U.S. Navy surface ships and some of its nuclear weapons off alert to allow the Russians to do the same. The U.S. Congress passed the Nunn-Lugar Cooperative Threat Reduction legislation, which helped fund some of the loose nuke containment efforts.

While those were positive measures, Hecker said, it was ultimately the cooperation among scientists, what they called lab-to-lab cooperation, that allowed the two former superpower enemies to “get past the sensitivity barriers” and make “the world a safer place.”

Since the end of the Cold War, no significant nuclear event has occurred as a result of the dissolution of the Soviet Union and its nuclear complex, Hecker noted.

Lesson: cooperation counts

One lesson from it all, Hecker said, is that government policymakers need to understand that scientists and engineers can work together and make progress toward solving difficult, dangerous problems.

“We don’t want to lose the next generation from understanding what can actually be done by working together,” he said. “So, we want to demonstrate to them, Look, this is what was done when the scientists were interested and enthusiastic and when the government gave us enough room to be able to do that.”

Hecker said this scientific cooperation extended to several thousand scientists and engineers at the Russian sites and at U.S. nuclear labs – primarily the three defense labs: Lawrence Livermore, Los Alamos, and Sandia national laboratories. Many technical exchanges and visits between scientists in Russia and the United States took place.

He recalled visiting some of the nuclear sites in Russian cities shrouded by mystery. “These cities were so secret, they didn’t even appear on Soviet maps.”

Change of threat

When the Soviet Union collapsed, the nature of the nuclear threat changed, Hecker said. The threat before was one of mutual annihilation, but now the threat changed to what would happen if nuclear assets were lost, stolen or somehow evaded the control of the government.

“From an American perspective we referred to these as the ‘four loose nuclear dangers,’” he said.

This included securing the loose nukes in the Soviet Union and Eastern Europe; preventing nuclear materials or bomb fuel from getting into the wrong hands; the human element involving the people who worked in the Soviet nuclear complex; and finally, the “loose exports” problem of someone trying to sell nuclear materials or technical components to overseas groups like terrorists or rogue nations.

For Hecker, this is not just an American story. It is about a selfless reconciliation with a longtime enemy for the greater global good, a relationship not corrupted by ideological or nationalistic differences, but one reflective of mutual interests of the highest order.
“The primary reason,” he said, “why we didn’t have a nuclear catastrophe was the Russian nuclear workers and the Russian nuclear officials. Their dedication, their professionalism, their patriotism for their country was so strong that it carried them through these times in the 1990s when they often didn’t get paid for six months at a time … The nuclear complex did its job through the most trying times. And it was a time when the U.S. government took crucial conciliatory measures with the new Russian Federation and gave us scientists the support to help make the world a safer place.”

Genes and the American Dream

Source: Scientific American

New study reveals that only wealthy Americans realize genetic potential

By David Z. Hambrick on March 29, 2016

Nearly a century after James Truslow Adams coined the phrase, the “American dream” has become a staple of presidential campaign speeches. Kicking off her 2016 campaign, Hillary Clinton told supporters that “we need to do a better job of getting our economy growing again and producing results and renewing the American dream.” Marco Rubio lamented that “too many Americans are starting to doubt” that it is still possible to achieve the American dream, and Ted Cruz asked his supporters to “imagine a legal immigration system that welcomes and celebrates those who come to achieve the American dream.” Donald Trump claimed that “the American dream is dead” and Bernie Sanders quipped that for many “the American dream has become a nightmare.”

But the American dream is not just a pie-in-the-sky notion—it’s a scientifically testable proposition. The American dream, Adams wrote, “is not a dream of motor cars and high wages merely, but a dream of social order in which each man and each woman shall be able to attain to the fullest stature of which they are innately capable…regardless of the fortuitous circumstances of birth or position.” In the parlance of behavioral genetics—the scientific study of genetic influences on individual differences in behavior—Adams’ idea was that all Americans should have an equal opportunity to realize their genetic potential.    

A study just published in Psychological Science by psychologists Elliot Tucker-Drob and Timothy Bates reveals that this version of the American dream is in serious trouble. Tucker-Drob and Bates set out to evaluate evidence for the influence of genetic factors on IQ-type measures (aptitude and achievement) that predict success in school, work, and everyday life. Their specific question was how the contribution of genes to these measures would compare at low versus high levels of socioeconomic status (or SES), and whether the results would differ across countries. The results reveal, ironically, that the American dream is more of a reality for other countries than it is for America: genetic influences on IQ were uniform across levels of SES in Western Europe and Australia, but, in the United States, were much higher for the rich than for the poor.

For their analysis, Tucker-Drob and Bates identified 14 studies, with a total sample of nearly 25,000 pairs of twins. Each study compared identical twins to fraternal twins on an IQ measure, and also collected information on SES, including income, wealth, and education. Identical twins share 100 percent of their genes, whereas fraternal twins share, on average, only 50 percent of their genes. Thus, if variation across people in a trait is influenced by genes, identical twins will be more similar to each other on that trait than fraternal twins will be. Degree of genetic influence—or heritability—is determined by statistically comparing the identical twins and the fraternal twins. Tucker-Drob and Bates then aggregated the findings of the studies using meta-analysis, a statistical tool that quantitatively synthesizes the results of multiple studies.
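
To make the comparison concrete, here is a minimal sketch of the twin-comparison logic using Falconer’s classic formula; it is an illustration only, with made-up correlation values, and is far simpler than the variance-decomposition models actually fitted in the meta-analysis.

```python
# Minimal sketch of how twin correlations yield a heritability estimate.
# Falconer's formula: h^2 = 2 * (r_MZ - r_DZ), where r_MZ and r_DZ are the
# within-pair IQ correlations for identical (monozygotic) and fraternal
# (dizygotic) twins. The values below are illustrative, not the study's data.

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Estimate heritability from identical- and fraternal-twin correlations."""
    return 2.0 * (r_mz - r_dz)

# Hypothetical example: identical twins correlate 0.75, fraternal twins 0.45.
h2 = falconer_heritability(r_mz=0.75, r_dz=0.45)
print(f"Estimated heritability: {h2:.2f}")  # -> 0.60
```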

The results were striking. For studies conducted in the U.S., the heritability of IQ was 61% at the 95th percentile for SES, compared to just 24% at the 5th percentile. (In more technical terms, there was a Gene x SES interaction, meaning that the genetic contribution to IQ varied as a function of SES.) By contrast, for studies conducted in Western Europe or Australia, heritability was statistically no different across levels of SES. IQ was as heritable for the poor as for the rich.

What might explain this finding? The new study doesn’t pinpoint the cause, but the leading hypothesis is that social policies in countries like Sweden, Australia, and Germany create living conditions that facilitate genetic influences on intellectual functioning. In these countries, people have relatively equal access to high-quality education and healthcare, and childhood poverty rates are low. Much like fertile soil allows plants to reach their maximum height, these conditions are hypothesized to promote the expression of any genetic differences in IQ that exist. By contrast, in the U.S., an estimated 33 million people still do not have health insurance, and there are dramatic differences across school districts in quality of education.

The results of Tucker-Drob and Bates’ analysis are as important as they are sobering. As economists have documented, the gap between the rich and poor in the United States is vast and widening. As a consequence, large numbers of Americans cannot afford even the basic necessities of life. Until they can, the American dream will likely remain a dream that these Americans have little hope of fulfilling.

New Clues Show Out-of-Control Synapse Pruning May Underlie Alzheimer’s

Source: http://www.scientificamerican.com/article/new-clues-show-out-of-control-synapse-pruning-may-underlie-alzheimer-s/

A study in mice shows that the normal process by which the brain prunes excess synapses during development may be hijacked early on in the progression of Alzheimer’s and other neurodegenerative diseases
By Jordana Cepelewicz on March 31, 2016

To many of us, Alzheimer’s disease is a familiar and terrifying malady. Afflicting an estimated 5.3 million people in the U.S. alone, the disorder slowly and relentlessly robs patients of memory, judgment and perception—eventually corroding even their ability to perform everyday tasks. The mechanisms that underlie these symptoms are not yet fully understood. The disease is largely attributed to an abnormal buildup of proteins, which can form amyloid beta plaques and tangles in the brain that trigger inflammation and result in the loss of brain connections called synapses, the effect most strongly associated with cognitive decline.
In a study published this week in Science, a team of researchers led by neurologist Beth Stevens at Boston Children’s Hospital has found evidence that such synapse loss may in fact occur much earlier in Alzheimer’s disease. Rather than being a secondary effect of these protein pathologies, as experts had previously thought, this process may begin well before plaques form.
Synapse elimination, or “synaptic pruning,” is a normal process that occurs during development—and one that Stevens has been fascinated by for years. She has found that some of the proteins that initiate an immune response to rid the body of pathogens also play a role in tagging weak or unwanted synapses for elimination. This process allows specialized Pac-Man–like brain cells called microglia to engulf the targeted synapses, paving the way for more precise brain wiring.
Throughout our lives the brain’s junctions are trimmed and pruned—a process that is crucial to normal development. Stevens and her team suspected that the mechanisms involved in such pruning might be aberrantly turned back on—hijacked, so to speak—to contribute to synapse loss in Alzheimer’s. “That’s what makes us unique in the way we’re approaching this problem,” she says. “A lot of work that initiated this project stemmed from what we learned about how these pathways work in normal brain development, and as we learn more about how it normally works we think it’ll provide us with novel insight about how to target it in disease.”
The researchers tested their theory using mouse models of Alzheimer’s, employing high-resolution imaging techniques to pinpoint when and where synapse loss occurred. In this rodent model there was a window of time before plaques would appear, during which the researchers observed that the mice were losing synapses in the hippocampus, a brain region responsible for memory and learning.
What was particularly striking, Stevens notes, is that the researchers also found elevated expression of C1q, a protein involved in normal synaptic pruning. “So we wanted to know: Could [such proteins] be contributing to synapse loss in these models?” she says. The researchers knocked out the genes for C1q and C3 (a protein activated by C1q) and found that by doing so, they had protected the mice’s synapses.
To better explain this finding, the team turned to yet another protein, amyloid beta—which in its soluble form, before building up and hardening into plaques, has already been found to be toxic to the synapses. The researchers delivered this toxic form of amyloid beta to three groups of mice: a normal control group, a group that genetically lacked C1q and one treated with an antibody that blocked the function of C1q in the brain. In the first group extensive synapse loss occurred in the hippocampus—loss that did not occur in the mice in which C1q had been inhibited. “This showed us that C1q and amyloid beta were working together in the same pathway,” Stevens says. “That C1q is necessary for amyloid beta to cause this damage.” Using this experimental model, the researchers then observed the behavior of the microglia and found that the soluble form of amyloid beta stimulated microglia to engulf synapses. Inhibiting C1q, however, protected against this effect.
“This study is a major advance in our understanding of the molecular mechanisms underlying Alzheimer’s, in demonstrating a causal role for immune molecules in this disease,” says Kimberley McAllister, a neurobiologist at the University of California, Davis, who did not participate in the research. “It’s really exciting to me,” adds Tara Spires-Jones, a neuroscientist at the University of Edinburgh, also unaffiliated with the study. “It’s bringing together two parts of the field…synapse loss and inflammation problems are linked.”
Manuel Graeber, a neuropathologist at the University of Sydney who did not take part in the study but has worked extensively with microglia, believes these findings will also play an important role in focusing scientists’ attention on these cells’ function. “The paper highlights this maintenance system of the brain that’s not sufficiently appreciated,” he says. Generally researchers associate these cells with an immune response; however, the work by Stevens and her colleagues reveals these same cells in a different light. “And I think it will help correct [that] misconception.”
Stevens’s team believes their results have implications that extend far beyond Alzheimer’s disease. Because synapse loss plays a role in a wide range of other disorders such as autism, schizophrenia, Huntington’s and glaucoma, “we’re excited about the possibility that this is a more global mechanism, that it’s not disease-specific,” Stevens says. She has already begun testing this idea in other disease models.
A recent study linking schizophrenia and variations of the gene C4 also implicates the pathway involved in synaptic pruning. “These findings suggest that new therapeutics targeting this pathway could treat a broad range of neurodegenerative and psychiatric disorders,” McAllister says.
Developing such treatments, however, remains far in the future. The researchers first need to test their findings in humans—and there are other factors to consider. For example, they have not yet determined the mechanism responsible for initially switching C1q on. That factor, Stevens says, “may be relevant to a lot of diseases—something upstream we can also think about targeting.” C1q also plays a positive role in the brain by clearing out dead cells and helping target harmful materials, so learning how to manipulate its presence to prevent debilitating synapse loss while maintaining its normal functions will require further research.
Furthermore, other research groups have identified different avenues for targeted therapy. In another rodent study published this week in Science Translational Medicine a group of researchers from multiple institutions identified a pathway responsible for the formation of amyloid plaques. More specifically, they found that heparan sulfate, a class of molecule found on the surface of cells (including neurons), essentially “traps” amyloid peptides, causing them to aggregate and form deposits that will eventually lead to neurodegeneration and dementia. When the research team deleted a gene that allows heparan sulfate to adhere to the surface of neurons, they found much less amyloid plaque. “Rather than using an immune method or targeting an enzyme, which have side effects, we want to target this specific pathway so that the brain can naturally clear amyloid-beta peptides when they’re not trapped by heparan sulfate,” says Guojun Bu, a neuroscientist at Mayo Clinic in Jacksonville, Fla., and the study’s lead author.
Similarly, Stevens is optimistic about the future utility of her findings for developing therapies. “C1q seems like a good target,” Stevens says. “We don’t have evidence that [such proteins are] driving the whole process but we think it’s an early part, and if you knock it out or manipulate it, you could have the promise of an early impact and protecting at least the synaptic part of the story.”

A turnaround specialist reshapes Computer Sciences Corp.

Viewed from a certain angle, Falls Church, Va.-based Computer Sciences Corp. looks like a case study of everything people think is wrong with American capitalism: Misleading investors through rosy accounting. Golden parachutes and excessive executive pay. Ruthless cost cutting and outsourcing of jobs overseas. Plumping up share prices through stock buybacks, special dividends and other feats of financial engineering.
Yet from another angle you can see in this Beltway information services giant everything that has made American business successful, competitive and resilient: a willingness to acknowledge mistakes and failure, an intolerance for mediocrity and inefficiency, an embrace of globalization, an ability to adapt to changing technology and market conditions, a laser-like focus on customers and investors.
The current act in Computer Sciences’ corporate drama is largely being written by Mike Lawrie, a brash 26-year veteran of IBM who rose to become its top salesman. Having participated in IBM’s successful transformation in the late 1990s, Lawrie set out to try his own hand as a corporate turnaround specialist, first at Siebel Systems, where he lasted less than a year before falling out with its founder, and then at the British software firm Misys, which he sold to private-equity investors for a handsome premium five years later.
When Lawrie arrived in Falls Church in March 2012, CSC was in turmoil. The company was facing $2 billion in losses and write-downs from a troubled contract to computerize patient records for Britain’s National Health Service. The Securities and Exchange Commission was on the verge of charging the company with accounting lapses and failing to disclose the problems with the NHS contract.
Closer to home, CSC had performed so badly on a contract to modernize the computer system at the Internal Revenue Service that at one point the agency had mistakenly sent out $300 million in fraudulent tax refunds. The Air Force was preparing to write off $1 billion it paid CSC for a new logistics management system that never worked. Meanwhile, in its commercial business, CSC’s technology and cost structure had become uncompetitive. It trailed rivals in moving work to India and other lower-cost locations while failing to anticipate the shift toward cloud computing and standardized software.
As a result of these missteps, when Lawrie arrived CSC was about to report a $4 billion net loss for the year. Its stock, which had been trading as high as $56 a year earlier, had fallen as low as $23 per share. The board of directors had finally fired the previous chief executive, Michael Laphen, a 26-year veteran, who nonetheless walked away with an $11 million severance package and retirement benefits estimated at nearly $1 million a year.
“We were close to slipping under the waves,” Lawrie said.
What he found was a corporate culture fixated on revenue growth rather than creating value for customers and shareholders. The standard for success was best efforts, not best results. People below were unwilling to deliver bad news to those at the top — and those at the top were unwilling to receive it. “There was no urgency, no accountability,” he said.
Lawrie also found a company highly decentralized in its management and structure. Business units were free to do their own procurement, structure their own contracts, choose their own technology, adopt their own business practices and craft their own compensation incentives. There was no unifying strategy or vision.
“When I took my first look at the company before taking the job,” Lawrie said, “honestly I couldn’t figure it out.”
A company’s rise

Computer Sciences Corp. traces its roots to the early years of the computer age. The company, originally based in the Los Angeles area, was founded in 1959 by Roy Nutt, an IBM engineer who was part of the team that created the computer language Fortran, and Fletcher Jones, who had managed the computer center at North American Aviation, an aerospace contractor. Together, Nutt and Jones wrote the system software for every major mainframe computer, making it possible for more enterprises to computerize their operations.
In the 1960s, CSC switched from serving computer makers to serving computer users — in particular, the biggest user of all: the federal government. In the 1970s, CSC became a big player in time-sharing, renting its mainframe computers to customers by the minute. In the 1980s, it rode the wave of systems integration, helping companies tie their various computer systems together.
It was during the 1990s, however, that CSC really took off. Nearly every large company, along with many government agencies, moved to outsource information technology operations — and with it, their existing hardware, software and employees. Companies such as IBM, EDS and CSC competed for these multiyear, multibillion-dollar contracts. Shortly after the company moved its headquarters to the Washington region in 2008, its revenue topped $16 billion, with 95,000 employees worldwide and a balance sheet loaded with its customers’ computer systems, each one custom designed and programmed.
Getting contracts was one thing, executing on them was another, as the IRS, the Air Force and Britain’s National Health Service would discover. The common rap on CSC was that while it was pretty good at running data centers or programming software that accomplished a specific task, it was much weaker in understanding its customers’ overall business needs and processes and designing effective computer solutions.
CSC, of course, was hardly the only IT service company to have over-promised and under-delivered. But soon after Lawrie and his new chief financial officer, Paul Saleh, settled in, they identified 40 contracts — representing about one-third of the company’s business — that were in trouble, either because of execution failures or because they had fallen short of profitability goals. To fix them they would have to fix the company first.
Their approach was bold and ruthless. Every member of the top management team save one (the general counsel) was replaced — in a few cases, more than once. Twelve layers of management were reduced to seven, with hundreds of vice presidents and directors losing their titles. Centralized purchasing and financial systems were put in place. At headquarters and elsewhere, private offices were torn down in favor of open-floor formats. Regular customer-satisfaction surveys were instituted, with the results used in setting management bonuses.
Computer centers were closed, consolidated or moved to lower-cost locations, both in the United States (Pittsburgh, Bossier City, La.) and abroad (India, Eastern Europe, Vietnam). Managers were told to evaluate employees based on performance, not just effort, with suggestions of a bell-curve-like grading system in which 40 percent of employees would be rated as “below expectations.” Thousands were laid off, denied promotions or encouraged to leave, reducing worldwide employment to 68,500 (including 7,000 in the Washington area, about half of what it once was). “Non-core” divisions, both profitable and unprofitable, were sold off even as other companies were acquired to bolster offerings in cloud computing and cybersecurity.
As for those troubled or unprofitable contracts, most were renegotiated or restructured, including the big one with the British health agency, which now uses CSC software in a growing number of its hospitals and regions. Some corporate clients were persuaded to take activities they had outsourced in-house again. Other contracts were simply allowed to expire.
Then, after years of negotiation, CSC last month finally settled with the SEC. Although it did not admit or deny that its executives had conspired to mislead investors about the troubled NHS contract, as revealed in e-mails uncovered by investigators, the company agreed to restate its financial results and pay a $190 million fine — on top of the $125 million spent on legal and accounting fees during the investigation. At the insistence of the government, former CEO Laphen agreed to pay a fine of $750,000 and return $3.7 million in bonus money he had received as a result of the inflated earnings.
Stock surges with turnaround

Computer Sciences Corp. today is considerably smaller, more focused and more profitable than it was three years ago. In the fiscal year ending in March, revenue was just over $12 billion, down from nearly $15 billion in 2012. Annual operating costs have been reduced by more than $3 billion. And if you’re willing to look past all the one-time restructuring and settlement charges and asset write-downs, which continue to be significant, what you find is a company that three years ago was barely breaking even and now posts an annual operating profit of close to $1 billion.
The big beneficiaries of this turnaround have been CSC’s shareholders, whose stock is trading at $65 per share, more than twice what it was when Lawrie took over. The overall rise in the stock market, plus the $3 billion in company funds committed to stock buybacks and special dividends, could explain about half of that increase. But the rest is certainly a reflection of the turnaround in CSC’s operations and Wall Street’s confidence in Lawrie.
Lawrie himself has also been a big winner. His pay package last year was valued at $15.4 million, including a $1.8 million bonus tied to financial goals that would normally be considered rounding errors at a $12 billion company: a $100 million increase in revenue, a $75 million increase in operating income and an $11 million increase in free cash flow. The pay package also included a grant of stock valued at $8.6 million for exceeding the rather modest goal of a 4 percent increase in operating earnings per share. Those grants brought the total value of Lawrie’s stake in CSC stock, after three years, to $56 million.
Source: http://www.washingtonpost.com/business/a-turnaround-specialist-reshapes-computer-sciences-corp/2015/07/30/c2be7c9c-32dd-11e5-97ae-30a30cca95d7_story.html

Supercomputers: Barack Obama orders world’s fastest computer

President Barack Obama has signed an executive order calling for the US to build the world’s fastest computer by 2025.

The supercomputer would be 20 times quicker than the current leading machine, which is in China.

It would be capable of making one quintillion (a billion billion) calculations per second – a figure which is known as one exaflop.

A body called the National Strategic Computing Initiative (NSCI) will be set up to research and build the computer.

The US is seeking the new supercomputer, significantly faster than today’s models, to perform complex simulations and to aid scientific research and national security projects.

It is hoped the machine would help to analyse weather data for more accurate forecasts or assist in cancer diagnoses by analysing X-ray images.

A blog post on the White House website also suggests it could allow NASA scientists to model turbulence, which might enable the design of more streamlined aircraft without the need for extensive wind tunnel testing.

Such a computer would be called an exascale machine.

Bigger models

Richard Kenway at the University of Edinburgh says he thinks the plan is “spot on” in terms of strategy, bringing together both the ambition to develop new hardware and also improved analysis of big data.

He explained the computer could aid the development of personalised medicines, tailored to specific individuals.

“Today, drugs are designed for the average human and they work OK for some people but not others,” he told the BBC.

“The real challenge in precision medicine is to move from designing average drugs to designing drugs for the individual because you can know their genome and their lifestyle.”

There could also be benefits in long-term climate modelling, according to Mark Parsons at the Edinburgh Parallel Computing Centre (EPCC).

Currently, climate scientists attempt to model how the Earth’s climate will evolve in coming years, but the accuracy of these predictions is severely limited.

Today’s fastest supercomputer, the Tianhe-2 at China’s National Supercomputer Centre in Guangzhou, performs at 33.86 petaflops (quadrillions of calculations per second), almost twice as fast as the second-quickest machine, which is American.

For Parsons, the latest US initiative is a clear attempt to challenge the dominance of the Chinese in this field.

“The US has woken up to the fact that if it wants to remain in the race it will have to invest,” he told the BBC.

£60m electricity bill

Both Kenway and Parsons point out that the challenges of building an exascale computer are not trivial and would require years of research and development.

Chief among the obstacles, according to Parsons, is the need to make computer components much more power efficient. Even then, the electricity demands would be gargantuan.

“I’d say they’re targeting around 60 megawatts, I can’t imagine they’ll get below that,” he commented. “That’s at least £60m a year just on your electricity bill.”
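
As a rough back-of-envelope check of that figure (a sketch only: the electricity price used below is an assumed illustrative value, not a number from the article):

```python
# Back-of-envelope: running a 60 MW machine around the clock for a year.
# The unit price of electricity is an assumed value (~£0.11 per kWh).

power_mw = 60                  # assumed sustained power draw, megawatts
hours_per_year = 24 * 365      # 8,760 hours
price_per_kwh = 0.11           # assumed price in GBP per kWh

energy_kwh = power_mw * 1_000 * hours_per_year   # about 525.6 million kWh
annual_cost = energy_kwh * price_per_kwh         # about £58 million

print(f"Energy per year: {energy_kwh / 1e6:.1f} GWh")
print(f"Approximate annual cost: £{annual_cost / 1e6:.0f} million")
```

At that assumed price the bill comes out near £58m a year, in line with the “at least £60m” quoted above.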

Efforts to construct an exascale computer are not entirely new.

Recently, IBM, the Netherlands Institute for Radio Astronomy (ASTRON) and the University of Groningen announced plans to build one to analyse data from the Square Kilometre Array (SKA) radio telescope project.

SKA will be built in Australia and South Africa by the early 2020s.

Source: http://www.bbc.com/news/technology-33718311

Mathematician: an elite and male-dominated profession

A sociological study by Bernard Zarca. I do not agree with everything in it, but the study is interesting.

The sociology of gender relations has shown that, within the same profession, women's careers advance more slowly than men's, and that the gap is wider in professions with a strong male or female majority (Laufer, 1997). It has also pointed to the misfortunes of the married woman (Singly, 1987, 2003). Marriage, like the presence of children, is an asset for men's careers and a handicap for women's. This is particularly true in the higher professions, notably the scientific ones; for example, in the engineering profession analysed by Catherine Marry (Marry, 2004). The analyses that follow are devoted to the profession of mathematician, which is relatively close to the latter [1].
Throughout these analyses we will use the word sex rather than the word gender, even though the latter has rightly served to emphasise the social construction of the relation of domination between the sexes. We do so not only because, like age, and unlike social class (the two other major factors of social differentiation), the sexed condition is an anthropological given that it would be futile to deny, but also because the word gender suggests a kind of neutrality with regard to sexuality, whereas sexuality must be considered in its polarities in order to account fully for the identity struggles played out between men and women when they belong to the same professional field.

The “hard” sciences remain among the least open to women, despite the cultural revolution that took place in the West during the last decades of the twentieth century. Women who enter them face a difficult choice between the investments demanded by a professional ethos that sets the standard of excellence very high and is marked by masculinity, and those corresponding to their family role. Although the constraints associated with that role have partially relaxed, and although it offers gratifications to which men have so far been much less sensitive, those constraints remain strong nonetheless.

The profession of “academic mathematician” recruits its members from among the most gifted students of their generation, including a high proportion of graduates of the grandes écoles scientifiques, chiefly the écoles normales supérieures. What effect does this initial elite filter have on the careers and the positions held by mathematicians in France? Is the effect the same for both sexes? In a profession energised by a highly competitive struggle for recognition, inseparable from its members' shared passion for doing mathematics, why do women's careers run up against the “glass ceiling”? And, first of all, why are women still so scarce in this profession? We would like to offer some elements of an answer to these questions.

1. AN ELITE PROFESSION, SOCIALLY SELECTIVE AND MALE-DOMINATED

Mathematical understanding requires a form of abstract intelligence whose formation seems to depend less on the sociocultural environment in which one is raised than on strictly academic learning. The universal character of mathematics makes communication and professional collaboration easy from one end of the planet to the other. One may nonetheless ask whether the chances of entering the profession, which is the outcome of a progressive selection, depend on social and cultural inheritance, and not only on what biological chance or necessity has inscribed in the structure of young brains.

It would shock the members of the highly universalist mathematical community if their professional relations bore the mark of their respective social origins. An officer of a learned society who was shown the survey questionnaire, and who noticed that it asked about the parents' occupations, exclaimed incredulously, doubting its relevance: “Does that really matter for mathematicians?” If he meant that such a background counts for little in relations between colleagues as such, and that mathematicians can even come to feel freed from the contingencies of their birth once admitted into a community relatively removed from the ordinary social world and ranking the worth of its members solely by its own demanding criteria, he was no doubt right; if he thought that this origin in no way conditions access to the profession and subsequent careers, he was mistaken.

1.1. STRONG RECRUITMENT FROM THE TEACHING AND RESEARCH MILIEU

Like all higher intellectual professions, mathematicians are mostly “inheritors”: they come preferentially from cultivated social classes. The phenomenon is even more marked among them than among the scientists who are their immediate neighbours: their parents are more often close to the school institution, which values the asceticism of intellectual work (at least one parent in primary, secondary or higher education, or in research). Conversely, they come less often from the most numerous classes, lower or middle, made up of manual workers, clerical employees, intermediate occupations or traditional independent trades (Table 1).

TABLE 1 – SOCIOCULTURAL ORIGIN OF MATHEMATICIANS AND RELATED SCIENTISTS: PERCENTAGE OF CASES IN WHICH AT LEAST ONE PARENT BELONGS TO …

Belonging, through one's family, to a scientific milieu makes familiar certain modes of reasoning whose possession is an asset for students plunged into a world whose elitism is expressed daily. The following testimony from a woman mathematician who, by her origins, was a stranger to that milieu and who was a doctoral student in the 1960s provides an eloquent illustration:

“(…) Since, as was generally the fashion, the professors passed very quickly over the tedious details of the proofs, and since I could not see which known, classical objects they referred to, I could not reconstruct the missing steps…: ‘By a standard argument, one proves that…’, and I felt reduced to the total inferiority of not being able to guess what this standard argument was. I think that when professors make no effort to explain where their ideas, their intuition, come from (for I no longer believe that men were predestined from the cradle to know, as I was to be ignorant), well, they practise, deliberately or not, a racist and sexist attitude towards the categories of people who have not been steeped all their lives in mathematical culture and who have no other way of knowing, outside the teaching itself, where the ideas in play come from.” [2]

Being steeped in a milieu characterised by proximity to intellectual work is not sufficient to become a mathematician, nor indeed is this condition necessary. It does, however, seem to be met more and more often as the generations pass: 5% of mathematicians over fifty have at least one parent who is an academic or a researcher, as against 18% of those under thirty. From the 1950s to the 1970s and 1980s the population of academics and researchers admittedly grew considerably, its weight in the working population probably more than quadrupling [3]. Without pronouncing on the evolution of relative chances, given this considerable long-term increase, one must note that mathematicians are now recruited massively from a narrow socio-occupational milieu: 29% of the oldest have at least one parent in teaching (primary, secondary or higher) or in research, as against 50% of the youngest. Conversely, 47% of the former come from families belonging to the lower or middle classes, compared with 26% of the latter.

What is the exception in society at large is, if not the norm, at least a frequent type among mathematicians and related scientists. Take intellectual precocity. As often as theoretical physicists and mechanicians, a little less often than specialists in mathematics education, but more often than computer scientists, a third of mathematicians skipped one or even several grades in primary school. 42% took the baccalauréat before the age of 18 [4], roughly as many as theoretical physicists and mechanicians, more than computer scientists and fewer than specialists in mathematics education. Take academic excellence. A third obtained the highest distinction (mention TB) at the baccalauréat, the heavy weighting of mathematics in determining the grade in that examination possibly explaining in part a sizeable gap with the related scientists (Table 2).

TABLE 2 – SALIENT FEATURES OF THE SCHOOLING OF MATHEMATICIANS AND RELATED SCIENTISTS: PERCENTAGE OF PERSONS GIVING A POSITIVE ANSWER

1.2. A MALE-DOMINATED PROFESSION WHOSE FEMALE MINORITY IS OVER-SELECTED

The form of intelligence required by mathematics at its highest levels does not, then, develop uniformly across the whole social space. It certainly requires neurons, but also their favourable activation by a socialisation that orients and strengthens “natural” inclinations, something the school cannot ensure in its classrooms alone. To borrow the terms of the title of a book devoted to the men and women science students of the écoles normales, academic excellence, in science especially, is largely a family affair (Ferrand, Imbert, Marry, 1999). In mathematics, however, might this excellence depend on biological sex? Put differently, is the mathematical brain sexed?

It is well documented that girls do slightly less well in mathematics, at least in certain areas such as geometry, during their secondary schooling, mainly at the lycée. The differences between girls and boys in performance on mathematics tests are nonetheless very slight and are shrinking across generations, according to a meta-analysis of the research on the subject (Friedman, 1989). The fact remains that girls less often pursue mathematics and are a small minority among mathematicians: 21% in France, according to the survey we carried out in 2002-2003.

Among the scientific professions, that of mathematician opened to women later than the others. The mathematician Sofia Kovalevskaya, writing to a friend in 1889, a year after receiving the Prix Bordin from the Académie des sciences in Paris, remarked that there was no point in her hoping for a post in France: “The French will not accept a woman as a professor any time soon, even though nowhere but in France have I received so many compliments.” (Detraz, 1989, p. 22). Not until the end of the 1930s did a woman hold a post as professor of mathematics in a French university. Women's entry into the profession has been slow, as the variation of their proportion by age attests [5]. The situation in the related sciences appears hardly more favourable: women make up 22% of them, with no clear trend emerging (Table 3).

TABLE 3 – PERCENTAGE OF WOMEN BY AGE AMONG MATHEMATICIANS AND AMONG RELATED SCIENTISTS

Several social factors contribute to explaining women's lower representation at the highest levels of mathematical performance, so that one may doubt that the phenomenon is mainly biological in origin. Moreover, even if it were, whether mainly or secondarily, we could not today say precisely in what way, nor rule out that socialisation factors produce a feedback that strongly amplifies, at the neurological level of brain functioning, an initially small biological difference: a small difference between the male and female means of a statistical distribution of the values of some unidentified variable X, continuous rather than dichotomous and with high variance, measuring the biological phenomenon correlated with the degree of performance in a given mathematical domain. As the neurobiologist Catherine Vidal points out (Vidal and Benoit-Browaeys, 2005), no significant difference between the sexes emerges from the great majority of studies of brain activity in the higher cognitive functions. Indeed, reading her analyses of several of these studies, the interpretations drawn from often insufficient experimental data appear surprisingly uncritical, as if the researchers wanted to reassure themselves by finding a biological, and therefore in their eyes timeless, foundation for differences between the sexes which anthropology, sociology and history show to have social and cultural determinants, even if these are not necessarily the only ones. Not only does the differential socialisation of the sexes fail to predispose girls as much as boys towards the mathematical disciplines, but the contrasting images of the two sexes that prevail in the social world contribute, through the attitudes and expectations they induce, to widening the gap in mathematical performance between them.
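
As a minimal numerical illustration (mine, not the author's) of what a small difference between group means of a high-variance distribution amounts to, the sketch below compares two normal distributions whose means differ by only a tenth of a standard deviation:

```python
# Illustration only: two normal distributions whose means differ by 0.1 SD
# overlap almost completely. "gap_in_sd" is an assumed illustrative value.
import math

def normal_cdf(x: float, mean: float = 0.0, sd: float = 1.0) -> float:
    """Cumulative distribution function of a normal distribution."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

gap_in_sd = 0.1  # assumed gap between the two group means, in SD units

# Probability that a random draw from the higher-mean group exceeds a random
# draw from the lower-mean group (the difference of two unit normals has
# standard deviation sqrt(2)).
p_superiority = 1.0 - normal_cdf(0.0, mean=gap_in_sd, sd=math.sqrt(2.0))
print(f"P(higher-mean draw > lower-mean draw): {p_superiority:.2f}")  # ~0.53
```

With so small a gap, the chance that an individual from the higher-mean group outscores one from the lower-mean group is barely above the 50 percent expected for identical distributions, which gives a sense of how much additional amplification, social or otherwise, would be needed to produce large disparities.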

A secondary analysis of the 1990 INED-INSEE survey on education allowed us to show that while, overall, girls in lower and upper secondary school rated themselves as less good at maths than boys did, the difference faded among pupils from families in social classes with medium or high cultural capital in which an egalitarian model of upbringing prevailed: families in which the parent interviewed had shown, through his or her choices, that the same qualities were valued and expected of a daughter as of a son, rather than attributing to each sex qualities socially marked as masculine (drive, ambition, etc.) or feminine (charm, availability, etc.) (Zarca, 2000).

It has been observed that mathematics teachers interact more frequently with boys than with girls during their lessons, which cannot fail to reinforce differently the self-confidence needed to succeed in this discipline (Hurtig and Pichevin, 1998; Mosconi, 1994); and that girls did better than boys on cognitive tests presented as drawing tests, while the reverse occurred when the very same tests were presented as geometry tests (Huguet and Régner, 2004)! And so on. Cognitive activities are not mechanical; they engage identity, which is not given but constructed on a social and cultural basis that is in no way immutable in itself, even if some of its dimensions apparently change only very slowly over historical time.

Women who become mathematicians have more social assets than their male colleagues. They come relatively more often from the socio-occupational milieux of universities and research (19%, versus 13%), and their mothers were more often themselves academics or researchers (10%, versus 5%).

As Michèle Ferrand, Françoise Imbert and Catherine Marry have shown for women science graduates of the École normale supérieure (Ferrand, Imbert and Marry, 1999), and Hervé Le Bras for women graduates of the École polytechnique (Le Bras, 1983), women mathematicians have indeed been over-selected: they more often skipped a year in primary school, more often passed the baccalauréat before the age of eighteen, and more often obtained the mention très bien (the highest distinction). This holds for ENS graduates, but also, as far as precocity is concerned, for the other members of the profession (table 2).

Two complementary interpretations of this over-selection of young women are possible. According to the first, they turn towards mathematics only if they have the intellectual resources that allow them to hope to overcome the handicap created by the lack of congruence between the social images of their sex and those of the hardest science, the one most withdrawn from the ordinary social world. They are therefore more demanding of themselves than boys are before entering the obstacle course that leads to a profession in which the prevailing standard of excellence is that of genius.

According to the second, women gifted in the sciences are less constrained in their choices than boys are by the systematic pursuit of social and academic excellence. They generally have other strings to their bow and other motivations, which they take positively into account in turning towards professions less remote from concrete action and interaction than mathematics. It would therefore be a matter not of a lack of intellectual ability, but of a specifically feminine social imaginary.

The family is par excellence the sphere of the gift, while the profession is, by contrast, that of competitive struggle, in some cases mainly for monetary gain or power, in others above all for prestige. Now, given the considerable cumulative development of mathematics and its degree of complexity in abstraction, the profession of mathematician is, among the scientific professions, the field of one of the fiercest struggles for prestigious positions, those which the group grants to some of its members in recognition of their contributions to what it values, and which constitute material and above all symbolic gratifications that enlarge the social person. One will therefore grant that there is no need to posit sexed brains, but rather emotional dynamics which culture has, over the long run, oriented in opposite directions in men and in women, in order to understand, if not explain, why, despite major advances over the past century, women are still not as inclined as men to play at the highest level a cerebral game which gives its players pleasure (a pleasure of the mind whose intensity matches the intellectual tension that precedes it), but at the cost of a specific sublimation of aggressive drives under a regime of civilized peace. This competitive game is transfigured by emulation, that is, by the congruence of the desires of players who spur one another on because they share the same illusio. It demands surpassing oneself and others in the difficult conquest of new knowledge about objects of a particular kind, idealities, through the mastery and invention of powerful tools of the same kind. It is characterized by the rule of proof, which is among the most constraining of rules. It is prolonged in the sharing of conquered knowledge with one's peers, whose rivalry is at that moment suspended, and then in its transmission to the younger generations, all the more gratifying when one can recognize oneself in them. But one has to be a woman to write, as the mathematician Françoise Roy does: "I love mathematics and I want to share that love. I dream of a fluid mathematical writing in which the delight felt in the flash of understanding and discovery would not be entirely lost." (Roy, 1992, page 104). This dream of the pure sharing of a prolonged pleasure, without jousting and without rivalry, is typically feminine. It reveals a disposition ill suited to mobilizing the aggressive energy needed to conquer the summits of a science that claims to be the highest.

One must be able to project oneself into a professional activity. Yet female role models are too few in mathematics, and there is no solid tradition allowing strong identifications, whereas possibilities more in keeping with the socialization of the feminine imaginary exist in other fields. Moreover, the profession is so driven by the struggle for recognition among male peers that, as Catherine Goldstein, who knows it from the inside, writes: "Recognition of competence brings about a change of sex for the woman concerned, and (…), at the same time, sex in itself grants privileged access to the heritage of the best." (Goldstein, 1992, page 152). If, in the imaginary of a male mathematician, greatness is associated with a masculine image, and if the struggle to attain it, and thereby touch the divine, is open among men, the appearance of a woman in that struggle disturbs its identity stakes. Turning her into a man is reassuring for masculine identity. The comparison with the political field, where women are also a minority but where they are beginning to compete more seriously with men for access to positions of power, is illuminating: the image of a woman in power can be that of a mother, devoted to the public good, protective. There is no mythical reference for the image of a great woman mathematician; there are hardly any goddesses in the Olympus of mathematics.

Technology With A Profound Purpose: Access For All

Traditionally focused on providing equal access to people with disabilities, accessibility has become a mainstream requirement that reduces technology barriers to the information everyone needs for school, work, and life. Today, accessible technology allows humans and machines to interact effectively and intuitively. And it helps organizations create a better user experience on any device by differentiating offerings and optimizing communications for employees, students, customers and constituents.
It’s really a technology with a profound purpose.

This is why it has become a critical focus for organizations around the world. Accessibility initiatives are being driven by the more than 1 billion people with disabilities (including the growing aging population), the proliferation of mobile devices, and new industry standards and evolving government regulations.

As we celebrate the 25th anniversary of the Americans with Disabilities Act (ADA) this year, it's important to reflect on how far we've come in providing equal access to the information that makes our daily routines more manageable.

While we still have work to do in addressing accessibility requirements for web and mobile applications from a compliance perspective, accessibility also presents us with a tremendous opportunity.

It is redefining the relationship among humans, technology and the environment around us. Combined with analytics, cloud, security and the Internet of Things, accessibility is helping create context-driven systems that help organizations understand everyone’s information consumption patterns in order to deliver a secure and personalized user experience on any device.

Consider your grandmother navigating a new city. Her mobile device and applications can now be tailored to her specific needs and physical abilities to help her better understand the surroundings and provide appropriate routes and points of interest. By using text-to-speech, voice recognition and location-based services, information and insights can be delivered in the most consumable way possible.

Or, imagine your friend who is vision-impaired making an online purchase. By incorporating accessible technology, such as ensuring screen readers can easily navigate a website or properly adjusting color contrast, retailers can eliminate usability issues and make it easier for your friend to learn about new offers or services specific to their needs.

Allowing more clients and employees to interact with applications whenever they want, wherever they are, and regardless of their age or physical ability, is forcing organizations to create a holistic strategy for embedding accessible solutions across the enterprise.

There are five areas organizations should focus on to be fully engaged on accessibility to help reinvigorate sales channels, increase workplace productivity and improve risk management:

1. Ensure that any accessibility initiative is genuine, supported from the C-suite, and includes every part of the organization.

2. Develop empathy and a true understanding of all users, including how physical, cognitive and situational disabilities affect the use of an application.

3. Place accessibility at the forefront of the design and development process to accelerate deployment and reduce expenses.

4. Perform rapid iterative testing of web and mobile applications and content to ensure accessibility conformance.

5. Lead by making accessibility, diversity and inclusion part of your organization's culture.

Accessibility that is grounded in an organization’s values will bridge individual differences, better connect with customers, enable a diverse pool of talent in the workplace, and improve the standard of living for all members of society.

Accessibility is no longer about a niche audience. It's about helping all of us become more independent and productive, and improving our quality of life.

While we have witnessed many milestones during the first 25 years of the ADA, let’s together make the next 25 years have even more meaning and impact … for everyone.

IBM has been committed to equal opportunity, workforce diversity, and technology innovation for people with disabilities for more than 100 years to create a more inclusive world where people of all ages and abilities can achieve their full potential. For more information, visit http://www.ibm.com/able

Frances West is Chief Accessibility Officer for IBM.

The Brain and Sleep

When we sink into sleep, we are no longer conscious. Yet our brain keeps working.

What is going on?

A team at the University of Wisconsin applied the same harmless magnetic stimulation to a small area of the cerebral cortex of six subjects during both phases, waking and sleep.

The result: when the subject is awake, a wave of intense brain activity spreads through many regions of the cortex; by contrast, when the subject has just fallen asleep and is in slow-wave sleep, the response remains confined to the stimulated area. Brain activity during wakefulness would thus allow the brain to integrate many pieces of information simultaneously, which would generate the conscious state. It remains to be confirmed whether this difference in connectivity also exists during REM sleep.