Even as carriers around the world race to build 5G networks, some government officials are reaching for the throttle, citing fears that the new generation of wireless technology could pose health risks.
Earlier this year the Portland, Oregon, city council passed a resolution asking the Federal Communications Commission to update its research into potential health risks of 5G. (In 2013, the American Academy of Pediatrics made a similar request to the FCC about its research on cell phone use more generally.) In May, Louisiana’s House of Representatives passed a resolution calling for the state Department of Environmental Quality and Department of Health to study the environmental and health effects of 5G. Meanwhile, a few Bay Area towns, including Mill Valley and Sebastopol, want to block carriers from building 5G infrastructure.
“The impending rollout of 5G technology will require the installation of hundreds of thousands of ‘small cell’ sites in neighborhoods and communities throughout the country, and these installations will emit higher-frequency radio waves than previous generations of cellular technology,” US representative Peter DeFazio (D-Oregon) wrote in a letter to the FCC echoing concerns about the new technologies involved with 5G.
There are real concerns about the way 5G is being deployed in the US, including security issues, the potential to interfere with weather forecasting systems, and the FCC steamrolling local regulators in the name of accelerating the 5G rollout. But concerns over the potential health impacts of 5G are overblown. If you weren’t worried about prior generations of cellular service causing cancer, 5G doesn’t produce much new to worry about. And you probably didn’t need to be worried before.
Few 5G services will use higher frequencies in the near term, and there’s little reason to think these frequencies are any more harmful than other types of electromagnetic radiation such as visible light.
Most concerns about health impacts from 5G stem from millimeter-wave technology, high-frequency radio waves that are supposed to deliver much faster speeds. The catch is that millimeter-wave transmissions are far less reliable at long distances than transmissions using the lower frequencies that mobile carriers have traditionally used. To provide reliable, ubiquitous 5G service over millimeter-wave frequencies, carriers will need a larger number of smaller access points.
That’s led to two fears: That the effects of millimeter-wave signals might be more dangerous than traditional frequencies; and that the larger number of access points, some potentially much closer to people’s homes, might expose people to more radiation than 4G services.
But millimeter waves aren’t the only, or even the main, way that carriers will deliver 5G service. T-Mobile offers the most widespread 5G service available today. But it uses a band of low frequencies originally used for broadcast television. Sprint, meanwhile, repurposed some of the “mid-band” spectrum it uses for 4G to provide 5G. Verizon and AT&T both offer millimeter-wave-based services, but they’re only available in a handful of locations. The wireless industry is focused more on using mid- and low-band frequencies for 5G, because deploying a massive number of millimeter-wave access points will be time-consuming and expensive. In other words, 5G will continue using the same radio frequencies that have been used for decades for broadcast radio and television, satellite communications, mobile services, Wi-Fi, and Bluetooth.
Even when carriers roll out more millimeter-wave coverage, you still won’t need to worry much. Radio waves, visible light, and ultraviolet light are all part of the electromagnetic spectrum. The higher-frequency parts of the spectrum, including x-rays and gamma rays, are what’s known as “ionizing radiation.” This is the scary kind of radiation. It can break molecular bonds and cause cancer. Millimeter waves and other radio waves, along with visible light, are considered non-ionizing, meaning they don’t break molecular bonds. They are higher frequency than traditional broadcast frequencies, but they’re still below the frequency of visible light and far below ionizing radiation such as shortwave ultraviolet light, x-rays, and gamma rays.
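The distinction comes down to photon energy, which scales linearly with frequency (E = hf). As a rough, back-of-the-envelope illustration (the specific frequencies below are representative examples, not tied to any particular carrier's deployment), a millimeter-wave photon carries orders of magnitude less energy than visible light, and far less than the roughly 10 electronvolts generally needed to ionize molecules:

```python
# Photon energy E = h * f, expressed in electronvolts (eV).
# Frequencies are illustrative examples, not any specific 5G deployment.
PLANCK_H = 6.626e-34   # Planck's constant, in joule-seconds
JOULES_PER_EV = 1.602e-19

def photon_energy_ev(freq_hz):
    """Energy of a single photon at the given frequency, in eV."""
    return PLANCK_H * freq_hz / JOULES_PER_EV

bands = {
    "4G mid-band (2.5 GHz)": 2.5e9,
    "5G millimeter wave (39 GHz)": 39e9,
    "green visible light (545 THz)": 545e12,
    "UV-C (1.18 PHz)": 1.18e15,
}

for name, freq in bands.items():
    print(f"{name}: {photon_energy_ev(freq):.2e} eV")

# Ionization requires roughly 10 eV per photon. Even UV-C falls short
# of that here; millimeter-wave photons carry about a ten-thousandth
# of the energy of a visible-light photon.
```

No amount of intensity changes this ordering: a brighter radio source delivers more photons, not more energetic ones, which is why the only established effect is heating.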
“Calling it 5G and changing the frequency does not change the relevant biological health factor, which is energy,” says Robert DeMott, a toxicologist specializing in risk assessment at the consulting firm Ramboll.
Visible light is a common source of higher-frequency, higher-energy electromagnetic energy than millimeter waves or other mobile phone frequencies, says Eric S. Swanson, professor of nuclear physics at the University of Pittsburgh.
That’s not to say that overexposure to non-ionizing radiation can’t have negative side effects. Electromagnetic energy produces heat, which is the “one and only” health concern posed by radio waves, says DeMott. That position is backed up by decades of research on the biological effects of non-ionizing radiation, including millimeter waves. A paper published in 2005 by the engineering professional organization IEEE’s International Committee on Electromagnetic Safety reviewing more than 1,300 peer-reviewed studies on the biological effects of radio frequencies found “no adverse health effects that were not thermally related.”
To protect against heat-related effects, the FCC and other regulators set limits on how much energy wireless devices can emit. “The normal consensus is that you don’t need to worry about a temperature increase of less than one degree Celsius because our bodies change by one degree Celsius in and of their own activities all the time, even at a cellular level,” DeMott says.
Researchers have yet to find conclusive evidence linking mobile phone use to cancer or other health problems. Still, fears persist, in part because of inconclusive studies. Many critics of 5G and other wireless technologies point to the fact that the World Health Organization’s International Agency for Research on Cancer classified mobile phones as “possibly carcinogenic” in 2011. What they don’t usually mention is that the organization selected that designation, which also applies to coffee and pickled vegetables, after a 2010 study failed to determine whether cell phones posed a cancer risk. A fact sheet on the WHO website dating back to 2002 is more sanguine. “In the area of biological effects and medical applications of non-ionizing radiation approximately 25,000 articles have been published over the past 30 years,” the fact sheet says. “Based on a recent in-depth review of the scientific literature, the WHO concluded that current evidence does not confirm the existence of any health consequences from exposure to low level electromagnetic fields. However, some gaps in knowledge about biological effects exist and need further research.”
There are, of course, individual studies that conflict with the scientific consensus that non-ionizing radiation poses no health risks beyond heat. A study published last year by the National Toxicology Program noted an increased risk of cancer among male rats exposed to low-frequency radio waves. But the report didn’t find a similar risk for female rats, nor for male or female mice. The researchers said the tumors found in male rats were similar to those seen in previous research of heavy cell phone users, but specified that the results shouldn’t be extrapolated to humans.
These sorts of atypical results are to be expected, says Swanson. If you conduct tens of thousands of studies, he explains, you can expect that hundreds will show an increase in cancer, or some other health concern, by pure chance. That, along with a number of badly designed studies, provides fodder for critics.
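Swanson's point is a standard multiple-comparisons argument: when no real effect exists, p-values are uniformly distributed, so a conventional 0.05 significance threshold will flag roughly 5 percent of studies as "significant" by chance alone. This toy simulation (the study count is a hypothetical round number, not a tally of actual research) shows the scale:

```python
import random

random.seed(42)

# Toy model: each "study" tests an effect that does not exist, so its
# p-value is uniformly distributed on [0, 1]. At alpha = 0.05, about
# 5% of null studies will report a "significant" result by chance.
NUM_STUDIES = 50_000   # hypothetical number of studies
ALPHA = 0.05

false_positives = sum(
    1 for _ in range(NUM_STUDIES) if random.random() < ALPHA
)

print(f"Chance 'findings' out of {NUM_STUDIES:,} null studies: {false_positives}")
# Expected value: NUM_STUDIES * ALPHA = 2,500 spurious positives.
```

With 50,000 hypothetical null studies, around 2,500 will appear to find a health effect even though none exists, which is why isolated positive results carry little weight against the broader literature.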
But if you want a little more assurance that your phone probably isn’t giving you a tumor, you can take comfort in knowing that, according to statistics published by the National Cancer Institute, the rate of brain cancer in the US actually went down between 1992 and 2016 even as mobile phone use skyrocketed.
Donald Trump is on the precipice of becoming the third president in US history to be impeached. It’s an exclusive club that no one wants to join – but who else is in it, and why?
Here’s a look back at the two prior impeachments and a third near-miss case.
Impeachment #1: Andrew Johnson (1868)
The assassination of Abraham Lincoln in April 1865 unexpectedly elevated his vice-president, Johnson, an outspoken white supremacist but strong anti-secessionist, to the White House. With the aftershocks of the civil war manifesting in bloody voter suppression and racially motivated terrorism across the South, Johnson’s presidency was immediately thrown into tumult by demands that the new president take steps to cement the war’s promise of racial equality. But Johnson vetoed civil rights legislation, unilaterally pardoned hundreds of former Confederate leaders and called for the murder of his political enemies.
Johnson was in essence impeached for undermining the cause of racial equality, the historian Brenda Wineapple wrote in her book The Impeachers.
But the bulk of the impeachment clauses against him were predicated on a relatively narrow charge of violating a contemporary “tenure of office” law (repealed soon thereafter) by removing his secretary of war, Edwin Stanton, who was instrumental in opposing racist attacks on suffrage for former slaves.
Johnson remained in office after being acquitted in the Senate by one vote – a bribed victory, historians have speculated.
Impeachment #2: Bill Clinton (1998)
While the Clinton impeachment is linked in popular memory to his relationship with the White House intern Monica Lewinsky, he was impeached for lying to a grand jury in a separate case, brought by a former Arkansas state employee, Paula Jones.
In response to a sexual harassment lawsuit filed by Jones, Clinton denied in a sworn deposition and a later video interview that he had a sexual relationship with Lewinsky. That assertion was contradicted by a report submitted to Congress by independent counsel Kenneth Starr, who documented Clinton’s relationship with Lewinsky in lurid detail.
Impeachment proceedings against Clinton were opened in October 1998, and the House of Representatives approved two articles of impeachment against him, for perjury and obstruction of justice, in December. Two other proposed articles – for abuse of power and perjury a second time – were voted down.
The Republican-led Senate – stronger than today’s, with a 55-seat majority at the time – acquitted Clinton easily on both counts, with the closer case drawing only 50 votes out of 67 needed.
Near-miss: Richard Nixon (1974)
In November 1972, Nixon won re-election by what was then the largest margin of victory in the history of US presidential elections. But five months earlier, a burglary at Democratic offices in the Watergate hotel complex had set in motion a chain of events that would end his presidency.
In his investigation of the burglaries, special prosecutor Archibald Cox uncovered a dirty campaign to attack Nixon’s political opponents, financed by a secret slush fund and directed by Nixon himself. For months, Nixon publicly denied all involvement.
But an impeachment inquiry was opened in October 1973, after Nixon fired the top two officials in the justice department for their refusal to fire Cox. A fight over evidence ensued, including tapes of Nixon’s Oval Office conversations.
In late July 1974, a third of elected Republicans on the House judiciary committee joined Democrats to approve three articles of impeachment, for obstruction of justice, abuse of power and contempt of Congress. The release of a “smoking gun” tape a week later, fixing Nixon at the center of the conspiracy, sealed the president’s fate.
Under pressure from fellow Republicans, Nixon resigned on 9 August 1974, before the full House could vote on impeachment.
The cultural practices and locales that define the hundreds of Native communities dotting the North American landscape are grounded in languages. Each is unique, with distinct dialects, accents, and slang. There are words, phrases, and concepts that do not exist in the American English lexicon, that confounding colonizer speech that Native Americans were forced to adopt and master. And nearly all of them are in danger of going extinct. In 1998, there were 175 Indigenous languages still in use within the United States. Today, there are 115. With each passing year, as elders are laid to rest and new babies are born, Native people lose their tongue.
Even though the English language was violently imposed, Native people have used it as a tool of struggle and beauty—as poet Tommy Pico said at a speaking engagement last month: “We didn’t ask for English, but it’s ours now, and look what we’re doing with it. You’re welcome.” While true, it still does not replace what is swiftly evaporating. As a Native person whose language was decimated and is only recently beginning to be stitched back together, I know the intangible feeling of hearing my own language through an elder’s voice on the phone or a cousin’s patient assistance in navigating a difficult pronunciation. It’s an experience of kinship that cannot fully be replicated in this second tongue. Learning a Native language is not only about knowledge or authenticity; it extends a symbol of a thriving and unique culture to the rising generation. It’s the cadence of survival. And if it goes silent, a great tradition is broken.
On Monday, in a small step to preserve this tradition, the House passed the Esther Martinez Native American Languages Programs Reauthorization Act, named after the legendary Tewa linguist. With the Senate vote already in the bank, the measure is headed to President Trump’s desk. Like a variety of other set-term appropriation bills, the legislation, which was first passed under George W. Bush in 2006, has to be renewed by Congress every five years to maintain the funding. And like so many other necessary pieces of legislation, it is still deficient.
The latest version of the bill, coming at the tail end of what the United Nations has dubbed the Year of Indigenous Language, will seek to lower the bill’s previous class-size restrictions, which were preventing tribes from obtaining federal grants to establish their own language programs because many smaller tribes had lower enrollment numbers than what the grant applications required. The need to lower that threshold speaks to the dire state of Indigenous languages in America.
The Department of the Interior approves applications for federal recognition based in part on whether a tribe has a distinct political system, land claim, and shared set of cultural practices, among other signifiers. That is to say, the federal government—the same body that sought to raze our speech, snuff out our religions, steal our land, and effectively end our various ways of life—is now in charge of determining who is Native enough to be considered a sovereign nation.
Writing for High Country News in November, Cherokee Nation podcast host and writer Rebecca Nagle, who also works as an apprentice in the Nation’s Cherokee Language Master Apprentice Program, laid bare the historical roots and modern reality of endangered Native languages. The American government, for part of the nineteenth and twentieth centuries, attempted to eliminate any and all Native languages through federally funded boarding schools, where Native children were compelled to act as American citizens and nothing else. This included punishing students who dared speak their Native language, with some reports detailing kids having their mouths washed out with soap any time they uttered a word in the language they grew up hearing in their home.
The boarding school era and its erasure of language is a blot on the nation’s record, and one that too few non-Natives have been forced to reckon with. But this cultural genocide did not begin with the Carlisle Indian School in 1879: Carlisle and the copycats it spawned were just the mass institutionalization of a practice that had been underway for centuries.
As the colonizers first washed over the continent and its people, the various European governments and the churches they brought with them understood what Captain Richard Pratt, the U.S. military leader and Carlisle founder, meant when he said that he sought to “kill the Indian … and save the man.” As long as Natives could communicate in a tongue that colonizers could not penetrate, their cultural and spiritual practices would continue, and as a result, so too would their claim to independent nationhood and the land they’d stewarded for centuries. So, in the seventeenth and eighteenth centuries, the early peddlers of organized religion, most of them Christian, set up shop on Native land along the East Coast and worked their way west.
From the early forts to Carlisle to the Termination Era, assimilation was all a means to an end—namely land and capital—and language was always among the first things the colonizer sought to rip out. It stood as the most important barrier between the Indigenous nations and the Europeans (and eventually the Americans), so they were determined to demolish it.
The general experience of losing one’s language to American preference is not unique to being Indigenous. It’s an American philosophy, one that is echoed in the experience of the children of immigrants whose parents do not teach them their language, in an attempt to shield them from racism. The president enforces a regime of assimilation when he declares, “This is a country where we speak English. It’s English. You have to speak English!” Racist law enforcement does the same when officers treat the sound of another language as pretext for a stop or search. It is present in mundane interactions in which one’s own language is treated by others as a signal of danger.
Against this climate of hostility, learning one’s Indigenous language serves a purpose that is bigger than what is transactional or academic. While it is dangerous to ascribe broadly painted features to the hundreds of tribes in America, reciprocity constitutes a great deal of the Native experience from nation to nation, and this extends to language. It is gifted to us with the hope and expectation that it will be passed down to the next generation, so that they too may withstand the tricks and brute force of colonization. The work that the Esther Martinez Act will accomplish is obviously crucial, but it is difficult to look beyond the transactional nature of the bill and the programs that it establishes.
The rescue of Indigenous language by way of the federal government may be too little, too late. I’m saying this not out of defeatism, but because it will in all likelihood be the efforts of tribal nations, and not the U.S., that save Native languages. As Nagle pointed out, the Department of Health and Human Services, in addition to a handful of other federal agencies, approved just 29 percent of tribal applications in 2018, and its funding is minuscule in comparison to the nation’s previous erasure campaign. “For every dollar the U.S. government spent on eradicating Native languages in previous centuries, it spent less than 7 cents on revitalizing them in this one,” Nagle wrote.
Many of these languages are not even a full lifetime away from disappearing. They exist for as long as the heart of the elder who carries the words continues to beat. One day, that heart will stop, and so too will the language. Without the immediate funding of these programs, the expeditious approval by Trump and then by HHS and other agencies, and the active increase in participation by Native youth, it stands to reason that the number of surviving languages will drop further by 2050. Seminole Nation citizen and Creek language teacher Jade Osceola best articulated the stakes when speaking about the eighth-grade language program she’s helped keep alive:
Language is what makes you different from all other Native Americans across the country. It’s not your food. It’s not your clothing. It’s not any of that, and you can’t do your ceremonies without language. That’s what makes us different. That’s what puts us on the map.
There is a narrow window in which to correct course, one that tracks similarly existential struggles playing out in parallel. To prevent the worst possible outcome, the extinction of Native languages, the state’s response of merely acknowledging its role in causing the problem and making funds more easily available is useful but inadequate on its own. That’s why it’s important to remember that the push to rescue these languages did not come solely from the American government; it is happening because nations and elders and youth are rising up and resisting the slow but steady turn toward assimilation. The process of learning a language can be arduous, but in crafting a way forward, Native people have always managed to make the process more often joyful and engaging—such as the efforts of Constance Owl, an Eastern Band of Cherokee Nation citizen, to translate the Cherokee Phoenix archives, or the Navajo Nation’s recent youth-aimed recording of “Baby Shark” in Diné.
Even on the best of days, fostering these Native languages will still be costly and require persistence. But it’s worth it, to reclaim and pass down one’s Native language. When your time comes, what will be the last words you utter? More to the point, how will you say them?