On the Keystone XL Pipeline – Final Draft

“To the Senate of the United States: I am returning herewith without my approval S. 1, the “Keystone XL Pipeline Approval Act.”  Through this bill, the United States Congress attempts to circumvent longstanding and proven processes for determining whether or not building and operating a cross-border pipeline serves the national interest.

The Presidential power to veto legislation is one I take seriously.  But I also take seriously my responsibility to the American people.  And because this act of Congress conflicts with established executive branch procedures and cuts short thorough consideration of issues that could bear on our national interest — including our security, safety, and environment — it has earned my veto.” – Barack Obama, February 24th, 2015

So ended, for the time being, TransCanada's struggle for the presidential permit that would have allowed it to build the Keystone XL pipeline on American soil. Its application, a modest affair by most standards, eventually grew to encompass much larger issues and concerns: the relationship between the land, native peoples, and the government; the economic benefits promised to the United States; the polarized political atmosphere of the American government; and, perhaps most importantly, the environmental havoc the project would have set loose.

To start, however, we should begin at the beginning rather than the end. The Keystone XL pipeline controversy opens with the oil sands of Canada. According to the Center for Climate and Energy Solutions, the oil sands in Alberta make up approximately 97% of Canada's oil reserves. These sands provide an oil source, yes, and in fact one of the largest reserves of this type in the world, but they do not function in the same manner traditional oil resources do. According to the Washington Post and AJ+, these oil sands consist of clay, sand, and a thick, dense oil called bitumen, about the same consistency as molasses. To extract this unconventional oil, energy companies employ one of two methods, surface mining or drilling, in roughly equal proportions. In surface mining, large trucks transport 400-ton loads of sands to refineries, where hot water separates the bitumen from the sand and clay. When the oil sands lie deeper than surface mining can reach, drilling methods are employed. These entail drilling "wells" into the earth and filling them with steam, effectively melting the bitumen and allowing it to be removed from the sand directly. Below is a map of where exactly the oil sands fall in Canada in relation to the United States:

Image credit: http://www.ags.gov.ab.ca/energy/oilsands/

So Canada has some oil. How does that relate to any kind of pipeline? Funny you should ask! The accepted reason TransCanada wishes to build a pipeline of this sort in the United States is to connect its oil more fully to the global oil market, according once again to the Center for Climate and Energy Solutions. And how does the company wish to proceed? This handy little video (unfortunately made just before Obama's veto, and therefore a bit outdated) will cover the very basics of the oil sands and an overview of the Keystone XL pipeline itself.

For a better look at where exactly TransCanada wished to build, the below maps outline existing pipelines and how the Keystone pipeline would have fit into that network.

Image credit: http://insideclimatenews.org/news/20120430/exclusive-map-tar-sands-pipeline-boom
Image credit: http://insideclimatenews.org/news/20120430/exclusive-map-tar-sands-pipeline-boom
Image credit: http://www.nrdc.org/energy/kxlsecurity.asp

According to CNN, this extension would have allowed TransCanada to pump 830,000 barrels (roughly 132 million liters) of crude oil into the United States every day through the Keystone pipeline, to be transported to refineries on the Gulf Coast. Here, supporters of the venture make the point that this oil will be mined regardless of the American government's decision, and that one way or another this crude oil will find its way to Gulf refineries, especially by rail, which is in fact more polluting than the pipeline would have been.
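Capacity figures for Keystone XL are usually quoted in barrels per day (830,000 bbl/d, the same number the second write-up below uses). A quick sketch of the unit conversions, using the standard 42-gallon U.S. oil barrel; the conversion constants are standard values, not figures from the original post:

```python
# Convert Keystone XL's planned capacity from barrels to gallons and liters.
# Conversion factors are standard values, not taken from the original post.
US_GALLONS_PER_BARREL = 42
LITERS_PER_US_GALLON = 3.78541

barrels_per_day = 830_000
gallons_per_day = barrels_per_day * US_GALLONS_PER_BARREL
liters_per_day = gallons_per_day * LITERS_PER_US_GALLON

print(f"{gallons_per_day:,} gallons/day (~{liters_per_day / 1e6:.0f} million liters/day)")
```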

However, these supporters miss what some may call "the big picture." In December 2009, Canada signed an international agreement called the Copenhagen Accord, in which it agreed, along with several other large economies and major carbon emitters, to lower its CO2 emissions. As of May 20th of this year, Canada's progress on this front has been labelled "inadequate" by the Climate Action Tracker. This nifty website shows that Canada's activities, if left unchecked in their current state, will actually increase the country's CO2 emissions in coming years. With the expanded oil sand exports the Keystone pipeline would have facilitated, Canada would have fallen even further behind in its efforts to reduce its contribution to global climate change. To put into perspective how much greater an impact bitumen has on the environment than traditional oil sources, Jennifer Grant, director of the oil sands program at the Pembina Institute, states that from the sand to the gas tank, oil sands produce 23% more emissions than traditional sources. In the graphic below, the effects of oil sand mining on Canada's Copenhagen Accord commitments are even more evident:

Image credit: http://switchboard.nrdc.org/blogs/aswift/canadas_new_energy_strategy_re.html

This isn't the only environmental issue the pipeline projects or the oil sands companies have faced, either. Jennifer Grant explains that the problem of oil sand extraction is much larger than TransCanada would have you believe. The oil sands, she says, are a vast natural resource with inherently deep ties to the ecosystems around them. Referencing the reclamation law enforced in Alberta, Canada, Grant states: "Reclamation has not kept pace with the level of disturbance on the landscape today. We've only seen about one square kilometer of the 700 or so square kilometers that's been disturbed reclaimed and certified by the Alberta government." This startling fact means that roughly 700 square kilometers of mined land remain in their disturbed state, having displaced the thousands of species of plants and wildlife that used to inhabit them. Canada's vast ecosystems and habitats are collapsing at the hands of TransCanada and companies like it, and this practice would only have been expanded, exploited, and extended by the failed Keystone pipeline.

On July 25th, 2010 at 5:58 in the evening, without any notice to the people of Marshall, Michigan, oil began to seep into their water. For more than seventeen hours, the leak from a ruptured Enbridge pipeline spilled into the Kalamazoo River, undisturbed and noxious, before it was discovered on July 26th. Calls came into emergency facilities regarding a strange odor hanging in the air, but the public remained unaware of the threat that had just entered their river and ecosystem. Ignorant of the devastation occurring just outside their doorstep, workers at Enbridge misinterpreted the "broken pipe" alarm and continued to send oil through the ruptured pipe until an outside source notified them at 11:00 the following morning. Voluntary evacuations followed, with three hundred people in the area reporting medical ailments possibly related to the spill. The EPA responded that day by forming an Incident Management Team made up of federal, state, and local resources. On July 28th, that team invoked the Clean Water Act and demanded that Enbridge begin removing oil and find the origin of the spill. (Michigan Radio Newsroom) The company estimated that 843,000 gallons (3,191,000 liters) of crude oil spilled first into Talmadge Creek and then into the Kalamazoo River, a Lake Michigan tributary. The spill was ultimately contained 80 river miles from Lake Michigan, which serves 10 million people lake-wide. (EPA) Five years later, the state and Enbridge both admit that the Kalamazoo River will never return to what it used to be, and that there will never come a day when all of the oil leaked that day is cleaned up. (Michigan Radio Newsroom)

According to the Australian Petroleum Production and Exploration Association (APPEA) via the University of Delaware, there are four main methods of cleaning an oil spill:

  1. Allow the oil to break down naturally in the environment. This method is best employed when there is no risk of pollution to nearby plant and wildlife and works best on light oil, unlike the thick oil of the Keystone XL pipeline.
  2. Contain the spills with buoyant structures called “booms” and “skim” the oil from the surface. This method is of little use for groundwater, however, which is the main area of concern with the Keystone XL pipeline.
  3. Add dispersants to the affected water. These decrease the surface tension and force the oil into small droplets, more easily diluted by the movement of the water and more open to bacteria and evaporation. This method of cleanup is only useful, however, if applied within two hours of the spill occurring.
  4. Use biological agents to speed the oil’s natural deterioration. This method, called biodegradation, employs fertilizer introduced into the spill in hopes of fostering bacteria to break down the oil faster. However, many factors play into whether this method is effective, including whether the soil is sandy or rocky.

And while research on offshore oil spill cleanup is fairly extensive, its onshore counterpart seems to lack the same sort of inspection. However, Gerald Graham, president of marine oil spill prevention and planning company Worldocean Consulting, estimates that only ten to fifteen percent of an offshore oil spill is cleaned out of the ecosystem. This certainly does not bode well for onshore oil cleanup efforts, should they be needed along the Keystone XL Pipeline.

Faced with this truth, opponents' concerns over the Keystone pipeline fall into a much more reasonable light. The proposed pipeline would have stretched across the breadth of the Ogallala Aquifer, one of the largest fresh water sources in the world. According to Dr. J. A. Schneider of SUNY Oswego, this aquifer holds enough water to submerge the entire continental United States under two feet of water. Indeed, a 2008 U.S. Geological Survey study found the aquifer covers 450,000 square kilometers, from Colorado to Kansas and from Texas to South Dakota. (MIT Mission 2012: Clean Water) This water source irrigates farms in eight states that together account for one quarter of the United States' total agricultural output, and the Keystone XL pipeline was set to mow right through it. (Washington Post) Below is a map of the exact route the project would have taken through the Ogallala:
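As a rough sanity check of the "two feet of water" claim, we can compute the volume it implies. The land area of the contiguous United States (~8.08 million square kilometers) is an assumption brought in here for illustration, not a figure from the post:

```python
# Back-of-the-envelope: what volume of water would cover the contiguous
# U.S. two feet deep? The ~8.08 million km^2 land area is an assumed
# approximation, not a figure from the original post.
FT_TO_KM = 0.3048 / 1000

contiguous_us_area_km2 = 8.08e6
depth_km = 2 * FT_TO_KM

implied_volume_km3 = contiguous_us_area_km2 * depth_km
print(f"Implied volume: {implied_volume_km3:,.0f} cubic kilometers")
```

That works out to a few thousand cubic kilometers, the same order of magnitude as commonly cited estimates of the aquifer's volume, so the comparison is at least plausible.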

Image credit: http://rainydaythought.blogspot.com/2015/02/the-real-world-or-distraction.html

With the Kalamazoo River disaster in mind, and the consensus that its cause was Enbridge's neglect of the pipe, one can sympathize with those worried about the pipeline's path through the Ogallala. A spill into this aquifer could contaminate thousands of acres of farmland and send the agriculture industry straight into the red. Professor John Stansbury of the University of Nebraska predicted that a spill could result in 6.5 million gallons (24,610,000 liters) of crude oil entering the aquifer and 4.9 billion gallons (18,550,000,000 liters) of groundwater becoming contaminated. The project, should it fail as pipelines are wont to do (see: the Kalamazoo River spill, among others), could spell disaster for communities throughout the Midwest.

With all this in mind (the promise Canada made to the environment and the international community in the Copenhagen Accord, the pollution that would spew from the sands, the poor reclamation efforts, the threat of a spill unlike any seen before), the proponents of the Keystone XL pipeline neglect one very major point as their time in the spotlight enters its twilight hours: their oil sands can very well stay buried in the earth for centuries to come, should the human race experience a genuine change of heart.

Sources and Alternative Reading:

The Keystone XL Pipeline by Eric

“Isn’t the Keystone XL a huge pipeline they’re building through America?!”

The Keystone XL pipeline is not a brand new, ginormous pipeline being built from Canada to Texas.  In fact, the Keystone Pipeline already exists.  Commissioned in 2010 and built by the company TransCanada, the pipeline has been transporting oil from Canada to the U.S. for several years now.  Originally, the Keystone stretched from Alberta, Canada to Steele City, Kansas, and then to a refinery in Wood River, Illinois, and an oil tank farm in Patoka, Illinois.  In 2011, an extension was built from Steele City to Cushing, Oklahoma, the site of a large oil tank farm.  For those who are unaware, oil tank farms are storage and distribution facilities for oil.  Cushing, Oklahoma happens to have the largest in the country, holding 12.5% of the country's stockpile (for those who wish to learn more about Cushing's grip on America's oil, check out NewsOk.com).  After the addition of the Steele City to Cushing extension, an additional 485 miles were added in 2014 from Cushing to refineries in Port Arthur, Texas.  Work on a pipeline from Port Arthur to Houston, Texas is set to be completed by 2016.

“So what’s the difference between the Keystone XL and the Keystone?”

The Keystone XL is another pipeline running from Alberta to Steele City, designed to increase transport capacity from 700,000 barrels per day to up to 830,000 barrels per day, a roughly 19% increase.  For a more visual representation of the Keystone and Keystone XL, check out this map from The Washington Post:
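The capacity jump is easy to check with the two figures from the post:

```python
# Percent increase in daily capacity from Keystone (700k bbl/d) to the
# system with Keystone XL added (830k bbl/d); figures from the post.
old_capacity = 700_000
new_capacity = 830_000

pct_increase = (new_capacity - old_capacity) / old_capacity * 100
print(f"{pct_increase:.1f}% increase")  # 18.6% increase
```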

Keystone pipeline map (The Washington Post)

As you can see, the Keystone XL almost looks like a "shortcut" in a huge pipeline.  The only reason it has not been built yet is that it crosses the Canadian/U.S. border, and because this is an international border, it requires a special presidential permit.  Obama has been hesitant to give TransCanada permission to build the Keystone XL for political reasons.  Many Democrats oppose the plan because the left's constituencies tend to be more environmentally focused, so their representatives need to vote against what many see as a detriment to the environment.  There have also been legal battles with landowners, especially in Texas, where eminent domain has been used to obtain access to their land.  One of the most notable cases, a restraining order filed by Texas farmer Julia Trigg Crawford, officially ended in March of 2014 when the Texas Supreme Court declined to hear her case.  Another issue that delayed the president's response came from Nebraska.  Nebraskans wanted to block the Keystone XL's pathway through their state; however, in 2013 Nebraska Governor Dave Heineman approved a different path through the state that ended this dispute.  All in all, most of the opposition comes on the environmental front.

“So how does this affect the environment exactly?”

A significant portion of the opposition to the Keystone XL Pipeline stems from environmental concerns.  Several groups, such as the Natural Resources Defense Council and Friends of the Earth, place most of their worry in the environmental detriments of oil-sands, or tar-sands, the type of oil being taken from Alberta.  Oil-sands are not actually made of tar; they are a mixture of sand, water, and bitumen.  Only recently have they been thought of as usable oil, because oil-sands are an unconventional type of petroleum, meaning they are produced using methods other than the conventional well method.  Here's a quick video from the Canadian Association of Petroleum Producers explaining oil-sands and bitumen:

There are several reasons why oil-sands negatively impact the environment, one being greenhouse gas emissions.  A study from Stanford University found that oil produced from oil-sands is 22% more carbon intensive than conventional U.S. oils on a wells-to-wheels basis.  Wells-to-wheels measures carbon dioxide emissions from the beginning of oil production all the way through combustion in automobiles.  What this basically means is that the fuel you get from oil-sands causes 22% more pollution than conventional oil.  This number is heavily debated, however; the Canadian Association of Petroleum Producers, the source of the previous videos, says that oil-sands are only 9% more intensive than the U.S. average supply. According to activist organization Greenpeace, oil-sands account for 40 million tons of carbon dioxide emissions per year, which makes them the largest contributing factor to emissions growth in Canada.  The majority of these emissions come from extracting the oil from the ground.  There are two main ways to do this: open pit mining and in situ drilling.  Mining recovers the oil-sands that are close to the surface, accounting for about 20% of oil-sand extraction.  The process closely resembles coal-mining operations: chunks of earth containing oil-sands are put onto trucks that take them to crushers to break down the earth, water is used to thin out the thick mixture, and the mixture is then transported to a plant where the bitumen is separated from the other products and turned into usable fuel.  Here is another video explaining the process from the Canadian Association of Petroleum Producers:

In situ drilling gets to oil-sands that are deep under the ground, using a steam technology called steam-assisted gravity drainage.  Steam is pumped underground to liquefy the viscous bitumen, and then pumped back up.  These drilling sites are able to “directional drill”, meaning multiple wells can be created from a single site.  Here’s a quick video, again from the Canadian Association of Petroleum Producers:

Both of these processes emit huge amounts of carbon dioxide, as they require far more energy than conventional oil wells.  Yet there are still more environmental detriments caused by oil-sands.  The process of separating bitumen from unwanted products like clay and sand uses large quantities of water; in fact, three barrels of water are used in extraction for every barrel of oil produced.  As one can imagine, this water becomes extremely polluted.  95% of it, or 2.4 million barrels per day, becomes so polluted that it must be stored in tailings ponds (http://www.foe.org/projects/climate-and-energy/tar-sands/keystone-xl-pipeline).  If 2.4 million barrels doesn't mean much to you, imagine over 100 million gallon jugs of water becoming too polluted to use every day.  These ponds are basically just pools of toxins, and there is the potential for some of these toxins to leak into nearby water supplies.
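The gallon-jug comparison follows directly from the standard 42-gallon oil barrel (a quick sketch; the 2.4-million-barrel figure is from the post, the barrel size is a standard constant):

```python
# How many gallons is 2.4 million barrels of tailings water per day?
US_GALLONS_PER_BARREL = 42  # standard U.S. oil barrel

tailings_barrels_per_day = 2.4e6  # figure from the post
gallons_per_day = tailings_barrels_per_day * US_GALLONS_PER_BARREL

print(f"{gallons_per_day:,.0f} gallons/day")  # 100,800,000 gallons/day
```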

Another, more local concern regards the Alberta boreal forests.  Part of the largest land ecosystem on earth, they are incredibly important to many species.  However, they lie right on top of the oil-sand deposits.  Mining and in situ drilling sites require clearing out trees, and in situ drilling's horizontal wells can run under forests and disturb them greatly.  Some of these environmental effects seem not to rattle people, though.  People have been hearing about greenhouse gases for decades, and toxic water and forests up in Canada don't affect American citizens.  But what does scare most Americans are the spills.

Since it began operating in 2010, the Keystone Pipeline has had 14 spills, and according to the State Department it could spill up to 2.8 million gallons of bitumen in just a 1.7-mile stretch.  That means over 2.5 thousand gallons of extremely toxic sludge pouring into every acre across more than 1,000 acres. Depending on the location of a spill, this could have a major impact on the U.S., as the proposed Keystone XL would cross the Missouri, Yellowstone, and Red Rivers, as well as the Ogallala Aquifer.  For those who are not aware, the Ogallala Aquifer provides water to over one fourth of America's irrigated land and is responsible for two million citizens' drinking water, according to Friends of the Earth.  Here's a quick map of how much area is covered by the Ogallala Aquifer from the Water Encyclopedia; check out their website for more information:

The Ogallala Aquifer (shaded area) is in a state of overdraft owing to the current rate of water use. If withdrawals continue unabated, the aquifer could be depleted in only a few decades.

Clearly the Keystone XL has its issues, ranging from legal battles to sincere environmental concerns.  But with all these negative aspects…

“Will the Keystone XL even be helpful?”

There is the potential for job creation.  According to TransCanada, they see 20,000 jobs being created.  However, Obama released a statement in 2013 saying: “The most realistic estimates are this might create maybe 2,000 jobs during the construction of the pipeline, which might take a year or two, and then after that we’re talking about somewhere between 50 and 100 jobs in an economy of 150 million working people.”

If TransCanada is correct in its assumption that the Keystone XL will provide 20,000 jobs, that would be roughly a 0.013% increase in employment.  So compared to the entire U.S. economy, even using the most optimistic numbers, the jobs gain is not significantly large.  So let's see how much energy the Keystone XL Pipeline would actually provide…
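The employment math, using TransCanada's own 20,000-job figure against the 150 million workers cited in the Obama quote above:

```python
# Even the most optimistic job estimate is tiny relative to the workforce.
optimistic_jobs = 20_000       # TransCanada's estimate, from the post
us_workforce = 150_000_000     # working people, per the Obama quote

pct_of_workforce = optimistic_jobs / us_workforce * 100
print(f"{pct_of_workforce:.4f}% of the U.S. workforce")  # 0.0133% of the U.S. workforce
```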

Well, according to the State Department's report, the Keystone XL won't really affect oil-sand production.  There are alternatives to the Keystone XL Pipeline, such as combinations of tankers, rail lines, and existing pipelines, that could fulfill the same additional oil transport of 130,000 barrels per day.  Additionally, the alternatives are less prone to pipeline spills, so they may actually be better from an environmental standpoint.  In the report, the State Department also concludes that if the pipeline is not approved, an alternative will be used, and there will be no way of stopping it.  So either way, it looks like oil-sands production is going to increase.  What are environmentalists supposed to do with this?  It seems like a lose-lose situation for them.  However, one way they may be able to win the battle is through economic reasoning.  Let's take a look at this graph from Rystad Energy:

First, we'll figure out what all these numbers mean.  On the left (the y-axis), U.S. dollars per barrel are being measured.  A barrel is a barrel of crude oil, which contains 42 gallons.  On the bottom (x-axis), total oil production is measured in millions of barrels of oil equivalent per day.  A barrel of oil equivalent (boe) is just the amount of energy that can come from one barrel of oil, about 1,700 kWh; boe per day (boe/d) is simply the number of barrels' worth of energy produced each day.  Looking at the graph, we can see that oil-sands account for less than 5% of the world's boe/d, almost the lowest amount of any source.  Not only is it a tiny amount, but oil-sands are also the most expensive source of crude oil, coming in at an average of $88 per barrel, the next highest being North American shale (fracking) at $62.  Oil-sands may just not make economic sense, unless a new production technique can be devised to make them cheaper.  Some may argue in favor of oil-sands because we can get them from Canada, a much more stable source than the Middle East, but fracking in the United States accounts for more than double the oil production of oil-sands, and at a roughly 30% cheaper price.
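The cost comparison at the end of that paragraph can be sketched from the graph's numbers ($88/bbl for oil-sands, $62/bbl for shale) and the post's 1,700 kWh per boe:

```python
# Cost comparison using the Rystad Energy figures quoted in the post.
KWH_PER_BOE = 1_700     # energy per barrel of oil equivalent (from the post)

oil_sands_usd_per_bbl = 88
shale_usd_per_bbl = 62

shale_discount_pct = (oil_sands_usd_per_bbl - shale_usd_per_bbl) / oil_sands_usd_per_bbl * 100
sands_usd_per_kwh = oil_sands_usd_per_bbl / KWH_PER_BOE

print(f"Shale is ~{shale_discount_pct:.0f}% cheaper per barrel")       # ~30%
print(f"Oil-sands raw energy cost: ${sands_usd_per_kwh:.3f} per kWh")  # ~$0.052
```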

So overall, the Keystone XL is kind of a moot point.  The oil-sands will be produced either way, and the environmental concerns will still be there.  What America should focus on is moving away from oil-sands completely.  If we shift the argument against the Keystone XL to oil-sands in general, environmentalists may be able to win.  Most people will not be in favor of using the dirtiest oil around once they find out it is the most expensive oil as well.  Conservatives who deny climate change do not care about the environment, so if environmentalists and opponents of oil-sands show them concrete evidence that oil-sands production is a poor economic choice, then we may be able to win in stopping both the Keystone XL and the production of oil-sands altogether.

Nuclear Waste Management

Nuclear Waste Management: Finding Solutions for a Sustainable Future

In order to understand where nuclear waste comes from, one must understand what nuclear energy is.  There are two nuclear processes that can create energy.  The first is fusion, or combining atoms to produce energy and a new, heavier atom.  This process can create huge amounts of energy with fairly low amounts of radioactive byproducts.  However, this type of nuclear reaction is not currently commercially feasible and only occurs naturally in places such as the sun.

The second process of extracting nuclear energy is fission, or the splitting of atoms, namely uranium and plutonium.  Nuclear power plants use the heat released by this reaction and turn that thermal energy into electricity.  Of course, no reaction is one hundred percent efficient, and a byproduct of nuclear reactions is nuclear waste.  Nuclear waste is extremely radioactive and therefore dangerous.  It is so toxic that if a person were to stand near the waste just after it came out of the reactor, even for a few seconds, they would die of acute radiation sickness.

Unfortunately, this fear of nuclear waste has led to a stigma against nuclear energy.  Many people associate nuclear energy with tragedies such as Chernobyl or with weapons of mass destruction.  In reality, nuclear energy is quite the opposite.  According to world-nuclear.org, nuclear waste is "neither particularly hazardous nor hard to manage relative to other toxic industrial wastes."  In other words, there are other common forms of waste, such as those from coal-fired power plants, that are more threatening to the public than nuclear waste.  For instance, one study found that people who lived near coal-powered plants had approximately 18 millirems of radiation in their bones, as opposed to 3-6 millirems for people living near nuclear power plants.  Additionally, radiation doses in food grown near the coal plant were 50-200% higher than in food grown near the nuclear power plant.

Furthermore, unlike other types of waste, the potency of nuclear waste decreases over time.  The particles in radioactive waste all have a half life: the time it takes for half of a radioactive substance's atoms to decay.  For instance, one radioactive substance, Uranium-238, has a half life of approximately 4.5 billion years.  This means that in 4.5 billion years, 50% of Uranium-238's atoms will have decayed to nonradioactive decay products.  Then, after 9 billion years, 75% of the atoms will have decayed, and so on and so forth.  So, eventually, all nuclear waste becomes harmless.  However, as the example of Uranium-238 shows, depending on the type of nuclear waste, this can take a very long time.  Some nuclear wastes with shorter half lives are simply stored until they are less dangerous, or even until they are no longer harmfully radioactive and can be disposed of with regular waste.
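The arithmetic above generalizes neatly: after t years, the fraction of atoms remaining is (1/2)^(t / half-life). A minimal sketch:

```python
def fraction_remaining(elapsed_years: float, half_life_years: float) -> float:
    """Fraction of the original radioactive atoms left after elapsed_years."""
    return 0.5 ** (elapsed_years / half_life_years)

U238_HALF_LIFE_YEARS = 4.5e9  # approximate half life of Uranium-238, from the post

# After one half life, 50% remains; after two, 25% remains.
print(fraction_remaining(4.5e9, U238_HALF_LIFE_YEARS))  # 0.5
print(fraction_remaining(9.0e9, U238_HALF_LIFE_YEARS))  # 0.25
```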

Different countries categorize waste differently.  The United States places waste into three categories based on where the waste comes from: low level, transuranic, and high level waste.  Most other countries simply categorize waste by the potential effect or radioactivity of the substance.  So, most countries categorize by low level, intermediate level, and high level waste.  Since more countries use the second system, I will refer to the levels of waste as such.

Low level waste includes items such as rags, clothing, and filters: anything that may have come into contact with radioactivity during the reaction process and that has a relatively short half life.  These items do not require any shielding before handling or transport, because they are not harmful to humans, and they are often burned or compacted before being disposed of through shallow burial.  In the United States, there are three commercial land disposal facilities for this low level waste; however, each only accepts a certain amount of waste from certain states.  To put this type of waste into perspective, low level waste accounts for approximately 90% of the volume of radioactive waste, but only 1% of the radioactivity.

The next step up is intermediate level waste, which has slightly higher levels of radioactivity than that of low level waste.  This includes materials such as chemical sludge, resins, and contaminated materials from the reactor.  These require some shielding and a little more care for disposal, for they are normally sealed in concrete or bitumen so as to protect humans and the environment from their radiation.

Last, and most troublesome, is high level waste.  This comes from the uranium that has been used in a nuclear reaction.  It is the actual spent fuel, or the waste left over from reprocessing that fuel.  This type of waste is the most radioactive of all the waste products and also the most thermally hot.  It accounts for over 95% of all radioactivity in waste from nuclear energy.  Because of its radioactivity, high level waste must be shielded and disposed of carefully and responsibly.  Furthermore, because high level waste has a long half life, it can take hundreds of thousands of years to decay.  Herein lies the issue: finding a way to dispose of this high level waste in a permanent way that will keep the public safe.  As of now, most waste is stored in temporary locations, such as underwater at the plants or in nearby facilities where the waste is bound in borosilicate glass and then sealed inside metal containers.

Storage field for Used Fuel

The United States has been trying to solve this issue for decades.  In 1982, Congress passed the Nuclear Waste Policy Act, which set a deadline of 1998 for the national Energy Department to begin moving waste from various plants to a permanent, geological waste disposal site.  However, the United States still does not have such a site.  In 1987, an amendment to the Nuclear Waste Policy Act required the Energy Department to consider the remote desert area of Yucca Mountain in Nevada as the location for this permanent repository.  Proponents hailed the site for its remote location, geological makeup, and relatively low cost and socioeconomic impact on the surrounding area.  Furthermore, scientists deployed a computer model called the Yucca Mountain Total System Performance Assessment, which shows that the planned nuclear waste repository would protect the health of nearby residents for at least 10,000 years.  But political leaders in Nevada call this model "an almost unintelligible mix of fact and wishful thinking."  Critics of the plan were deterred by its permanence.  They argued that it was hard to extrapolate over thousands of years what impact climate change, the metal containers' durability, or even volcanic activity would have on the site.  Basically, the critics feared that the safeguards put in place to protect those living nearby would not be enough.  And in the end, the critics won out: the federal government halted plans for construction at Yucca Mountain in 2010.  Whether this was a positive or negative decision is up for debate.  However, many agree that "underground storage is a practical necessity and political poison."  In other words, some sort of permanent underground storage system is imperative, because the United States has a buildup of nuclear waste but nowhere to store it.  Yet because of the permanence and controversy of such a solution, no politicians want to push for building such a site.  Thus, in the United States, plants keep being built and yet no permanent disposal location has been constructed.

The only country in the world with a permanent, geological disposal site in the works is Finland.  Like Yucca Mountain, Onkalo, located on Olkiluoto Island, is appealing because of its isolated location as well as its ability to contain waste and prevent leaks.  The facility is engineered to store waste safely deep underground before being sealed.  The site is intended to open in 2020 and will continue accepting waste until about 2120, by which point it will hold approximately 5,500 tons of high-level waste.  If this project is followed through to completion, it will be a huge step toward ensuring the future viability of nuclear energy.

One other method of managing this high level nuclear waste is through recycling the spent fuel.  Currently, in the United States, all of the fuel used in nuclear reactions is disposed of.  This is called an open fuel cycle.

Open Fuel Cycle

As we know, this plan is flawed not only because the country lacks a long-term disposal facility, but also because spent nuclear fuel retains about 95% of its energy.  Other countries in Asia and Europe have moved instead to a closed fuel cycle, where the spent uranium goes back into the system to power future reactions.

Closed Fuel Cycle

A huge positive of recycling nuclear fuel is that the process creates, rather than consumes, energy.  It could also act as a way to draw down the stored waste: right now, we could harness enough energy from recycled nuclear waste alone to power the entire United States electrical grid for one hundred years.  Unfortunately, recycling nuclear waste creates waste of its own in return.  However, this final waste has a much shorter half-life than un-recycled waste and would decay to harmless levels within a few hundred years, as opposed to a few million.  The United States is still a few decades away from widespread commercial recycling of nuclear waste.  Once mastered, though, this could be a huge stride on the path to energy sustainability.

All in all, nuclear energy could be a solution to the energy crisis the whole world faces.  The only thing standing in the way is the question of how to dispose of the waste.  Some temporary solutions have been found, but the United States in particular must figure out a method that takes care of high-level nuclear waste in a permanent fashion.  Only then can nuclear energy be a safe and feasible alternative power source.

Hydro-fracking: Environmental Destruction or Fuel Beneficiary?



What originated some 65 years ago in the petroleum industry is now being blamed for ruined water wells around the world. What began as a minor method of fuel generation grew into one of the most popular, and it has now created a worldwide environmental debate.

Fracking is the process of drilling down into layers of the earth to reach a series of rock formations, then injecting a high-pressure mixture of chemicals, water, and sand to release the gas trapped inside [1]. Looked at from the perspective of its benefits, fracking is a necessity: it yields natural gas, a commodity the majority of households and power plants rely on, and the basis of many objects we use today. But from the perspective of environmentalists, the destruction fracking does to the ground outweighs these tremendous benefits.

One of the first questions about this booming industry is why the United States is involved. From a governmental standpoint, the U.S. has long relied on foreign oil, and with relations between the Organization of the Petroleum Exporting Countries and the United States at times growing strained, the country must find a different source of energy.

Let’s begin by looking at the benefits of fracking. The greatest benefit from an economic perspective is the price of natural gas.


Looking at the graph below, you can clearly see that, due to the so-called “Fracking Revolution,” natural gas prices have dropped 47% compared to what they would have been prior to 2013. Similarly, energy consumers saw economic benefit, as gas bills fell by $13 billion between 2007 and 2013 due to fracking. From a geographic perspective, the West South Central region (which includes states such as Arkansas, Louisiana, and Oklahoma) and the East North Central region (Illinois, Indiana, and Ohio) have seen upwards of $200 per person in benefits. Essentially, the technology is widespread and already adopted, and it has shown substantial returns in terms of energy and economic benefit. Yet this does not address the health and environmental aspects.

Now, with all of these benefits, this process seems like it should be embraced everywhere, right? Not so fast. Looked at from a more environmental and resource-conscious perspective, fracking consumes massive amounts of precious resources, starting with the most important resource to humans: water. A study from Duke University determined that “energy companies used nearly 250 billion gallons of water to extract shale gas and oil from hydraulically fractured wells in the U.S. between 2005 and 2014 … During the same period, the fracked wells generated about 210 billion gallons of wastewater” [4]. Referring this back to the economic perspective: not only are massive amounts of water being transported to run these machines, but fuel is being burned by the trucks transporting that water.

Another concern is whether fracking qualifies as “clean energy,” meaning production that releases nothing toxic and leaves no harmful byproduct. Some studies, including the documentary Gasland, highlight the environmental effects and aftermath of these operations. Below is a segment of the documentary focusing on the aftermath at one home in Colorado:

[5] (Begin At 11:00 and end at 16:25)

As you can see, fracking brings at least some environmental issues. The water is not merely unclear; it looks black and filled with harmful chemicals, clearly violating any notion of “clean energy” and showing major signs for concern. Furthermore, what makes many question the fracking industry is the companies’ refusal to come, test the water, provide an honest judgment about the conditions residents are living in, and offer a solution to the problem. As you can hear from the women in the video, the companies do come, but essentially lie to the homeowners’ faces; well water retrieved from the ground should clearly not be black. Moreover, rather than looking at the lives they are affecting, the companies seem to direct their attention toward the economic upside of the potential customers they can reach if they put their staff toward buying or signing fracking rights on other properties.

A recent scientific study titled “The Environmental Costs and Benefits of Fracking” looked more deeply into this video as well as the general process of fracking. The article states: “Primary threats to water resources include surface spills, wastewater disposal, and drinking-water contamination through poor well integrity. An increase in volatile organic compounds and air toxics locally are potential health threats, but the switch from coal to natural gas for electricity generation will reduce sulfur, nitrogen, mercury and particulate air pollution.” The article goes on to list further hazards, suggesting that over 36% of the underground water in the United States, water that could be used for essentials such as drinking and agriculture, could be ruined if fracking continues for another five years.

Focusing on the whole United States may be too hard to imagine, so let’s scale down to a local area: Philadelphia. According to the Council of the City of Philadelphia, there have been major problems with contamination due to fracking, leading the Department of Environmental Protection and the Delaware River Basin Commission to step in to ensure the health and safety of the region’s drinking water [7]. If a more localized area like Philadelphia recorded over 300 people affected, the same problems at a national scale could be affecting millions. Another article, titled “The Health Implications of Fracking,” explains some of the reasons these issues have spread nationwide: failure of the structural integrity of cements and casings, surface spills, leakage from above-ground storage, and the structural integrity of heavy transport vehicles are all plausible causes. But whichever failure mode is to blame, it is fracking that is causing this pollution of the water, which eventually leads to these health issues. In theory, then, eliminating fracking should decrease the United States’ water pollution and, eventually, its health problems.

One of the most shocking studies, shown below, compares estrogen and androgen content in soil and water samples from regions with fracking versus regions without. As you can see from the image, the combined estimated marginal means of estrogenic, antiestrogenic, and antiandrogenic activities are much higher at the groundwater level than at the surface level. Based on this data, it is a reasonable assumption that the groundwater was affected largely by fracking, creating these massive fluctuations in these chemicals. And because of these chemical imbalances, the water can become discolored and lead to future illnesses and hospitalizations worldwide.


Beyond the health risks lie permanent environmental risks, something much more dramatic. As you will see in the BBC video below, one theory suggests that a recent earthquake could have been caused by fracking.

[9] (Start Video At 2:00)

Now, if this theory is correct, think about the potential devastation fracking could cause. The UK has only just begun its fracking boom; for countries such as the United States, where fracking is far more widespread, the potential disasters could come sooner and on a larger scale. Every risk comes with a reward, but a risk to both health and the environment may be a good reason to shut fracking down.

Now, in terms of future fracking endeavors, there has been some speculation about which agencies are overseeing fracking and how successfully they have done so thus far. The United States Congress urged the U.S. Environmental Protection Agency to conduct an in-depth study of hydraulic fracturing and its effects on groundwater. Congress outlines on the EPA’s website that it wants the agency to “assess the potential for hydraulic fracturing for oil and gas to change the quality or quantity of drinking water resources, and identifies factors affecting the frequency or severity of any potential changes. This report can be used by federal, tribal, state, and local officials; industry; and the public to better understand and address any vulnerabilities of drinking water resources to hydraulic fracturing activities” [11]. However, there has been much skepticism on this topic, including about the EPA’s ability to run an in-depth investigation, as states and regions such as Wyoming have emerged and openly stated that fracking has caused a major environmental issue within their borders [12]. If you would like to watch a video on this topic, click here.

As you watch the video, bear in mind that this is one investigation of many, something that isn’t new to the EPA. Tying back to earlier in this discussion, the Gasland documentary has pushed the United States government to run its own third-party investigation in an attempt to put this debate to rest; and if there is a problem (and it seems there is), there should be an immediate solution. Although states such as Wyoming and Illinois may need fracking to produce jobs and economic stability, it should not come at the cost of environmental destruction.

From this discussion, I hope you come away truly understanding the controversy surrounding this process. Society seems divided between economists and environmentalists. People for fracking focus on the economic benefits of lower costs and the prospect of self-sufficiency from OPEC, which sounds reasonable from a pure business point of view. However, when taking into account environmental factors, health, and potential natural disasters including earthquakes, more caution is warranted. There needs to be a more immediate study by the Environmental Protection Agency or an outside third party to determine the true effects of this oil and gas production process. The last thing society needs, especially in the United States, is a disaster that ruins the economy and the environment simultaneously.





[1] https://www.asme.org/engineering-topics/articles/fossil-power/fracking-a-look-back

[2] http://www.bbc.com/news/uk-14432401

[3] http://www.brookings.edu/blogs/brookings-now/posts/2015/03/economic-benefits-of-fracking

[4] http://www.sciencedaily.com/releases/2015/09/150915135827.htm

[5] https://www.youtube.com/watch?v=cutGpoD3inc

[6] http://www.annualreviews.org/doi/full/10.1146/annurev-environ-031113-144051



[9] https://www.youtube.com/watch?v=_E3A-D8mAb4

[10] http://www.napavalley.edu/Library/PublishingImages/fracking-infographic.jpg

[11] http://www2.epa.gov/hfstudy/executive-summary-hydraulic-fracturing-study-draft-assessment-2015

[12] http://www2.epa.gov/sites/production/files/documents/EPA_ReportOnPavillion_Dec-8-2011.pdf









The Controversy Surrounding Nuclear Waste: What Should We Do with It?

The first step in effectively analyzing this problem is understanding how we generate energy through nuclear power, starting with its fuel source. Uranium is the main source of energy used in power plant reactors.  Uranium is a heavy metal that contains large amounts of concentrated energy when properly harvested, and it is present in the earth’s crust as well as in seawater.  Uranium occurs in different isotopes; the two used as fuel are U-238 and U-235.  The difference between them lies in the nucleus, in the number of uncharged particles (neutrons).  The isotope U-235 is interesting because under the correct conditions it can be easily split, unleashing large amounts of energy.  This process is known as nuclear fission: the nucleus of an atom, made up of protons and neutrons, is unstable and splits into smaller parts.  The splitting nuclei release neutrons, the neutrons strike other atoms, and those atoms split in turn.  One fission reaction thus triggers a chain reaction, and the process becomes continuous and self-sustaining.  U-238 is fissionable but needs an energetic neutron to start the fission process; this isotope decays extremely slowly, with a half-life of 4,500 million years!  However, most of the uranium present in nuclear fuel is U-238.  Now that we understand the fuel source of nuclear power, we can discuss what goes on in nuclear power plants and the reactor core.

Nuclear Fission Reaction
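The chain reaction described above hinges on a single number, the effective multiplication factor k: the average number of neutrons from one fission that go on to cause another. A minimal sketch of how sensitive the reaction is to k (the specific values 1.02 and 0.98 are illustrative, not measured):

```python
def neutron_population(k, generations, n0=1.0):
    """Neutron population after a number of fission generations: n = n0 * k**generations."""
    return n0 * k ** generations

# k > 1 (supercritical): the chain reaction grows exponentially.
# k < 1 (subcritical): the chain reaction dies out.
print(neutron_population(1.02, 100))  # grows roughly sevenfold over 100 generations
print(neutron_population(0.98, 100))  # shrinks to roughly an eighth
```

Control rods work precisely by nudging k below 1, which is why inserting them is enough to shut the reaction down.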


Reactor Construction   

There are a vast number of different types and constructions of nuclear reactors, such as boiling water, pressurized water, breeder, and fast neutron reactors.  The two types used in the United States are boiling water reactors and pressurized water reactors.  In both, water is needed to create steam, and the steam drives turbine generators which create electricity.

In pressurized water reactors, the water is heated but kept under pressure to prevent it from boiling.  The heated water is moved through tubes past the turbine generators’ supply, where a second loop of water turns to steam, which moves the turbine and produces energy.  The water in the reactor and the water used to make steam are kept separate and never mix.

Pressurized Water Reactor Diagram

Boiling water reactors operate differently: the water is heated directly by fission, boiled, and turned into steam, which turns the turbine generator.  In both types of reactors, the steam is condensed back to water and used again.

BWR Schematic

Refinement Process

After uranium is mined, it undergoes an enrichment process so it can be used as fuel.  The process requires the uranium to be converted into a gaseous form: it is transformed into uranium hexafluoride (UF6), which sublimes into a gas.  Enrichment raises the U-235 content from the natural 0.7% to approximately 5%.  The uranium hexafluoride is then brought to a fuel fabrication plant, where it is converted into uranium dioxide powder.  The powder is compressed into small pellets, which are heated until they become a hard ceramic.  The newly created pellets are inserted into small tubes to form fuel rods, and the rods are placed together to form a fuel assembly.  The number of rods in each assembly differs with the type of reactor.
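How much natural uranium does that enrichment take? A simple two-component mass balance gives a rough answer; the 0.25% tails assay below is an assumed (though typical) value, not one from this post:

```python
def feed_per_product(x_product, x_feed, x_tails):
    """Kilograms of feed uranium needed per kilogram of enriched product.

    From the mass balance F * x_feed = P * x_product + (F - P) * x_tails,
    where F is feed mass, P is product mass, and x_* are U-235 fractions.
    """
    return (x_product - x_tails) / (x_feed - x_tails)

# Enriching natural uranium (0.7% U-235) to 5% reactor fuel, leaving 0.25% tails:
ratio = feed_per_product(x_product=0.05, x_feed=0.007, x_tails=0.0025)
print(f"about {ratio:.1f} kg of natural uranium per kg of 5% fuel")
```

So, under these assumptions, every kilogram of fresh fuel starts as roughly ten kilograms of mined uranium, which helps explain why enrichment is such a large part of the fuel cycle.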

Uranium Ore
Fuel Pellet
Fuel Rod

The controversy arises from the spent fuel rods, the byproduct of generating nuclear energy.  As the fission reaction continues, the fuel rods become used up until they are no longer useful.  Plant workers use control rods to cease the reaction and remove the spent fuel from the reactor core.  When they are first removed, the rods are highly radioactive, so they are moved to a cooling pool and submerged in water, which shields their radioactivity and allows them to cool.  The rods cool in the pools for about 5 years before they are moved.  They are then transferred to dry cask storage: casks made of reinforced concrete with steel liners.  Here they continue the decay process, which, given U-238’s half-life of approximately 4.5 billion years, is effectively endless on human timescales.  The rods are stored here until they can be permanently stored elsewhere.  This is where the dilemma of where to store nuclear waste arises: we have been unable to decide on the best place to dispose of it.  Many critics of nuclear power see this as a major flaw and drawback, since the consequences of the waste being released back into the environment could be catastrophic.

Spent Fuel Rod Pool
Dry Cask Storage Diagram

Yucca Mountain, Nevada

In 1982, the Nuclear Waste Policy Act was passed by Congress.  This legislation required the Department of Energy to create a storage site for spent fuel rods and other radioactive waste; waste from nuclear power plants was supposed to be moved off site to the new repository.  In 1987, a potential site was picked and underwent inspection and examination: Yucca Mountain, located in a remote desert region of Nevada.  The waste was to be stored deep inside the mountain in an underground repository.  The site has the characteristics of a good storage location: the mountains are extremely secluded and therefore pose very little danger in case of a leak, and the mountain is naturally hard and thick, so it would be difficult for anyone to attempt to reach these materials.  However, a major risk and flaw of this design is radioactive waste leaking into the surrounding groundwater; a leak-proof containment area or buffer zone would have to be created before the idea is viable.  Geologists canvassed the area, testing the surrounding rocks and their mineral composition to determine whether the surfaces are permeable.  Ultimately, all the capital and work put into the potential site was in vain.  Many of Nevada’s citizens vehemently opposed the project from its inception; they didn’t want to be the state where all of the US nuclear waste was stored.  This is an example of the NIMBY (“not in my backyard”) syndrome: opposition to a new development because of its proximity to one’s community, a phenomenon that has plagued the development of nuclear power plants in certain areas.  In 2010, President Obama revoked the Yucca Mountain license review, effectively ending the project.  Finding a new location was assigned to the Blue Ribbon Commission on America’s Nuclear Future, whose stance is that deep geological disposal is the best option to proceed with.

Yucca Mountain, Nevada
How The Waste Would be Stored


In depth look at the storage system

Since the US government decided to close down the Yucca Mountain repository, which was our best option at the time, we are left with a huge dilemma on our hands.  Where do we put the waste now?  We are in dire need of an alternate solution!  Nuclear waste will become a huge part of our future problems if we do not find a viable option as soon as possible.  A variety of solutions have been proposed, many of which could be considered far-fetched or even impossible.  But at this point, it would be wise not to dismiss these ideas, and instead to explore them fully to determine the best course of action.

Deep Geological Disposal / Boreholes

As mentioned before, deep geological disposal is a popular idea at the moment.  Simply put, radioactive waste is buried deep underground; how it is deposited and buried, however, is the source of debate.  Spent fuel rods would be encased in steel and then buried miles below the surface of the earth, preventing the waste from being accessed easily or released unintentionally.  Another advantage is that boreholes can be placed close to the power facility, which reduces the risk of transporting radioactive waste to an off-site location.  A major downside is that plutonium recovery would be extremely challenging and complicated: spent nuclear fuel can eventually be reprocessed to recover fissionable materials, which is useful because it provides fuel to existing and future power installations, but safely pulling waste three miles back up to the surface is a daunting task.

Deep Borehole

Sites similar to the Yucca Mountain project can be considered a form of geological burial.  However, a concept originating in the Czech Republic has improved on the design and offers a possible solution to groundwater contamination.  A hydraulic cage is a moat-like structure built around the waste containers, creating an alternate path for contaminated liquid in case of a leak.  A fully leak-proof storage system has not been developed yet, so the hydraulic cage is currently a viable option and solution.

Deep Underground Storage

Outer Space

Hubble Space Telescope Image

NASA’s Jet Propulsion Laboratory has its own take on the matter: shooting spent fuel rods into space.  The idea is that since the universe has natural radioactive properties, it should also be able to take radioactivity away.  The sun is an enormous, constantly running nuclear reaction, approximately 330,000 times the mass of the earth; at that scale, more than 10,000 tons of spent fuel rods could be absorbed very easily.  Radioactive material would pose little threat to humans if it were floating through space before eventually being absorbed into the sun.  However, getting it there safely is the major impediment to this idea: there is no one-hundred-percent foolproof way to do so.  Rocket launches and lift-offs have failed in the past, resulting in fires, crashes, and explosions, any of which could spread radioactive material into the atmosphere and over large areas quickly.  Also, if we ever wanted to recover these materials for reuse, we could not; they would be floating in the solar system.

Ice and Glaciers

In the 1970s, it was proposed that we could bury waste in glaciers by placing a sphere of waste into an ice sheet and letting the ice re-solidify around it.  This idea was dismissed quickly, for obvious reasons: ice sheets move, which could send the waste floating into the ocean as a radioactive iceberg, and the iceberg could also melt, leading to a toxic leak into the sea.


Deep Sea Storage


The bottom of the seafloor is composed of a heavy, thick clay.  Coincidentally, this material is great at absorbing radioactive decay byproducts.  The storage process would require deep underwater drilling and boring.  This method becomes a touchy political subject because the waste would be stored at sea rather than on land, which would require international cooperation to be successful.

Synthetic Rock Material

This material was specially designed to absorb and retain waste products.  It comes in different forms to cater to the variety of waste being disposed of, and it is designed to imitate geologically stable materials: it won’t leak the waste outside of the containment area.


When describing waste, it is often difficult to visualize and comprehend the gravity of the issue, so I will provide some quantitative figures and qualitative comparisons to help.

A typical nuclear power plant annually generates about 20 metric tons of used nuclear fuel, and the entire nuclear industry creates about 2,000-2,300 metric tons of used fuel per year.  According to the Nuclear Energy Institute, “Over the past four decades, the entire industry has produced 74,258 metric tons of used nuclear fuel. If used fuel assemblies were stacked end-to-end and side-by-side, this would cover a football field about eight yards deep.” That’s a lot of waste!
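Those figures are easy to sanity-check against each other: at roughly 2,000 metric tons per year, four decades of operation lands in the same ballpark as NEI’s cumulative total.

```python
annual_tons = 2_000   # low end of the 2,000-2,300 metric tons/year figure above
years = 40            # "the past four decades"
total = annual_tons * years
print(total)          # 80000 metric tons, the same order as NEI's 74,258
```

The slight gap makes sense, since the industry produced less waste per year in its early decades than it does today.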

The World Nuclear Association provides some figures regarding LLW and HLW, “Low-level waste (LLW) and most intermediate-level waste (ILW), which make up most of the volume of waste produced (97%), are being disposed of securely in near-surface repositories in many countries so as to cause no harm or risk in the long-term….The amount of HLW produced (including used fuel when this is considered a waste) is in fact small in relation to other industry sectors. HLW is currently increasing by about 12,000 tonnes worldwide every year, which is the equivalent of a two-storey structure built on a basketball court or about 100 double-decker buses and is modest compared with other industrial wastes.”

After careful consideration of my research and all the options, I have come to believe there is a viable solution to this large problem.  My solution combines various methods to create the most effective system of storage.  I would store the waste in a deep underground repository in a remote, heavily monitored area.  People would not be allowed to live in close proximity to the facility, minimizing the potential harmful effects of accidental radiation exposure.  The underground storage area would be lined with synthetic rock material to prevent groundwater contamination, and the waste containers would be surrounded by a hydraulic cage, so that in the case of a leak the waste would have an alternative route and not contaminate groundwater supplies.  The area would also be accessible enough for efficient plutonium and material recovery.  I believe this idea takes the positive attributes of all the options to create an almost infallible storage facility.


  2. “Nuclear Fission.” Nuclear Fission. N.p., n.d. Web. 18 Sept. 2015.
  3. “What Is Uranium? How Does It Work?” World Nuclear. World Nuclear Association, n.d. Web. 18 Sept. 2015.
  4. “How Do Nuclear Plants Work?” -Duke Energy. Duke Energy, n.d. Web. 18 Sept. 2015.
  5. “How Nuclear Reactors Work.” – Nuclear Energy Institute. N.p., n.d. Web. 18 Sept. 2015.
  6. “The Plan for Storing US Nuclear Waste Just Hit a Roadblock.” Wired.com. Conde Nast Digital, n.d. Web. 18 Sept. 2015.
  7. “Repository Development and Disposal.” Nuclear Energy Institute, n.d. Web. 18 Sept. 2015.
  8. “Finland’s Crazy Plan to Make Nuclear Waste Disappear.” Popular Mechanics. N.p., 10 May 2012. Web. 18 Sept. 2015.
  9. “NuclearConnect – ANS.” ANS. N.p., n.d. Web. 18 Sept. 2015.
  10. “What Is Nuclear? / Nuclear Waste.” What Is Nuclear? / Nuclear Waste. N.p., n.d. Web. 18 Sept. 2015.
  11. “Radioactive Waste Management.” World Nuclear Association, Aug. 2015. Web. 18 Sept. 2015.
  12. All images courtesy of Google.

New and Improved: The Smart Energy Grid

In theory, something with the word “smart” in its title sounds innovative and beneficial to our society. But does that measly adjective fully capture the vast scope of the topic? With our rapidly expanding technology and increasing energy consumption, we need the Smart Energy Grid.

So what is the grid, and what does it encompass? The “grid” to which the title refers is a system for delivering electricity, comprised of transformers, wires, substations, transmission lines, switches, and a multitude of other mechanisms. Together, this network transports electricity from power plants or wind farms to one’s home or business. The electric grid on which our society relies came about in the 1890s. It began as a series of small grids that were not connected to each other and were hastily and cheaply built. In this video, Maggie Koerth-Baker speaks about her book, Before the Lights Go Out, and the history of the first energy grids:


In the video, Maggie introduces H.J. Rogers, a name many people don’t know. Rogers, a very rich man, lived in Appleton, Wisconsin, and in the 1880s he received a license to Thomas Edison’s work concerning electricity. Rogers’s advancements contributed much to the electric world; Edison only barely beat Rogers in getting an electric grid running. As technology and understanding have developed over the past century, so too has the grid. For more on the electric grid and its history, click here.

The current energy grid comprises at least 9,200 electric generating units connected by more than 300,000 miles of transmission lines. If you find yourself surprised at these statistics, you should be: the grid is considered a feat in the realm of engineering. However, this electric network is reaching its capacity, the limit of its expansion, whereas our society’s energy usage is not. We need an alternative, and fast. Here’s where the Smart Grid enters the scene, and rightfully so:

As the U.S. Department of Energy describes in the video, the Smart Energy Grid will serve as a “new and improved” network, capable of keeping up with the technological advancements that constantly occur in today’s society. How else will we be able to create the iPhone 14s? Beyond the digital aspects, the grid will focus on two-way communication, rather than the one-way interaction of the previous system. In other words, the energy web will respond to the demand of its customers: us. The transformation promises an increase in power efficiency, meaning a quicker and more dependable process of delivering energy and electricity.

The Positives

Let’s address the hypothetical functions of this ideal grid. The system will incorporate power generation systems, including large-scale renewable energy systems. It will also provide better security, lower utilities’ operational expenses, and reduce electricity costs for buildings, both commercial and residential. In an environmental sense, the new grid will decrease greenhouse gas emissions and increase energy conservation and efficiency, as mentioned above. The grid is expected to cut greenhouse emissions by 13-25%, roughly equal to taking 1-2 million cars off the road each year¹. At the same time, energy consumption is expected to decrease by about 4%¹. And while it would not be difficult to accidentally use more energy with the grid, smart meters and other measuring devices record this energy usage, so we’re always in the know. Finally, the network will restore electricity more quickly following power outages, otherwise known as blackouts.


Blackouts impact security, communication, the economy, and many other aspects of life. They can be caused by a conglomeration of issues, sometimes completely unrelated to each other. For instance, in August of 2003, 55 million people in eight U.S. states and Canada lost power in their homes, and 246 power plants went offline. The blackout was caused by a generator outage and two defective monitoring programs. This event, known as the Northeast Blackout of 2003, had detrimental economic impacts, ultimately costing the U.S. somewhere between four billion and ten billion dollars. Blackouts like these occur all the time, usually due to factors like lack of communication, neglected transmission lines and electrical sites, and the domino effect that can occur in a network of smaller systems like this. More than anything, no one was looking at the grid as a whole.


The Smart Energy Grid, however, is designed to eliminate the causes and effects of these power outages. With more efficiency, it will take less than ten seconds for controllers to see the “big picture.” There will also be more significant penalties for failing to maintain facilities, like trimming trees around transmission lines. In theory, the new grid will allow controllers to see a problem more quickly and fix it either before, or directly after, the blackout. The grid will also isolate outages, preventing them from spreading further, and will allow easy rerouting when systems fail or go offline. With smart meters dispersed throughout the network, controllers can “ping” the meters to see whether power has been restored to all customers. While there are still a number of components to sort out regarding the new grid, the prospect of saving $100 billion per year (the average annual cost of blackouts) motivates scientists and engineers to keep trying.

A smarter grid can also help during extreme weather events thanks to its increased resiliency. The two-way interaction, mentioned above, will allow the network to automatically reroute when equipment or systems shut down. This lessens the effects of emergencies such as earthquakes, severe storms (including hurricanes and tsunamis), attacks, and widespread fires. The grid will also ensure that power is returned to emergency services first, and will draw on power generators, specifically customer-owned ones, when the utilities cannot provide electricity. Some Smart Grid technologies could be seen during the recovery from Hurricane Sandy, as customers with smartphones could receive updates. The key was communication, especially between field personnel and those in the control room. All these ideas add to the beneficial elements of the new grid. Read more about how a smarter grid could help in extreme weather situations.

After receiving $200 million in funding through the Recovery Act, CenterPoint launched a pilot program in which it provided 300 customers with a smart meter (see picture below) and an in-home energy monitor that let them observe their home energy use. When the company collected initial results from the experiment, more than 70 percent of participants responded that they had already taken steps to decrease their energy usage based on the real-time information their in-home energy monitor provided. Additionally, 97 percent of the participating customers planned to continue using their energy system after the pilot program finished. This program showed how customers may benefit from the new technology of a smart grid.


The Negatives

Unfortunately, we cannot decide to implement the Smart Energy Grid one day and install it all over the world the next. There are a multitude of economic, technological, and scientific obstacles that go hand-in-hand with the transformation. First, it would create a significant economic strain on governments. The National Institute of Standards and Technology (NIST) has begun coordinating the rollout of “smart meters,” which allow those living and working in buildings to monitor and regulate their electricity usage. Around $40 billion to $50 billion is expected to be invested in smart meters alone. The Federal Recovery Act has invested $7.8 billion in ameliorating the grid³. While many initial programs, like the smart meter, have received funding from stimulus laws or campaigns, and a portion of the population argues that the system will save money, implementation of the Smart Energy Grid as a whole will require a substantial amount of economic aid to succeed nationwide. Read more about the start of the Smart Grid installation here.

There are also technological hurdles when it comes to updating the grid. One of the main problems is that society is constantly changing where we get electricity, how much we use, and who uses it. The last decade alone has proven how quickly technology can transform. Not only could the smart grid struggle to keep up with these advancements, but there could also be compatibility issues when new technology is introduced during installation. The transformation to a Smart Energy Grid will require many users of the old grid, meaning most of society, to adapt to the digital aspects and requirements of the new one.

In this video, James Woolsey, former CIA director, claims smart energy grids are “stupid,” because devices such as smart meters are not resistant to hacking. He fears the technology could be broken into, completely shutting down the grid.

Many people also worry about the scientific factors of the Smart Energy Grid; more specifically, they fear for their own health. The system would involve constant exposure to radio frequency (RF) radiation, because appliances would transmit data through power bursts that critics claim sit well above safety standards. Smart meters, explained earlier, would also emit at an even higher frequency to hubs located in neighborhoods. In the past, many have claimed that exposure to RF radiation can lead to infertility and cause medical implants, like insulin pumps or pacemakers, to malfunction. This is not proven, however. The Environmental Defense Fund states that the radiation from smart meters is like that from cell phones, but weaker¹¹. And while society worries about exposure to cell phone radiation, it is a non-ionizing form of radiation that is not extremely harmful, unlike alpha or beta radiation. These risks do not compare to the 30% cut in air pollution that the Smart Grid is expected to produce by 2030¹¹. Around the 30-minute mark in the video below, Dr. Dietrich Klinghardt speaks about effects noticed upon smart meter installation. Of the many impacts he discusses, Klinghardt specifically notes copper increase, TGF-Beta 1 increase, and hormone abnormalities. These are often the effects that supporters of the Smart Energy Grid either forget or purposefully disregard.

With the new grid, the storage of power could prove difficult. Some solutions? One idea is to store energy as large amounts of compressed air in geological vaults. Others have proposed a system of compact, energy-dense batteries in homes. The most popular idea, however, is to improve the existing lithium battery. Lithium batteries are the most dependable option, as they have high efficiencies, can be recharged, and can hold high densities of energy, all of which is necessary for the Smart Energy Grid.

Finally, it is important to look at the expectations for the Smart Energy Grid. The Brattle Group, which “provides consulting services and expert testimony in economics, finance, and regulation to corporations, law firms, and public agencies,” projects 6.5% energy savings for customers. In addition, the Electric Power Research Institute (EPRI) believes that reducing transmission line losses through voltage regulation could save 3.5 to 28 billion kWh in 2030. It is also expected that net annual CO2 emission reductions could reach between 0.7 Gt and 2.1 Gt by 2050².






  1. https://www.smartgrid.gov/files/your_smart_grid_environmental_benefits_toolkit_11-2008.pdf
  2. https://www.iea.org/publications/freepublications/publication/smartgrids_roadmap.pdf
  3. http://www.scientificamerican.com/article/smart-grid-nist-standards-commerce-department/
  4. http://www.c2es.org/technology/factsheet/SmartGrid
  5. http://boingboing.net/2012/08/03/blackout-whats-wrong-with-t.html
  6. http://www.resilience.org/stories/2011-03-23/problems-smart-grids
  7. https://www.ase.org/resources/realizing-energy-efficiency-potential-smart-grid-alliance-white-paper#DataDisplay
  8. https://www.smartgrid.gov/the_smart_grid/smart_grid.html
  9. http://e360.yale.edu/feature/the_challenge_for_green_energy_how_to_store_excess_electricity/2170/
  10. http://energy.gov/articles/how-smart-grid-helps-homeowners-reduce-their-energy-use
  11. https://www.edf.org/SmartMeterResponse


The “Controversy” of Hydrofracking

Many citizens of the United States have heard the term fracking through some exposure to it. Although the term is well known, few people actually understand the process, its benefits, or its possible drawbacks. This miscommunication has left much of the population with ideas that tend to be incorrect, and these misconceptions have shaped decisions about how to proceed. This vagueness persists because there has yet to be a true, in-depth analysis of all of the benefits and drawbacks of the process. Therefore, although many drawbacks have been claimed, there is not yet conclusive evidence that this new energy producer should be ruled out as a possible solution to our reliance on overseas energy. Furthermore, to get a true gauge of public opinion on hydrofracking, we need to ensure that the population fully understands the process. Simply put, there are too many false ideas present to automatically rule it out as an unsafe method of producing energy without also considering its advantages.


As previously mentioned, the first way to help people fully understand hydrofracking is to educate the public on the process of extracting the natural gas. To begin, the goal of hydrofracking is to pull natural gas out of shale formations beneath the surface of the Earth. Specifically, hydrofracking in the United States has been focused on two major shale formations: the Utica and Marcellus shale deposits. The process involves drilling a well that goes, on average, about 1000 feet below the surface. Once the well has reached this depth, it changes direction from vertical to horizontal, because the shale is not especially wide but rather a very long stretch of rock; horizontal wells therefore allow the drillers to access more of the shale deposit. Next, the drillers bring in trucks that inject a fluid carrying a proppant, a mixture of liquids and sand whose goal is to make the original perforations bigger while leaving the sand behind. By remaining in the shale deposits, the sand holds the rock open, letting the methane and other natural gases escape and travel up the well to be collected by the drillers. The drillers repeat this process all along the horizontal portion of the well. Once they have created as many pads as possible within the well, they begin the extraction process, in which water first flows back up the well, followed by the natural gas or oil. The water that first comes back up is approximately 30 percent of what was originally injected.
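The 30 percent flowback figure above can be turned into a rough volume estimate for a single well. The injected volume below is an assumed round number chosen purely for illustration; only the 30% return fraction comes from the text:

```python
# Rough flowback estimate for a single fracked well.
# The 30% return fraction comes from the text; the injected volume
# is an assumed figure for illustration only.
INJECTED_GALLONS = 4_000_000   # assumed water injected per well, gallons
FLOWBACK_FRACTION = 0.30       # ~30% returns up the well (from text)

flowback = INJECTED_GALLONS * FLOWBACK_FRACTION
remaining_underground = INJECTED_GALLONS - flowback

print(f"Flowback to handle: {flowback:,.0f} gallons")
print(f"Left underground:   {remaining_underground:,.0f} gallons")
```

Even under this modest assumption, each well would produce on the order of a million gallons of flowback water needing treatment or disposal, which is why the handling options discussed below matter so much.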



Although the process of hydrofracking is described in the previous paragraph, it is stated from a perspective that aims to be neutral. Thus, a highlight of the perceived negative aspects of the process follows. The most well-known alleged effect of hydrofracking is the ability to set your tap water on fire. As seen in the video below, featured prominently in the movie “Gasland,” hydrofracking apparently has the ability to force gases to infiltrate and contaminate the water supply of areas surrounding the drilling location. However, a subsequent report, written by the State of Colorado Oil and Gas Conservation Commission, showed that the movie had a major flaw in blaming this gas leak on fracking. The Commission begins by explaining the difference between biogenic and thermogenic methane. The key difference, for the sake of fracking, lies in the creation and composition of these two gases. Biogenic gas is created naturally through the decomposition of buried organic material and is composed almost entirely of methane. Thermogenic methane, meanwhile, is created through the fracturing of rocks that release the gas, and it features many other gases in its composition. The makeup of the gases matters because it allows scientists and labs to collect samples and determine the probable source of methane that contaminates an area. Through this process, Colorado was able to test multiple sites featured in Gasland and found that the primary source of methane was biogenic, not thermogenic. In one case, the tests did reveal a mixture of thermogenic and biogenic methane, and the two parties agreed to a settlement. There is therefore a clear public misconception about the damage done to surrounding areas while fracking is occurring.
This claim can be made because of the movie’s widespread audience and the broad acceptance of all that it presented; meanwhile, the counter-movement to the movie is relatively unheard of, as seen in the low name recognition of the counter-film, “Truthland.”


These facts being stated, there is other evidence surrounding fracking that can easily be seen as a drawback. The biggest drawback can be seen in one key issue: injection fluid. The argument can be made that this is the biggest issue because it not only causes environmental damage but also showcases the unregulated side of fracking, which poses a potential danger to the surrounding communities as well as the environment. Many believe that the water injected into the wells is monitored under the precautions set out by the Safe Drinking Water Act (SDWA); in reality, the SDWA contains a provision directly stating that the injected water is not regulated, nor is it considered an underground injection. The Act does still control the injection of diesel fuels, requiring companies to receive approval from the EPA before proceeding with injections involving diesel. Moreover, there is a glaring problem with the water that is extracted from the ground. This water is commonly referred to as flowback, and it poses the risk of destroying environments if mishandled, as it often is. Flowback water can contain a myriad of toxic chemicals, brine, and radioactive materials. As discussed in this article, there are methods being put to use that recycle the water; however, the amount of flowback water produced that cannot be recycled is exorbitant. This water, more often than not, is disposed of in a manner that is not environmentally friendly.
The possible solutions for dealing with this water are as follows: pay a large amount of money, digging deep into your profit margins, to correctly treat and dispose of the water; secretly treat the water in a sub-par manner (highly illegal) and then dump it into another body of water or into the ground; or, a commonly chosen solution, take the water to another state and inject it deep into the ground, which is legal but destroys the ecosystem. The final solution is chosen most often because, although it seems to have high overhead, with transport and injection costs, it gives companies an easy way out that nine times out of ten will carry no further costs to them, unlike other treatment methods.

The makeup of an average injection fluid. http://graphics8.nytimes.com/images/2011/07/08/science/frack/frack-blog480.jpg

As seen in the previous paragraph, there are problems associated with the way hydrofracking is presently carried out; however, due to the negative public perception of hydrofracking, the research needed to improve these methods has largely ceased. Fortunately, there has still been a major discussion of the hydrofracking process put forth by the MIT Technology Review. This discussion raises the idea of substituting the water used in hydrofracking with a gas, specifically carbon dioxide. Using carbon dioxide can be just as effective as water, and it has already been used by certain energy companies, but only in small settings. The reason this process has not been adopted more widely is that, with current access to carbon dioxide, it would not be cost effective to switch over to this method entirely. Still, there is a clear correlation between the public distaste for hydrofracking in general and the inability to research and run trial periods for this method across the country. The method is not proven to be completely effective, but in order to move forward there needs to be more public acceptance of the process.

An example of a flowback water holding site. http://www.swarthmore.edu/sites/default/files/styles/main_page_image/public/assets/images/environmental-studies-capstone/Overspray%20Frack%20Water%20Waste%20Pit.jpg?itok=ZkiobGlX
Therefore, there is not enough conclusive evidence to state whether the process of hydrofracking is, on balance, more negative or positive for the environment and for society. Hydrofracking needs to remain an option to be thoroughly researched because of all that this newfound energy source could yield for the country. The resource potential of the United States, although not as strong as China’s, is extremely high. This strong resource potential shows that hydrofracking, carried out in an improved manner, could take America’s reliance on overseas countries down to levels never before seen, driving down the price of energy supplies and strengthening the American economy as well. This independence is shown by the graph below. The graph, featuring predictions from the United States Energy Information Administration, shows that gas from shale deposits could grow to supply 31% of the United States’ energy. Furthermore, hydrofracking is beneficial as it shifts more consumers away from other energy sources toward natural gas, which burns in a manner that is more environmentally friendly than products such as coal.

Given both the stated drawbacks and benefits of hydrofracking, more conclusive data needs to be researched and studied to see whether hydrofracking, in some capacity, should be implemented again in the United States. As public misconceptions of hydrofracking rose, along with the acknowledgement of true problems, the process declined in popularity, and more and more companies wanted to distance themselves from its negative image. However, these misconceptions were created by the popularization of materials that were misguided in their presentation of the associated issues. Due to this blurring of truth, as well as the failure to correctly present the truth when it was found, the process of hydrofracking was placed in a negative light, which in turn stifled innovation in the hydrofracking field. Thus, when presented with a possible solution to a real problem in the process, energy companies were unwilling to accept it and put it into action. This is the essential issue currently plaguing hydrofracking: not that it is a bad process, but rather that it has not been fully evaluated for all that it can provide. There needs to be a reopening of the hydrofracking debate, given all of the new information that has come to light in recent years, to decide whether the process is, or can become, one that is more beneficial than destructive for both the communities reliant on it and the ecosystems supporting it.

Energy of the Future: Fusion and the ITER

The Energy Problem

As humans begin to deplete our store of natural resources, we face a rising uncertainty about whether we will be able to sustain our current lifestyles in the future. We have become comfortable in our mode of energy consumption, and the vast majority of us are ignorant of our destructive patterns. We have abused the earth and are quickly facing a crisis unlike anything we’ve encountered. If we wish to continue living our lives with such extravagance and convenience, we must turn to other paths of energy production and management. That is why I have chosen to write about ITER and its potentially transformative features that could revolutionize how we get our energy. ITER stands for International Thermonuclear Experimental Reactor, and it is an agreement among the international community to solve our energy crisis by means of nuclear fusion. ITER and its mission to create sustainable energy in seemingly infinite amounts is one that should be supported by all international governments and promoted by all scientific organizations.

ITER and Cadarache

When people hear the word fusion, not much comes to mind. Perhaps some will know of the reaction that happens on our sun, turning hydrogen into helium. But when people hear about fusion here on Earth, they turn to disbelief. What was once disbelief, however, is now becoming a reality. Through years of technological innovation and international cooperation, ITER has come to life. The ITER project is based at the Cadarache facility, located in the commune of Saint-Paul-lès-Durance, France.


The Cadarache facility has been functioning as an atomic energy research center since 1959, when Charles de Gaulle started France’s energy program. Though ITER’s aspirations are large, Cadarache is not the only place to house nuclear fusion research in Europe. Since 1970, forty laboratories have opened in efforts to learn more about fusion on Earth and how to make it efficient. Much of the research we have today is due to collaboration between scientists in Europe and from around the world. Given France’s history of using nuclear power as its main source of energy, it was appropriate to house projects like ITER in that country.


For more information about Cadarache or nuclear research in Europe click here (Cadarache) and here (Euro-Fusion).

What is Fusion?

The aim of the scientists working on ITER is to produce a machine (a tokamak) that replicates the process occurring on the sun. There are but two types of nuclear reactions, both releasing immense amounts of energy. There is fission, the release of energy by the splitting of an atom; this happens through either a nuclear reaction or radioactive decay and is used in the commonly known atomic bomb. The other type of reaction, which yields more energy per unit of fuel, is fusion. This is the process that takes place on the sun: the transformation of hydrogen into helium. The reaction can only take place under intense heat and/or intense gravitational force. Inside the sun, the gravity is so strong that it overcomes the repelling forces of the positively charged hydrogen nuclei. In a normal setting, hydrogen nuclei would simply bounce away from each other because they repel. Only in specific places like the cores of stars does this incredible phenomenon take place, thanks to the effects of gravity; without the massive gravitational pull of stars, fusion would never occur. Gravity is also the reason why dust from around the universe eventually collects together to create larger masses like planets, stars, and galaxies. Scientists want to replicate this reaction on Earth because of its enormous energy gain. During the fusion of two hydrogen nuclei, the resulting helium nucleus has less mass than the two nuclei combined, because some of the mass was converted into energy, satisfying the conservation of mass-energy. The amount of energy gained from such a loss in mass can be calculated with Einstein’s equation E = mc²: multiplying the mass lost by the speed of light squared gives the amount of energy released. This means that even the smallest amounts of mass can produce enormous amounts of energy.
The big question is: How did scientists here on Earth figure out how to imitate this marvel of astral conditions? The answer is the Tokamak.
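The E = mc² arithmetic above can be worked through numerically for the deuterium-tritium reaction that ITER plans to use (discussed later in this piece). The atomic masses below are standard published values in atomic mass units:

```python
# Worked example of E = m*c^2 for the deuterium-tritium fusion reaction:
#   D + T -> He-4 + n
# Masses are in atomic mass units (u); 1 u is equivalent to ~931.494 MeV.
M_DEUTERIUM = 2.014102
M_TRITIUM   = 3.016049
M_HELIUM4   = 4.002602
M_NEUTRON   = 1.008665
U_TO_MEV    = 931.494   # energy equivalent of 1 u, in MeV

mass_before = M_DEUTERIUM + M_TRITIUM
mass_after  = M_HELIUM4 + M_NEUTRON
mass_defect = mass_before - mass_after   # mass converted to energy

energy_mev = mass_defect * U_TO_MEV
print(f"Mass lost:       {mass_defect:.6f} u")
print(f"Energy released: {energy_mev:.1f} MeV")   # ~17.6 MeV per reaction
```

Less than 0.4% of the fuel’s mass disappears in each reaction, yet that tiny defect yields about 17.6 MeV, millions of times more energy per reaction than a typical chemical bond releases.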

Here is a video showing the equations that follow the process of fusion.

The Tokamak

The Tokamak is a machine that uses magnetic fields to contain gas in a chamber and heat it into plasma. Soviet physicists built the first tokamaks in the 1950s, shortly after WWII, to see the true effects of nuclear fusion in a controlled environment. The machine creates two different types of magnetic fields – toroidal and poloidal – which keep the plasma contained within the machine and prevent it from touching the walls. The temperature required to stimulate the atoms into fusing is 100 million degrees Celsius, which is why Americans back in 1955 were skeptical of such a scientific accomplishment. The Tokamak is built as a ring around a central coil, and the ring itself is covered in separate coils. The plasma is created by releasing gas into a vacuum chamber and then heating it by running a current through the gas.

A diagram of the Tokamak Ring (https://en.wikipedia.org/wiki/Tokamak#/media/File:Tokamak_fields_lg.png)


The magnetic fields in the Tokamak are created in part by running a current through the plasma itself, which generates the poloidal field wrapping around the plasma. This works because plasma conducts electricity well, owing to its excited state and the free movement of its electrons. Additional currents and magnets around the ring, as well as above and below it, create the toroidal field, which runs horizontally around the ring. The combination of these two fields creates the flow of plasma needed for the right fusion conditions. The heat created by the current running through the plasma provides only about one third of the heat needed to reach 100 million degrees Celsius. This is why scientists have come up with other methods, such as Neutral Beam Injection and Radio Frequency Heating, to supply the rest. Neutral Beam Injection works like a thrown baseball being slowed down by air resistance: scientists shoot neutral hydrogen atoms into the plasma, which is moving at incredible speeds. The ions in the plasma react with the neutral atoms and ionize them in the process. As these ionized atoms slow down, they create friction with other plasma particles and give off energy as heat. Radio Frequency Heating works much like heating food in a microwave: the Tokamak creates additional oscillating currents around the ring and matches the frequency of the current to heat up certain areas of the plasma with high energy absorption. After all of this, the Tokamak will hopefully reach temperatures above 100 million degrees and begin to smash the hydrogen nuclei together.

Here is Discovery Channel’s dramatic take on what goes on in a Tokamak.


Click here to see how the magnetic fields affect the plasma within the Tokamak.

Inside the ITER

Within the ITER Tokamak specifically, the scientists decided to use deuterium and tritium – both isotopes of hydrogen – as the atoms fusing together to create helium. They chose these two isotopes because they create the highest energy gain at the lowest temperatures. This means higher efficiency in terms of how much energy the scientists need to put in versus how much energy is gained from the system. At a temperature of around 150 million degrees Celsius (ITER’s fusion temperature), the deuterium and the tritium will combine to create one helium atom, one neutron, and a great deal of energy.


The energy created will take the form of heat and the motion of the released neutrons. Because these neutrons have no charge, they will leave the plasma and hit the walls of the Tokamak, transferring their motion into heat. In ITER, this heat is absorbed by the walls and then sent to cooling towers to be measured and processed. Because the machine deals with temperatures ten times hotter than the core of the sun, scientists must figure out how to contain this heat and store it as energy. This is one of the biggest problems scientists face in working on ITER, and it is why they need more time before building an operational plant that produces and stores energy.

Hydrogen Production

Because ITER will use large amounts of hydrogen with the ultimate goal of fusing it into helium, the Tokamak will need a constant supply. Hydrogen is all around us in nature, but extracting it from compounds takes time and energy. Hydrogen can be produced in many ways (such as electrolysis), but it is usually created through the process of steam reforming. Natural gases such as methane are mixed with steam at high temperatures, and the hydrogen attached to the CH4 (methane) and H2O (water) gets knocked off. This produces hydrogen and carbon monoxide, which is then mixed with more steam to create carbon dioxide and more hydrogen. ITER itself will not result in CO2 emissions, making it a clean energy source, but the preparation for the nuclear fusion will. Although there are ways to prevent the CO2 from being released into the atmosphere (injecting it into oil fields or natural reservoirs), these measures are not usually taken.
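The two steam-reforming steps described above combine into the overall reaction CH4 + 2 H2O → CO2 + 4 H2, which makes the CO2 cost of each kilogram of hydrogen easy to estimate. This is an idealized stoichiometric sketch; real reformers run off the ideal ratios:

```python
# Stoichiometry sketch of steam reforming, the hydrogen source
# described above. Overall reaction: CH4 + 2 H2O -> CO2 + 4 H2.
# Approximate molar masses in g/mol.
M_H2  = 2.016
M_CO2 = 44.01

moles_ch4 = 1.0
moles_h2  = 4 * moles_ch4   # 4 mol H2 produced per mol CH4 overall
moles_co2 = 1 * moles_ch4   # 1 mol CO2 produced per mol CH4

# CO2 emitted per kilogram of hydrogen produced:
kg_h2  = moles_h2 * M_H2 / 1000
kg_co2 = moles_co2 * M_CO2 / 1000
co2_per_kg_h2 = kg_co2 / kg_h2

print(f"CO2 per kg of H2: {co2_per_kg_h2:.2f} kg")
```

Even in this idealized case, every kilogram of reformed hydrogen carries roughly five and a half kilograms of CO2, which is why the fuel-preparation emissions mentioned above matter.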

Additional Research

The goal of ITER is to produce 500 MW of power from 50 MW of input power; in other words, ITER strives to make ten times the energy that is put into the machine. The record amount of energy produced by a fusion reaction is 16 MW, set at JET (the Joint European Torus) in Culham, UK. That record was set almost twenty years ago, so the experiments have a long way to go before they can produce ten times the input energy. Still, scientists are optimistic, because another tokamak at Culham, the MAST, has produced over “30,000 man-made stars.” The MAST is being upgraded to make the “stars” stronger and hotter so that more energy can be created in the process. These improvements are due to be finished by 2016/17 and will be a great addition to the research ITER needs before its completion. In addition to the MAST, scientists in Culham are in the process of designing DEMO, a reactor that would test the components for an eventual nuclear power plant. The scientists involved have gone through meticulous planning, and what once seemed like a fairy tale might become reality in the future. Below is a diagram showing the research track for the future projects.


Fusion vs. Other Energies

If the goal of ITER is to produce 500 MW from 50 MW of input, the Tokamak will have to produce more energy than is put into the system. Most of the energy going into the system will be used to heat the hydrogen into plasma. The efficiency of the heating system inside ITER will be around 40%, which will bring the temperatures to the required level. ITER itself will not produce electricity, because it is only a research model for DEMO: scientists want to see how they can attain and contain such high temperatures before putting fusion to use for electricity production. Most of the heat from ITER will be transferred to coolant stations, where the incredible amounts of heat will be dispersed. In comparison to a coal power plant, where the efficiency is only about 33%, ITER seeks to maximize efficiency by creating lots of heat with little energy. Also, ITER’s processes alone do not result in CO2 emissions or toxic wastes the way the processes within a coal plant do. Coal makes up around 40% of the energy used in the United States, a fact that will likely change as companies and governments put limits on CO2 emissions.
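The 500 MW from 50 MW goal is usually expressed as the fusion gain factor Q, the ratio of fusion power out to heating power in. Here is that ratio computed for ITER's target and for JET's record shot; the JET heating-power figure (~24 MW) is an assumed value for illustration, as the text only gives JET's 16 MW output:

```python
# Fusion gain factor Q = fusion power out / heating power in.
def q_factor(power_out_mw: float, power_in_mw: float) -> float:
    """Ratio of fusion power produced to heating power supplied."""
    return power_out_mw / power_in_mw

iter_q = q_factor(500, 50)   # ITER's design goal: Q = 10
jet_q  = q_factor(16, 24)    # JET record shot; 24 MW input is assumed

print(f"ITER target Q: {iter_q:.1f}")
print(f"JET record Q:  {jet_q:.2f}")   # below breakeven (Q = 1)
```

Breakeven is Q = 1; under the assumed JET input, the record shot sits well below it, which illustrates how ambitious ITER's tenfold target is.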

DEMO and the Future

The ITER project is merely a way to investigate the science behind fusion reactions on Earth; it is not the last step toward a sustainable energy source. ITER started construction in 2015 and will likely start operations in 2027. In the meantime, scientists are in the conceptual stage of designing a plant (DEMO) that will use nuclear fusion to produce electricity sometime by 2050. These projects work on large time scales because of the resources and knowledge required to set such an experiment in place. ITER alone needed help from more than 25 countries (the EU members, China, India, Japan, Korea, Russia, and the US) and numerous research centers before construction could even start. For DEMO to become reality, the same degree of international cooperation will be needed. The importance of the ITER project is hard to overstate as our energy crisis becomes increasingly apparent. That these countries can work together toward such a goal is admirable, even impressive, but solving this global problem remains a daunting task. If successful, replicating the reactions of the Sun here on Earth will truly be a testament to humanity's understanding of the universe.

Works Cited

“CEA Cadarache – Welcome on Cadarache Center.” CEA Cadarache. N.p., n.d. Web. 18 Sept. 2015. <http://www-cadarache.cea.fr/index_gb.php>.
Discovery Channel. “The Tokamak – How the Universe Works.” YouTube. N.p., n.d. Web. 19 June 2012. <https://www.youtube.com/watch?v=UlC1kHfODyk>.
Elearnin. “Nuclear Fusion | Fusion Energy Explained with Hydrogen Atom Example | Physics Animation Video.” YouTube. YouTube, n.d. Web. 18 Sept. 2015. <https://www.youtube.com/watch?v=Cb8NX3HiS4U>.
“EUROfusion | Eurofusion.” EUROfusion. N.p., n.d. Web. 18 Sept. 2015. <https://www.euro-fusion.org/eurofusion/>.
“Fusion Energy: The Tokamak.” Culham Centre for Fusion Energy. N.p., n.d. Web. 18 Sept. 2015. <http://www.ccfe.ac.uk/tokamak.aspx>.
“Hydrogen Production: Natural Gas Reforming.” Office of Energy Efficiency, n.d. Web. 26 Sept. 2015. <http://energy.gov/eere/fuelcells/hydrogen-production-natural-gas-reforming>.
“Improving Efficiencies.” World Coal, n.d. Web. 26 Sept. 2015. <http://www.worldcoal.org/coal-the-environment/coal-use-the-environment/improving-efficiencies/>.
“ITER – the Way to New Energy.” ITER. N.p., n.d. Web. 18 Sept. 2015. <https://www.iter.org/>.
“Will ITER Make More Energy than It Consumes?” N.p., n.d. Web. 26 Sept. 2015. <http://www.jt60sa.org/b/FAQ/EE2.htm>.



Fracking: A Controversial Source of Energy

What is fracking?
Fracking, short for “hydraulic fracturing,” was invented over seventy years ago.¹ It is a technique that extracts natural gas from unconventional wells. Drillers sink vertical well shafts as deep as 10,000 feet to reach the layer of shale, turn the drill 90 degrees, and then drill a series of horizontal wells extending a thousand feet or more through the shale. They then blast in a mixture of two to four million gallons of water, chemicals, and sand in order to fracture the rock and release the natural gas.

A simple visualization of how fracking works. Credit

In the states of Pennsylvania, New York, and West Virginia, fracking is done in Marcellus shale.² Natural gas wells have been drilled in Marcellus shale for over fifty years, but recent advancements in deep horizontal drilling and fracking have greatly increased the profitability of extracting gas from this shale.

The mixture of water, chemicals, and sand used in fracking is referred to as “fracking fluid.” In Pennsylvania alone in 2011, about 12 to 20 million gallons of water were used every single day in fracking fluid. The roughly 1,500 horizontal fracking wells consumed approximately 0.5% to 0.8% of the water Pennsylvania used daily.²

The large amount of water that fracking requires comes from multiple sources. Fracking in the Marcellus shale draws 72% of its water from sources within Pennsylvania, such as rivers, groundwater, lakes, and creeks. The remainder is acquired from drilling companies, abandoned mines, underground pipelines, and rainwater collected on the well pad. In addition to water, about 1% of fracking fluid is composed of some 50 known chemicals. These chemicals prevent microorganism growth and the corrosion of metal pipes, and maintain the fluid's viscosity (its resistance to flow).³ The sand in fracking fluid props the newly opened fractures open so gas can escape, while friction-reducing additives allow the fluid to be pumped faster and at a lower pressure.
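
The water figures quoted above can be cross-checked against each other. A minimal sketch: if 12 to 20 million gallons per day is 0.5% to 0.8% of Pennsylvania's daily water use, we can back out the statewide total those percentages imply.

```python
# Cross-check of the water figures above: if 12-20 million gallons per day
# is 0.5%-0.8% of Pennsylvania's daily water use, what total does that imply?
low_total = 12e6 / 0.005    # 12 million gal/day taken as 0.5% of the total
high_total = 20e6 / 0.008   # 20 million gal/day taken as 0.8% of the total

print(f"Implied statewide use: {low_total / 1e9:.1f} to {high_total / 1e9:.1f} "
      "billion gallons per day")
# prints 2.4 to 2.5 billion gallons per day
```

The two ends of the range agree with each other, which suggests the cited daily-use and percentage figures are internally consistent.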


What are some of the effects of fracking?
The invention and growing popularity of fracking have greatly affected the oil industry. In October 2013, for the first time since 1995, the United States produced more oil than it imported.¹ The costs of heating and electricity have gone down as cheap shale gas has displaced more expensive fuels such as coal. The United States economy has also benefitted from the increased activity in the gas and oil sector. The Barnett Shale, the shale formation found deep underground in Texas, is the largest producible reserve of onshore natural gas in our country due to fracking.

Because of fracking, there is enough domestic gas in the United States to meet our country's needs for the foreseeable future. The estimated recoverable gas from US shale source rocks using fracking is about 42 trillion cubic meters, which is nearly equal to the total conventional gas discovered in the United States over the past 150 years, and is equivalent to about 65 times current US annual consumption (according to IHS, a business-information company in Colorado). The growth in activity of the American gas industry has greatly benefitted our economy: the industry accounts for nearly 3 million jobs and 385 billion dollars in direct economic activity.⁴ Stable supplies of gas from fracking depend on a steady rate of new well completions, and the resulting gains in employment and economic stimulation count among the benefits of fracking.
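
The 42-trillion-cubic-meter and 65-times figures together imply a specific annual consumption number, which is easy to back out:

```python
# Backing out the consumption figure implied by the IHS numbers above:
# 42 trillion cubic meters of recoverable shale gas, said to equal
# about 65 times current US annual consumption.
recoverable_m3 = 42e12
multiple_of_annual_use = 65

implied_annual_use_m3 = recoverable_m3 / multiple_of_annual_use
print(f"Implied annual US gas consumption: "
      f"{implied_annual_use_m3 / 1e9:.0f} billion cubic meters")
# prints roughly 646 billion cubic meters
```

That figure is the same order of magnitude as published US natural gas consumption, so the two cited numbers hang together.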

Data released by the U.S. Energy Information Administration (EIA) for the years 2005 to 2014 reveal that coal consumption decreased from 1.937330 quadrillion Btu (September 2005) to 1.457655 quadrillion Btu (September 2014), while natural gas consumption increased from 1.456699 quadrillion Btu (September 2005) to 1.880758 quadrillion Btu (September 2014).¹¹ Although this reveals the positive trend that Americans are consuming less coal, the growth in natural gas consumption may not have a positive effect on our environment. I will explain some of the underlying dangers of increased natural gas production and consumption later in this post.

The brown line represents natural gas consumption and the blue line represents coal consumption. Credit

Here are the calculations of coal and natural gas consumption, in Btu per person in the United States, for 2005 and 2014:

2005 U.S. Population: 295,500,000 (Google)

2014 U.S. Population: 318,900,000 (Google)

Coal Consumption:

2005 – 1,937,330,000,000,000 Btu/295,500,000 people = 6,556,108.291 Btu/person

2014 – 1,457,655,000,000,000 Btu/318,900,000 people = 4,570,884.29 Btu/person

Natural Gas Consumption:

2005 – 1,456,699,000,000,000 Btu/295,500,000 people = 4,929,607.445 Btu/person

2014 – 1,880,758,000,000,000 Btu/318,900,000 people = 5,897,641.89 Btu/person
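
The same per-person arithmetic can be reproduced in a few lines, using the EIA totals and population figures quoted above:

```python
# Recomputing the per-person figures from the EIA totals and the
# population numbers quoted above.
population = {2005: 295_500_000, 2014: 318_900_000}
coal_btu = {2005: 1_937_330e9, 2014: 1_457_655e9}   # total Btu
gas_btu = {2005: 1_456_699e9, 2014: 1_880_758e9}    # total Btu

for year in (2005, 2014):
    coal_pc = coal_btu[year] / population[year]
    gas_pc = gas_btu[year] / population[year]
    print(f"{year}: coal {coal_pc:,.0f} Btu/person, gas {gas_pc:,.0f} Btu/person")
```

Per person, coal consumption fell by roughly 30% over the period while natural gas consumption rose by about 20%.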

As this data displays, fracking has affected the way the United States consumes natural gas, which in turn has affected the finances of the country's natural gas consumers. From 2007 to 2013 (a period in which the fracking industry expanded rapidly), consumer gas bills decreased by about 13 billion dollars per year, roughly 200 dollars per year for each gas-consuming household.¹⁰ While many defenders of fracking attribute this positive change entirely to fracking, many factors must be considered when analyzing the natural gas industry's place in our national and global economy. Here is an informational video in which some of the economic benefits of fracking are explained:

However, these benefits come at a steep cost to our environment and to those living near fracking wells. Over 15 million Americans live within one mile of an oil or gas well. Some homeowners living near these wells have experienced pollution of their drinking water by fracking chemicals released from leaking gas wells and disposal ponds, and methane leaks have caused flames to shoot out of some people's kitchen taps.¹ Leaking well heads can cause severe water contamination when flowback fluid (which contains natural salts, heavy metals, hydrocarbons, and radioactive materials from the shale) leaks into streams or sinks into groundwater. Within the first two weeks of fracking, nearly one fifth of the entire amount of fracking fluid used flows back to the surface of the well.⁴ Considering the massive volumes of fracking fluid involved, as discussed earlier, this is a substantial amount of potentially destructive fluid. The river shown below contains radium levels 200 times greater in the area where fracking fluid was disposed (even after being processed by a treatment plant) than in other areas of the river.⁸


A river in Pennsylvania that has high levels of radioactivity due to fracking flowback. Credit


In addition, air quality has been affected by diesel fumes from the truck convoys traveling down rural roads to well pads.¹ Awareness of these dangers has grown, and the gas and oil industry's denials have led to an even greater public outcry. In 2013, a documentary titled Gasland Part II aired on HBO with hopes of “exposing power politics of fracking.” The trailer of this documentary can be viewed below:

One aspect of the fracking controversy is the belief that fracking has caused a spate of man-made earthquakes in Arkansas, Colorado, Oklahoma, New Mexico, and Texas. However, there is little evidence that fracking actually causes earthquakes.⁵ While some seismic activity is triggered by fracking in wells, the U.S. Geological Survey has stated that none of the recent man-made earthquakes are directly related to fracking. Bill Ellsworth, a USGS seismologist, claims, “We don't find any evidence that fracking is related to any of these magnitude 3 earthquakes that we have been studying.”

A perceived benefit of hydraulic fracturing is a decreased dependence on coal. Although fracking reduces coal consumption, evidence suggests that greenhouse gas emissions have not actually fallen, because researchers have discovered a recent increase in the release of methane. Methane escapes from the natural gas wells used in fracking and is a far more potent greenhouse gas than the carbon dioxide released when coal is burned. It is estimated that “3.6–7.9% of the lifetime production of a shale gas well (compared with 1.7–6% for conventional gas wells) is vented or leaked into the atmosphere from the well head, pipelines and storage facilities.”⁴ Therefore, although a decrease in coal consumption curtails carbon dioxide emissions, the methane released still contributes to climate change.
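
Why a few percent of leakage matters so much becomes clearer with a back-of-the-envelope model. This sketch compares the warming contribution of the leaked methane with that of the CO2 from burning the gas that is delivered; the 100-year warming potential of methane (about 34, per IPCC AR5) and the combustion stoichiometry are my assumptions, not figures from this post's sources, and the model ignores everything else in the lifecycle.

```python
# Rough sketch of why leaked methane matters. The 100-year global warming
# potential of methane (~34, IPCC AR5) and the combustion stoichiometry
# (1 kg CH4 burns to 2.75 kg CO2, a 44/16 mass ratio) are assumptions of
# this sketch, not figures from the article's sources.
GWP_CH4 = 34.0          # kg-CO2-equivalent warming per kg of CH4, 100-year
CO2_PER_KG_CH4 = 2.75   # CH4 + 2 O2 -> CO2 + 2 H2O

for leak_rate in (0.036, 0.079):  # the 3.6%-7.9% shale-well range quoted above
    leaked = leak_rate * GWP_CH4                # warming from vented/leaked gas
    burned = (1 - leak_rate) * CO2_PER_KG_CH4   # warming from gas actually burned
    share = leaked / (leaked + burned)
    print(f"leak rate {leak_rate:.1%}: leaks are {share:.0%} of warming impact")
```

Even at the low end of the cited leakage range, the leaked methane accounts for a large fraction of the well's warming impact under these assumptions, which is why switching from coal to gas does not automatically reduce emissions.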

A breakdown of the climate footprint and the growing impact of methane due to shale drilling. Credit
A visualization of the dangerous trapping ability of methane gas. Credit

While some methane is emitted from natural sources, around sixty percent of total methane emissions come from human activities such as industry, agriculture, and waste management, as displayed in the chart below:

U.S. Methane Emissions by Source. (2013)  Credit

As of 2013, the amount of methane released from natural gas and petroleum systems, which include fracking, was nearly three times the amount released from coal mining, and slightly less than twice the amount released from landfills.⁹ As fracking has continued to expand in recent years, the share of methane attributable to it (within the overall natural gas and petroleum systems category) is likely even larger than the pie chart above depicts.


What is the future of fracking?
Fracking provides our nation with the valuable ability to decrease our dependence on other countries for natural gas. It has contributed to the growth of our economy and a decreased dependence on coal as an energy source. Mark Zoback, a geophysicist at Stanford University, claims, “Fracking comes with promise and with risk . . . it's clear that it's a remarkable resource. It's abundant, and as a transition fuel between today and the green-energy future, natural gas really is the answer.” Zoback and a team of scientists surveyed the existing data about fracking and concluded that despite the threats fracking poses to our environment and to the quality of life of American citizens, strictly enforced regulations can help minimize those threats. Regulations could include reducing the amount of toxic chemicals in fracking fluid and closely monitoring the integrity of the well throughout its creation and continued use.⁶ For more information regarding the impending federal regulations for fracking that the Obama administration is currently working on, click here.

Between 2004 and 2011, the United States experienced a 75 percent increase in its natural gas reserves because of fracking.⁷ Given the extreme short-term benefits of fracking, which have changed the way our country acquires and consumes natural gas, it is doubtful that its continued development will be halted any time soon. However, the expansion of fracking in other nations has been a heavily debated topic. In the United Kingdom, a company called Cuadrilla was granted planning permission to begin drilling in Balcombe, West Sussex in early 2013. Strong public opposition eventually deterred the company's progress, and it stopped drilling in the area. Although commercial fracking is not yet occurring in the United Kingdom, an increasing presence of drilling rigs in its rural landscapes is expected.⁷ The debate on the expansion of fracking, not only in the UK but also in the United States and other countries, must take into consideration the opinions of those living near the drills themselves, the effect on the world economy, the impact on climate change from increased methane emissions, and the many other important factors that fracking entails for the human race and the environment we live in.

A young member of the anti-fracking protests in Balcombe. Credit



Sources used:

1. http://www.theguardian.com/environment/2013/dec/14/fracking-hell-live-next-shale-gas-well-texas-us

2. http://exploreshale.org/#

3. http://physics.info/viscosity/

4. http://www.nature.com/nature/journal/v477/n7364/full/477271a.html#ref2

5. http://www.theguardian.com/environment/2012/apr/18/us-earthquakes-fracking-gas

6. http://onlinelibrary.wiley.com/doi/10.1002/scin.5591820519/full 

7. http://onlinelibrary.wiley.com/doi/10.1111/2041-9066.12032/full 

8. http://www.usatoday.com/story/news/nation/2013/10/02/fracking-radioactive-water-pennsylvania/2904829/ 

9. http://www3.epa.gov/climatechange/ghgemissions/gases/ch4.html

10. http://www.brookings.edu/blogs/brookings-now/posts/2015/03/economic-benefits-of-fracking

11. http://www.eia.gov/beta/MER/?tbl=T01.03#/?f=M&start=200509&end=201506&charted=1-2-3-5-12