When People Want to Work: The Correlation of Employee Satisfaction and Absenteeism

By Kriss Gross

Oct 14, 2014


In today’s workforce, many people work simply for the sake of working: to earn a livable wage and take care of their families. Unfortunately, far too many organizations take advantage of an employee’s satisfaction at simply having a job and fail to take the steps necessary to develop employees who enjoy the work they do and the organization they do it for. In contrast, some companies recognize the value of productive, satisfied employees and the financial benefit of keeping those employees on the payroll. So what entices individuals to stay with an employer, beyond the ability to pay the bills and perhaps provide health insurance for their families? For an industrial/organizational (I/O) psychologist, it is important to discover the elements of a company’s structure that make it an organization people strive to stay involved with. When researching companies that retain satisfied employees, the I/O researcher might want to determine whether employee satisfaction correlates with lower absenteeism.

To research this dynamic, the hypothesis is put forth that high employee satisfaction is correlated with low employee absenteeism. To understand why employee satisfaction results in lower absenteeism, the researcher needs to understand the organizational dynamics that produce this relationship. Company structure, management styles, and individual attitudes can all contribute to a company’s people wanting to work. The structural factors related to job satisfaction include the organizational culture, pay and benefits, and promotional opportunities that enticed the employee to seek employment in the first place. Once the employee is hired, management style and employee motivation are the main variables that determine whether he or she stays with the organization.


Independent Variables        

Related research has determined that, among the many variables that influence job satisfaction and its effect on absenteeism, management style is an independent variable (Riggio, 2007, p. 27) that can be manipulated not only by the researcher but by the organization as well. Frooman, Mendelson, and Murphy (2012) addressed this variable in literature discussing how management styles can correlate positively or negatively with levels of absenteeism in organizations. Recognizing that this variable can be improved upon, companies offer management training seminars that teach communication skills, diversity and sensitivity training, and other skills that improve the ways managers interact with their employees.

Dependent Variables

When I/O psychologists research the hypothesis that job satisfaction is linked to lower absenteeism, they must consider how the independent variables will affect the dependent variable of absenteeism. They must also take into account that this variable is twofold: legitimate and illegitimate absenteeism. Legitimate absenteeism is beyond the control of the employee (e.g., illness, child care issues, and family emergencies); illegitimate absences are days taken from work when the employee is not necessarily ill but wants to extend a long weekend or, because it is a nice day, decides to spend time with the children (Frooman et al., 2012, 5b).

Extraneous Variables

There are factors, known as extraneous variables (Riggio, 2007, p. 28), that can influence the outcome of research into the correlation suggested in the hypothesis. These variables could appear in the behaviors of subjects who are aware they are being observed, or motivated participants could desire to be “helpful” when responding to research questions. Another extraneous variable to consider is the length of time an employee has been with the organization, as those with tenure may tend to overlook or ignore circumstances that a newer employee would find discouraging. To help ensure the validity of the research, participants could be randomly assigned to either the control group or the treatment group. Another option is to perform the research using naturalistic observation (Shaughnessy, 2008, p. 100), gathering the data in as natural an environment as possible, with the observer blending in with the environment.

Research Design

To observe workers in a natural environment, the correlational method (Riggio, 2007, p. 30) is used to gather the information the researcher needs to test the posited hypothesis. Because this method does not involve manipulation of the independent variable, the observer need only record information as it occurs. This method is preferred in settings such as the workplace because of the desire to capture natural reactions within the studied environment. An experimental design, on the other hand, would require those being observed to be made aware of an observer’s presence, which could confound the results of the study.

Data Collection Techniques

When employing a correlational method to research the given hypothesis, data collection can be accomplished through several avenues. In narrative recording (Shaughnessy, 2008, p. 110), a trained observer records information as it occurs, such as verbal responses, facial expressions, or body language related to a directive given by a supervisor, as well as the manner in which the directive was given. An example could be recording a subject’s look of exasperation after receiving direction from a supervisor who constantly hovers or needlessly repeats a request. The way the supervisor communicates with subordinates could be recorded as well, i.e., whether the request was made in a respectful, mentoring tone or given in a rude, demanding fashion (Riggio, 2007, p. 32). In conjunction with the natural observations, the researcher can gather archived data from personnel records to obtain a record of absences for the observed employees. Another method could use surveys, which ask participants to answer predetermined questions, usually employing a Likert scale (Vogt, 1999) to measure attitudes, knowledge, perceptions, and values.

Employee Satisfaction

Using the Likert scale to measure employee satisfaction, researchers can analyze the organizational attitudes and perceptions of employees and how they rate their satisfaction with their individual positions, co-workers, company culture, benefits, and promotional opportunities. Of most importance to this research, however, is employees’ overall satisfaction with those in positions of leadership and the effect those leadership styles have on subordinates’ overall satisfaction with their employment. Because this study uses leadership styles as its primary lens on employee satisfaction and its correlation with lower absenteeism, the results attained in the study by Frooman et al. (2012, 2a) offer compatible and supportive data.

Employee Absenteeism        

As an employee’s level of satisfaction is proposed to be a determinant of absenteeism (Riggio, 2007, p. 33), the Likert scale can address the reasons an employee would have legitimate or illegitimate absences. While legitimate absences can be physical or psychological in nature, illegitimate absences can have psychological elements as well, such as an employee needing a “mental health” day to deal with the demands of an oppressive supervisor, a seemingly unmanageable work assignment, or even a dispute with a co-worker. The questionnaire used by the researcher could include a section for open-ended responses, allowing the employee to give more detailed information about specific areas of concern. If a pattern emerges among the respondents, human resources (HR) managers and I/O specialists can address these issues with the supervisors in question. Because the questionnaire is confidential, respondents are more likely to feel at ease sharing the issues behind absences that are less than legitimate in nature.


When scores on one variable vary systematically with scores on another variable, the relationship is referred to as a correlation (Shaughnessy, 2008, p. 45). In this research, the assumption is that job satisfaction correlates with a decreased level of absenteeism. The questionnaire addressed how managerial styles can be an integral influence on employees’ perceptions and, in turn, on their rates of absenteeism. When measuring the degree of correlation, a positive correlation (Shaughnessy, 2008, p. 513) exists when the two variables increase or decrease together; a negative correlation (Shaughnessy, 2008, p. 512) exists when one variable increases as the other decreases. When no correlation exists, one variable has no measurable relationship with the second variable. For this research, a correlation of r = -.70 between job satisfaction and absenteeism indicates a strong negative relationship, i.e., increased job satisfaction was associated with decreased rates of absenteeism, which is what the hypothesis predicts.
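As a concrete illustration of how a coefficient like the r = -.70 above is computed, the following sketch calculates Pearson’s r for a small set of hypothetical satisfaction and absence scores. The data are invented purely for illustration; the point is that when higher satisfaction scores pair with fewer absences, r comes out strongly negative.

```python
from math import sqrt

# Hypothetical data, invented for illustration only: Likert-scale job
# satisfaction scores (1-5) and days absent per year for eight employees.
satisfaction = [4.5, 4.0, 3.5, 3.0, 2.5, 2.0, 1.5, 1.0]
absences = [1, 2, 4, 5, 7, 8, 10, 12]

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between x and y."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

r = pearson_r(satisfaction, absences)
print(round(r, 2))  # strongly negative: as satisfaction rises, absences fall
```

An r near -1 on data like these is what a supported hypothesis would look like; shuffling the absence values to remove the pattern would push r toward 0.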

Problems and Strategies

While the correlational method provides an avenue to assess the relationship between two variables, one shortcoming is the inability to make cause-and-effect inferences. In an effort to draw such conclusions, researchers may make inappropriate inferences and falsely interpret the data. However, using meta-analysis, the data gathered during the research can be used to make decisions to improve or change the existing relationship, by comparing the study’s findings with other studies based on the same hypothesis or topic of concern (Riggio, 2007, pp. 31-35).

Another issue to keep in mind involves the interpretation of the data gathered, ensuring that both the internal validity (controlling extraneous variables) and the external validity (generalizability to other settings) of the research have been taken into account. The research should also abide by the ethical guidelines and principles set forth by the American Psychological Association (APA) (Riggio, 2007, p. 44).


The purpose of this report was to address the supposition that an employee’s job satisfaction correlates with lower rates of absenteeism, along with the elements involved in conducting research to test the posited hypothesis. As supporting research has determined that managerial styles have a direct bearing on employee satisfaction, the questionnaire provided to employees gave them the opportunity to relate their perceptions of and attitudes toward those in supervisory positions. The information gathered can then be analyzed against studies of a similar nature to determine whether this relationship has any bearing on employee absenteeism. The analysis resulted in a correlation of r = -.70, a negative correlation between the two variables, meaning that increased job satisfaction was associated with decreased rates of absenteeism, as hypothesized. Even so, this conclusion leaves open the need for further research into the factors that lead to increasing levels of employee absenteeism.



Frooman, J., Mendelson, M., & Murphy, J. (2012). Transformational and passive avoidant leadership as determinants of absenteeism. Leadership & Organization Development Journal, 33(5), 447-463. Retrieved from http://search.proquest.com.libproxy.edmc.edu/docview/1022684970

Riggio, R. (2007). Research methods in industrial/organizational psychology. In Introduction to industrial/organizational psychology (5th ed.). Retrieved from http://online.vitalsource.com/books/055821715X

Shaughnessy, J., Zechmeister, E., & Zechmeister, J. (2008). Observation. In Research methods in psychology (8th ed.). Retrieved from http://online.vitalsource.com/books/007-7376463

Vogt, W. (1999). Likert scales. In Dictionary of statistics and methodology. Thousand Oaks, CA: Sage. Retrieved from http://www.rpgroup.org/sites/default/files/Surveys%20Interactive%20Activity%20-%20Examples%20of%20Likert%20scales.pdf



Examining Your Community’s Source of Energy

By Kriss Gross

October 21, 2015


Jacksonville, NC, has two main electrical power companies supplying its residents: JOEMC and Duke Energy. The three main sources of power for North Carolina are coal-fired (40.2%), nuclear (28.0%), and natural gas-fired (25.7%) (EIA, 2015). Due to its location, Jacksonville has no power plants in its immediate vicinity. The nearest coal-fired plant is in Elizabethtown, NC, approximately 80 miles southwest of Jacksonville. The next closest facility is the Brunswick Nuclear Plant, 82 miles due south. The final power plant is the natural gas-fired Sutton Steam Plant in Wilmington, 56 miles south. None of these facilities is close enough to have any direct bearing on the air quality in Jacksonville. The only local indication of a power plant is the billows of steam one might see on the horizon when driving south on Hwy 17.

The Brunswick Nuclear Plant, part of Duke Energy, is located on a 1,200-acre site west of the Cape Fear River, near Southport, North Carolina. Each unit has a General Electric boiling water reactor with a generator rated at 821 MWe. The intake canal is 2-1/2 miles long; the discharge canal is 5-1/2 miles long and discharges 2,000 feet offshore into the Atlantic Ocean. Commercial operation began in 1975 for Unit 2 and in 1977 for Unit 1 (Nuclear Tourist, 2006).

Back in 1939, the Tidewater Power Co. refused to sell power to the then Jones-Onslow Rural Electric Association (REA); thus, the Jones-Onslow REA was forced to build its own generating plant. The plant was constructed near the main gate of Camp Lejeune and was later sold to the United States government. JOEMC is a distribution-only company that is part of an electric cooperative. It does not generate any power and has been providing electrical service to the area for over 76 years (JOEMC, 2015).

Duke Energy (2015) has been supplying power in North Carolina for over 100 years and has been using coal to power its plants since 1911. The first coal-fired plants were used to supplement the company’s hydroelectric plants; then, in the 1920s, demand for electricity outgrew the available hydroelectric generation. Duke Energy shifted to coal as its primary energy source in 1926, when the Buck Steam Station in Spencer, NC, began producing electricity. Burning coal, the Buck Steam Station raised capacity from the 60 megawatts supplied by the Wylie Hydroelectric Station to 369 megawatts.

Considering the available sources of electric power today, this author’s household consumed 871 kWh for August/September. However, when monthly usage was averaged over the year using online bill data, the average jumped to 2,087 kWh per month, for an annual usage of 25,247 kWh. Scaling that figure to the 17,216 occupied homes in Jacksonville (City-Data, 2015) yields a staggering 434,652,352 kWh, or roughly 435 gigawatt-hours (GWh), per year.
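The household-to-city scaling above can be checked with a few lines of arithmetic. The figures are the ones quoted in the paragraph, and the result is an energy quantity, gigawatt-hours, rather than gigawatts of power (1 GWh = 1,000,000 kWh):

```python
# Figures quoted in the text: one household's annual usage, scaled to
# City-Data's count of occupied homes in Jacksonville.
annual_kwh_per_home = 25_247
occupied_homes = 17_216

city_kwh = annual_kwh_per_home * occupied_homes
city_gwh = city_kwh / 1_000_000  # 1 GWh = 1,000,000 kWh

print(city_kwh)         # total annual kWh across all occupied homes
print(round(city_gwh))  # the same energy expressed in gigawatt-hours
```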

The city of Jacksonville has just over 78,000 residents, among them the military families of Marines and Navy personnel aboard Marine Corps Base Camp Lejeune. Wherever there is a large military presence, host cities do their best to ensure that these families’ needs are met in the local community. As such, Jacksonville has many shopping centers, strip malls, and “big box” stores, such as Target, Walmart (2), Sam’s Club, Kohl’s, Dick’s Sporting Goods, Best Buy, Home Depot, Lowe’s (2), and more. What does all this translate to? Rooftops; a lot of open, empty, sun-soaked rooftops.

The Environment North Carolina Research & Policy Center (hereafter, the Center) is an organization whose mission is to “investigate problems, craft solutions, educate the public and decision-makers, and help North Carolinians make their voices heard in local, state and national debates over the quality of our environment and our lives” (Schneider, Burr, & Ouzts, 2014). Capitalizing on North Carolina’s (NC’s) more than 250 sunny days per year, the Center has proposed installing solar panels on the abundant, underutilized rooftop space of the thousands of shopping centers and big box stores across the state.

The use of solar energy to produce electrical power in NC is slowly finding its way to the forefront, and utilizing the available space on commercial rooftops is an ecologically and economically smart move. Solar is not the only viable renewable energy source in NC; wind power is also an abundant, nearly constant provider. Between these two renewable and effectively endless resources, there is no need to mine for coal, frack the land for natural gas, or drill for petroleum. There is also no need to cut down more trees or plant more crops for biomass, although those energy sources are certainly better for the environment than coal, natural gas, and petroleum.

Continuing with the Center’s proposal for solar, the organization has clearly put concerted effort into this concept. NC already ranks fifth in the nation for its use of solar as an energy source and placed second in 2014 for the number of solar installations that year (SEIA, 2014). The 396.6 MW installed in 2014 is enough to power the entire city of Jacksonville, as well as many of the smaller neighboring communities.

In August 2007, North Carolina became the first southeastern state to adopt a Renewable Energy and Energy Efficiency Portfolio Standard (REPS), also known as a Renewables Portfolio Standard (RPS). The North Carolina REPS requires investor-owned electric utilities to meet 12.5% of their retail electricity sales through renewable energy resources or energy efficiency measures by 2021. Electric cooperatives, like JOEMC, and municipal electric suppliers must source 10% of the electricity they distribute from renewable supplies by 2018. Solar-generated energy sales are required to reach 0.2% by 2020 for investor-owned utilities. Furthermore, the REPS sets statewide targets for energy recovery and for electricity derived from swine waste and poultry waste, as these industries are among the largest contributors of damaging waste (EIA, 2015).

Just how realistic is the mass implementation of solar in NC? It is very realistic, almost to the point of being obvious. When power companies continue to rely on environmentally damaging fossil fuels like coal, they essentially imply that they do not care about future generations, only about the money they save by continuing to burn those fuels. Herein lies the problem: for states like NC that have the capacity to use solar as their number one energy source, it is critical to have a strong renewable energy policy in place. Unfortunately, the current policy expires at the end of 2015 (EIA, 2015). Even more damning is the fact that utility companies do not want their customers to produce their own power, since every kilowatt-hour not pulled from the grid costs the company money; not to mention the money they must pay those customers for feeding excess energy into the grid. For the 24 percent of Onslow County residents who make less than $25,000 annually (OCDC, 2015), high electric costs can mean the difference between paying a light bill, buying groceries, or buying needed medicines.

Getting back to the Center’s push to install solar on empty rooftops, there is no doubt that it will be an uphill battle, but one worth fighting. One big part of the effort is getting the consumer involved, because without a paying customer no one wins. In what might seem an unlikely alliance, solar installers in the Chapel Hill area have joined forces with local religious organizations. Many religious buildings, like churches and synagogues, have very large roofs that would support large solar arrays, allowing these usually nonprofit organizations to lower their utility bills. It is also increasingly common to see solar installations on NC university campuses, like the one at NC State University (Jeffrey, 2015). These alliances suggest that consumers are more and more likely to support solar power as they see firsthand the negative impact that burning fossil fuels has on their communities. Although Jacksonville is not directly affected by pollution from coal-fired plants, the residual and indirect impact is seen in increasingly high electric bills, bills that will only get worse with the rising temperatures beginning to accompany climate change.

The installation of solar on a massive scale, like that proposed by the Center, would reduce the kilowatt-hours (kWh) consumers draw from local power distribution hubs. If big box stores installed 3,000 MW of rooftop solar capacity, they would generate more than 4 million megawatt-hours (MWh) of electricity annually, which could offset the total annual electricity used by these buildings by as much as 60 percent. Additionally, replacing this massive amount of “dirty electricity” (Schneider, 2014) with solar would prevent 3 million metric tons of climate-changing pollution from entering the atmosphere, the equivalent of taking 600,000 passenger vehicles off the road. This reinforces how critically important it is that consumers take a stand, demand that the RPS be renewed, and push to increase the amount of renewable energy used to generate power in NC.
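One way to sanity-check the report’s figures is to compute the capacity factor they imply, i.e., the fraction of the year the panels would effectively run at full rated power. The numbers below are the ones quoted above; a result around 15 percent is plausible for fixed rooftop solar:

```python
installed_mw = 3_000     # proposed rooftop capacity, from the text
annual_mwh = 4_000_000   # projected annual generation, from the text
hours_per_year = 8_760

# Implied capacity factor: actual annual energy divided by the energy
# the panels would produce running at full rated power all year.
capacity_factor = annual_mwh / (installed_mw * hours_per_year)

# Implied CO2 savings per vehicle-equivalent, from the 3 million
# metric tons / 600,000 vehicles figures quoted above.
tons_per_vehicle = 3_000_000 / 600_000

print(round(capacity_factor, 2))  # fraction of the year at full power
print(tons_per_vehicle)           # metric tons of CO2 per vehicle
```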

According to the National Renewable Energy Laboratory (NREL), “North Carolina has enough properly oriented and available rooftop space to install 23 gigawatts (GW) of rooftop solar capacity—enough to supply the equivalent of 21 percent of the state’s 2012 electricity use” (Schneider, 2014). On a national scale, the price of solar photovoltaic (PV) modules declined steeply in 2012, by an average of 41 percent, from $1.15/watt to $0.68/watt, and average installation costs also fell by 16 percent, a drop unmatched by any other power-generating technology. While the initial investment in solar can seem daunting, the tax incentives and eventual payback are worth the expenditure.
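The 41 percent figure follows directly from the two module prices quoted:

```python
old_price = 1.15  # $/watt, starting 2012 module price from the text
new_price = 0.68  # $/watt, after the decline
decline_pct = (old_price - new_price) / old_price * 100
print(round(decline_pct))  # about 41 percent, matching the text
```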

The biggest payback comes in the overall reduction of carbon-emitting pollution from coal-fired power plants. As solar and other renewables replace the use of fossil fuels, the costs of using renewables will most likely fall, just as CO2 emissions will fall. North Carolina’s current energy policies have been a chief component of its leadership standing among top solar-producing states. To retain that position, however, the state will need to renew or maintain the following policies in order to continue developing and improving its rooftop solar market (Schneider, 2014):

  1. Enable third-party sales of electricity.
  2. Fairly compensate large solar energy producers in power purchasing agreements.
  3. Improve the state’s net metering laws.
  4. Extend incentives for investing in solar technologies in North Carolina.
  5. Reduce siting, permitting, and interconnection restrictions.
  6. Defend and strengthen the state’s renewable energy standard.

Enacted on June 20, 2002, the “Clean Smokestacks Act” (Session Law 2002-4) is a critical measure in protecting the quality of North Carolina’s air. The intended impact of the bill is an overall 75% reduction of sulfur dioxide and nitrogen oxide emissions from coal-fired plants by 2013. The largest contributors, Duke Energy and Progress Energy, had spent a combined $2.8 billion as of 2008 to become compliant. In spite of these measures, however, future carbon dioxide emissions are projected to continue rising. Improvements in overall air quality can only be realized if these and other fossil-fuel-burning companies change their mode of operation, reducing those practices and increasing their involvement in nuclear energy, hydropower, solar energy, wind energy, biomass energy sources, and/or clean-coal power plants (Appalachian State University Department of Technology & Energy Center (ASU), 2010).

The environmental and economic benefits of adding or converting to renewable energy are many, starting with the most important: generating energy without the greenhouse gas emissions of fossil fuels, while reducing some types of air pollution. This aspect alone has the most powerful impact, in that improved air quality means a reduction in respiratory and other illnesses, such as asthma, cardiovascular disease, adverse pregnancy outcomes, and even death. Improved health is complemented by economic development and the new jobs in manufacturing, installation, and other fields that result from this growing industry (NIH, 2015).

In the end, the biggest beneficiary is the climate as a whole. By reducing the poisonous emissions created by coal and other fossil fuel facilities, the damage to the global climate will decrease. By using renewable energy, replanting decimated forests, and attending to the process of cleaning up the mess that has been made of this planet, there is hope for future generations of people, as well as for the wildlife and the habitats in which we all live.



Appalachian State University Department of Technology & Energy Center. (2010). North Carolina state energy report. North Carolina Energy Policy Council; North Carolina Energy Office. Retrieved from https://www.nccommerce.com/Portals/14/Documents/Publications/ANNUAL%20NC%20ENERGY%20REPORT%20final%20feb%202010%20v2-1.pdf

City-Data.com. (2015). Jacksonville, NC. Retrieved from http://www.city-data.com/housing/houses-Jacksonville-North-Carolina.html

Jeffrey, J. (2015). Shunned by big utility, N.C. solar installers turn to religious leaders for support. Triangle Business Journal. Retrieved from http://www.bizjournals.com/triangle/news/2015/08/26/nc-solar-installers-religious-support.html

JOEMC. (2015). Our history. Jones-Onslow Electric Membership Corporation. Retrieved from https://www.joemc.com/the-cooperative/about-us/our-history/

National Institute of Environmental Health Sciences (NIH). (2015). Air Pollution. U.S. Department of Health and Human Services. Retrieved from http://www.niehs.nih.gov/health/topics/agents/air-pollution/

Nuclear Tourist. (2006). Brunswick Nuclear Plant. Retrieved from http://www.nucleartourist.com/us/brunswick.htm

Onslow County, North Carolina Data Center (OCDC). (2015). Quarterly update. Onslow County Planning & Development Department. Retrieved from http://www.onslowcountync.gov/ OT_data_center%20(1).pdf

Schneider, J., Burr, J., & Ouzts, E. (2014). Solar on Superstores: How Commercial Rooftops Can Boost Clean Energy Production in North Carolina. Environment North Carolina Research & Policy Center. Retrieved from http://www.environmentnorthcarolina.org/sites/environment/files/reports/NC_SolarRoof_scrn.pdf

Solar Energy Industries Association (SEIA). (2014). 2014 top 10 solar states. Retrieved from http://www.seia.org/research-resources/2014-top-10-solar-states

U.S. Energy Information Administration (EIA). (2015). North Carolina state energy profile. Retrieved from http://www.eia.gov/state/print.cfm?sid=NC

Recycling, Reducing, and Reusing

By Kriss Gross

October 14, 2014

Author’s note: Due to the nature of this paper, some sections are in first person, as they directly reflect personal recycling habits; a one-week recycling log was kept as a requirement of the assignment.

Recycling, Reducing, and Reusing

As of 2014 census estimates, the population of the city of Jacksonville, NC, was 69,047; however, this does not include the outlying areas of Onslow County, where I reside, which has a total population of 187,589 residents. I have four 30-gallon collection bins: one for trash; one for glass, plastic, and cardboard; one for aluminum cans; and one for tin cans (labels removed). During the week of logged recycling, my household filled one large paper grocery sack with recyclables. Plastic, glass, and cardboard are picked up curbside; however, aluminum and tin cans are sorted separately and taken to metal recycling for cash. The total logged collection for the week was as follows:

To recycling bin:

  • 11 #1 plastic soda bottles, creamer bottle
  • 3 #2 plastic bottles
  • 1 #3 milk jug
  • 1 #5 empty medicine bottle
  • 1 #7 baby food tub
  • 1 glass drink bottle
  • 2 cardboard food boxes

To recycling center:

  • 2 tin soup cans
  • 5 aluminum cans

Recycling is picked up curbside every Thursday evening, and the aluminum and tin are placed in a separate 50-gallon bin outside until it is full; then it is taken to metal recycling. All told, the week of logged recycling likely weighed around 5 pounds. If every resident of Onslow County recycled the same amount, that would remove 937,945 pounds (about 470 tons) of refuse from the landfill in just one week. In one year the total would be 24,387 tons.
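The county-wide projection above can be reproduced with the same arithmetic. The 5 pounds per resident per week is the rough estimate from the log, and tons here are short tons of 2,000 pounds:

```python
pounds_per_week = 5   # rough weight of one week's logged recycling
residents = 187_589   # Onslow County population cited above

weekly_pounds = pounds_per_week * residents
weekly_tons = weekly_pounds / 2_000  # short tons
annual_tons = weekly_tons * 52

print(weekly_pounds)        # pounds diverted county-wide in one week
print(round(weekly_tons))   # just under 470 tons per week
print(round(annual_tons))   # about 24,387 tons per year
```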

One local material recovery facility (MRF) is operated by Sonoco. The rate of recycling at the facility rose from 950 tons per month in 2011, under a previous contractor, to 3,000 tons per month in 2012 under Sonoco. These figures represent both residential curbside and business customers. Sonoco general manager Ray Howard shared that the company, founded in 1899, operates six facilities and serves over 125 communities, annually collecting more than 3 million tons of old corrugated containers, various grades of paper, glass, metals, and plastics. The facility also offers tours to local students three days a week, which help educate younger generations about the importance of recycling and about what happens to recyclables after they leave the curbside bin (Kay, 2013).

The improved numbers shared by Sonoco’s GM are evidence that more people are taking responsibility, as well as the necessary steps, to reduce landfill levels. The reduced amount of trash observed along American highways also suggests that more people are beginning to care about the environment. In the Midwest, where I formerly lived and where there is a bottle deposit, the number of cans and bottles along area highways and county roads had diminished significantly. Determining whether the people in this community recycle enough, however, is a subjective question, one based on perspective. What is enough? The answer depends on whom one asks; although if one asks a resident of Sweden, 99 percent is the answer one might get.

Sweden, a country slightly larger than the state of California with 9.8 million people, is only one percent away from zero waste, with 99 percent of all household waste recycled in one form or another. Sweden’s recycling programs reflect an entirely different perspective on waste management: the share of household waste recycled rose from 38 percent in 1975 to 99 percent today. The country has accomplished this amazing feat through several government- and industry-backed programs. Weine Wiqvist, CEO of the Swedish Waste Management and Recycling Association, believes Swedes can still do more. Considering that about half of all household waste is burned to produce energy, he submits that reusing materials or products takes less energy than burning one product and making another from scratch (Fredén, 2015).

Government involvement in yet another aspect of American life is not an option many wish to consider; however, with the establishment of the Environmental Protection Agency (EPA) in 1970, the government has been involved in environmental matters for nearly half a century. While there are no federal laws mandating recycling, as this has been left up to individual states, government offices do have mandates requiring the purchase and proper disposal of recycled and recyclable materials. The EPA also has regulations covering hazardous waste and landfills and is setting recycling goals that exceed current estimates of 35 percent. The Resource Conservation and Recovery Act (RCRA), administered by the EPA (2015), protects communities and promotes resource conservation. The EPA implements the RCRA by developing regulations, guidance, and policies that ensure the safe management and cleanup of solid and hazardous waste, as well as programs that encourage source reduction and beneficial reuse; the operative word here is encourage (Miller, n.d.).

According to the Northeast Recycling Council, Inc. (NRC) (2011), forty-seven of the fifty U.S. states and the District of Columbia report that they have, at a minimum, banned the disposal of certain items at their solid waste facilities; Wyoming, for example, has banned the disposal of lead acid batteries. Of these 47 states, only 19 have mandatory recycling of at least one commodity, and “Bottle Bill” laws have been in effect for many years in 11 states, including California (1986), Connecticut (1978), Iowa (1990), Michigan (1976), and Vermont (1972). The following are the most commonly banned items and the number of states enforcing such bans (NRC, 2011):

  • Lead acid batteries 43
  • Waste oil 34
  • Tires 31
  • Untreated Infectious Waste 29
  • CRTs 22
  • Mercury containing products 20
  • Liquid wastes 19
  • Yard Waste (grass) 19
  • Yard Waste (leaves) 19
  • Computers 19
  • Ni-Cad batteries 18

In jurisdictions that have mandatory recycling programs, the following are the items most common among them and the number of states enforcing the mandates (NRC, 2011):

  • Lead acid batteries 13
  • Corrugated cardboard 9
  • High-grade office paper 9
  • Aluminum & tin cans 8
  • Waste oil 9
  • Glass containers 8
  • Newspaper 8

As for North Carolina (NC), which has no mandatory recycling laws or regulations, there are disposal regulations, some of which have been in force since 1989. The most recent disposal ban, which became effective as of July 1, 2011, applies to televisions and computer equipment. NC promotes the use of public electronics recycling programs, electronics manufacturer recycling programs, recycling electronics through retail outlets, and charitable donations. The items banned in NC landfills include ABC containers (hard liquor is only sold at state-operated stores), electronics, fluorescent lights, mercury-containing thermostats, oil filters, plastic bottles, and wooden pallets (DEACS, n.d.).

Before explaining how recycled material is reused, it is important to understand the process materials go through first. As curbside recycling has gained momentum, the single-stream, or “zero-sort,” recycling of the 1990s has also gained traction. Although some community recycling programs still require separation of recyclables at the consumer level, single-stream consumers use a single bin for all their recyclable materials, allowing the system to be flexible and making the addition of newly accepted materials easier without adding another pickup bin (Germain, 2013).

Yet, if it is considered zero-sort, how does it get sorted? The modern technology used in more than 200 single-stream Materials Recovery Facilities (MRFs) nationwide allows for the automated sorting of most recyclables. Outside what look like large industrial buildings, the collection trucks line up, waiting to be directed inside the facility. Once inside, the trucks are emptied and a loader transfers the materials onto a conveyor belt for the beginning of the sorting process. In most MRFs the conveyor has a series of screens to separate items such as cardboard or newspapers from the containers. The containers continue along the conveyor to a “manual sort,” where workers remove any trash or incorrectly sorted materials (Germain, 2013).

As the conveyor continues, magnets pull off steel items, which are sent to a container. Because aluminum cans are not magnetic, they are separated with an eddy current separator, which induces electric eddy currents in the cans; these currents generate a magnetic field that repels the cans off the conveyor into a waiting container. Industrial machines called air classifiers use air to sort objects of different sizes and densities, floating the remaining plastic over a gap in the conveyors and allowing heavier glass items to fall into containers below (Germain, 2013).

Plastic items require further sorting and are separated by optical scanners that detect the various types of plastic and blow them onto the correct conveyors for final baling. When sorting is complete, the recycled materials are baled, shredded, crushed, or compacted before being shipped to manufacturers to be repurposed into new products (Germain, 2013).
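The staged sorting described above can be sketched as a simple routing function. This is only an illustrative model, with hypothetical item and bin names that are not drawn from Germain (2013); a real MRF separates materials physically, by screens, magnets, eddy currents, and air, rather than by reading labels.

```python
def sort_single_stream(items):
    """Route each item to a bin, mimicking the order of the MRF stages."""
    bins = {"fiber": [], "steel": [], "aluminum": [],
            "plastic": [], "glass": [], "trash": []}
    for item in items:
        material = item["material"]
        if material in ("cardboard", "newspaper", "paper"):
            bins["fiber"].append(item["name"])      # screens pull out fiber first
        elif material == "steel":
            bins["steel"].append(item["name"])      # magnets remove ferrous metal
        elif material == "aluminum":
            bins["aluminum"].append(item["name"])   # eddy currents repel aluminum
        elif material == "plastic":
            bins["plastic"].append(item["name"])    # air classifiers float plastic
        elif material == "glass":
            bins["glass"].append(item["name"])      # heavier glass falls through
        else:
            bins["trash"].append(item["name"])      # manual sort catches the rest
    return bins

# Hypothetical truckload of single-stream recyclables.
stream = [
    {"name": "soup can", "material": "steel"},
    {"name": "soda can", "material": "aluminum"},
    {"name": "milk jug", "material": "plastic"},
    {"name": "jar", "material": "glass"},
    {"name": "box", "material": "cardboard"},
    {"name": "garden hose", "material": "rubber"},
]
print(sort_single_stream(stream))
```

The point of the sketch is the pipeline order: fiber is screened off before the container stages, and anything unrecognized falls out at the manual sort, just as workers pull trash from the line.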

Now that the recyclables have been sorted and baled, the process of reinventing the materials can begin, starting with paper, which is often considered problematic. Because not all paper is the same, higher-quality white office printer paper can be used to make more high-grade white recycled paper, while old newspapers, office paper, junk mail, and cardboard are used to make lower-grade paper products such as newsprint (Germain, 2013).

The majority of household metal waste is made from steel and aluminum. Steel cans and aluminum drink cans are melted down and turned into new food and drink cans. Because mining and refining aluminum is an energy-intensive and environmentally harmful process, recycling these items is more cost effective and environmentally friendly (Germain, 2013).

The easiest material to recycle is glass, as it is simply melted down and new glass is made. Glass is also an example of recycling before the concept became an environmental concern. Bottle banks, large containers where used glass was collected, were considered the original examples of community recycling in many countries (Germain, 2013).

The biggest item to be recycled is plastic; it is also the most problematic. Because plastic lasts so long in the environment without breaking down, it is also one of the least environmentally friendly materials. Its lightweight properties allow it to float across oceans, ending up on beaches and shorelines, where it is responsible for the injury and death of many animal species. Plastic is also difficult to recycle, as there are so many different types, and because of the mass amounts of this material, it has little real value. The issues that make plastic a problem to recycle are also the reasons it is important to do just that. Reducing this environmental threat also reduces the amount of virgin materials needed to make other plastic items: plastic drink bottles can be recycled into insulation for thermal coats and sleeping bags, and thicker plastics can become flower pots and plastic pipes (Germain, 2013).

Returning to Sweden and its recycling accomplishments, the Swedish government sponsors advertising that encourages its citizens to recycle and educates them about the benefits of doing so. In response, Swedes separate their newspapers, plastic, metal, glass, electric appliances, light bulbs, and batteries, and in many cities consumers separate food waste, which is reused, recycled, or composted. Continuing the process, Swedes turn newspapers into paper mass, bottles are reused or melted into new items, plastic containers are reduced to raw material, and food waste is composted to become soil or converted to biogas through a complex chemical process. These measures are just a few of the steps that have led Sweden to its near “zero-waste” goal, something other countries can learn from to improve their own communities and environments (Fredén, 2015).

As a consumer and someone who recycles, I know there is always more that can be done to decrease the amount of material that ends up in the landfill. An individual’s impact on creating renewed resources from recycled items depends on the level of dedication and concern one has for the environment. By purchasing items made from recycled materials, by purchasing items that are environmentally friendly and biodegradable, and by continuing to reduce the amount of materials in the waste stream, individuals can have a positive impact on the recycling process.



Division of Environmental Assistance and Customer Service (DEACS) (n.d.). Banned materials. North Carolina Department of Environmental Quality. Retrieved from http://portal.ncdenr.org/web/deao/recycling/banned-materials

Environmental Protection Agency (EPA). (2015). Resource Conservation and Recovery Act (RCRA) Overview. Retrieved from http://www2.epa.gov/rcra/resource-conservation-and-recovery-act-rcra-overview#how does rcra work

Fredén, J. (2015). The Swedish recycling revolution. Swedish Institute. Retrieved from https://sweden.se/nature/the-swedish-recycling-revolution/

Germain, A. (2013). The Story of Modern Single-Streaming Recycling. Inside Science Minds. Retrieved from https://www.insidescience.org/content/where-does-it-go/1457

Kay, L. (2012, September 30). $2 million in improvements at recycling center. The Daily News, Jacksonville, NC. Retrieved from http://www.jdnews.com/article/20120930/News/309309946

Miller, C. (n.d.). States lead the way: Pioneering recycling efforts in the US. National Solid Wastes Management Association. Retrieved from http://www.waste-management-world.com/articles/print/volume-7/issue-5/recycling-special/states-lead-the-way-pioneering-recycling-efforts-in-the-us.html

Northeast Recycling Council, Inc. (NRC). (2011). Disposal bans & mandatory recycling in the United States. Retrieved from https://nerc.org/documents/disposal_bans_mandatory_recycling_united_states.pdf

U.S. Census Bureau. (2015). Jacksonville, North Carolina. U.S. Department of Commerce. Retrieved from http://quickfacts.census.gov/qfd/states/37/3734200.html

Woodford, C. (2014). Recycling. Explain That Stuff. Retrieved from http://www.explainthatstuff.com/recycling.html

Environmental Issues and the Industrial Revolution

By Kriss Gross

September 21, 2015

The slow and steady destruction of this planet began long before the industrial revolution, although that era takes the brunt of the blame for our planet’s ills. The agricultural revolution, which occurred 10,000 years prior, and the population growth that ensued are as much to blame. The industrial revolution was the beginning of an era that witnessed the human capacity to improve the way of life: improving production with power-driven rather than human-driven machinery, improving transportation with the use of locomotives, and improving overall commerce and economic conditions (McLamb, 2013). However, improving their way of life resulted in population growth and urbanization, which led to deforestation, air and water pollution, and even disease.

It began with the use of what were thought to be inexhaustible fossil fuels like coal, then natural gas and oil. These new fuels fostered the innovations that replaced simple hand tools with power tools, and then with the coal-driven machinery of the steam engine, steel and textile mills, and other manufacturing entities. Unfortunately, all of these improvements led to population increases beyond those of the agricultural revolution, and a rising population in the growing cities meant there was a need for quick, cheap housing for the workers who came to work in the factories and foundries. In Britain, however, the growing cities were ill prepared to handle the sanitation issues of this rising population. Worse, the pollution from the coal-burning factories filled the air and left these densely populated areas covered in soot and grime that coated the streets, homes, and workers’ clothing (McDougal Littell, 2008).

The agrarian, or agricultural, era depended heavily on wood for housing, tools, and heat for homes. But as simple craftsmen’s work was replaced by factories, wood was needed to build and fuel the new machinery, and the need to feed an increasing population meant that forests were cleared to make room for crops. This resource became even more depleted as glass and soap makers required large quantities of wood ash in their production processes. Even more demanding were the growing iron and shipbuilding industries; thus, by the time of the industrial revolution, the search for a new source of energy production had become critical (Elwell, n.d.). But the damage had already been done, as the lack of trees compounded the problems caused by carbon emissions. Without the forests that would absorb CO2 and emit oxygen, the air became even more polluted, a concern that persists today (Eco-Issues.com, 2012).

The cramped areas where workers were housed were cheaply built, many from slate, which seeped water. Workers who lived in the lower levels of the housing units often suffered from these damp, cold surroundings. Because the dwellings were not built with toilets or washrooms, bathing was done in a tin bath with water pumped from a community well, buckets of rain water, or water from a nearby river; although for many, there was simply no bathing at all. Worse still was the lack of any system of sanitation: this bathwater and all manner of household waste were thrown into the courtyard areas between the buildings. Toilets were essentially cesspits, and during the night men referred to as the “night men” would come and clean up the discarded waste, which was then dumped into nearby rivers. These rivers eventually led into the River Thames, the source of much of the area’s drinking water. There was also contamination of the community wells due to runoff and seepage from the courtyards and cesspits (Farshtey, n.d.).

The contaminated waters led to disease, most notably cholera, an infectious and often fatal disease of the small intestine that causes vomiting, diarrhea, and dehydration. Tuberculosis, typhus, and typhoid were rampant in the small, cramped industrial communities that housed the working poor. These diseases led to tens of thousands of deaths; after the first cholera outbreak in 1831, the disease killed over 10,000 London residents in 1849 alone (BU, n.d.).

Infectious disease was compounded by the archaic medical practices of trained doctors and “untrained quacks,” who used still-popular medieval remedies such as bloodletting and leeching. Doctors also treated the ill with potions containing toxic ingredients like mercury, iron, or arsenic, and they encouraged vomiting and laxatives, which resulted in dehydration and the early deaths of infants and young children, who lose water more quickly than adults (McDougal Littell, 2008).

While all the aforementioned factors had a direct and immediate effect on the people who lived in that era, the progress that began during the industrial revolution continues to have lasting, global environmental effects today. The improvement of production with power-driven rather than human-driven machinery was first seen in the cotton mills, with the introduction of the spinning jenny, the flying shuttle, the water wheel, and Crompton’s mule. These innovations increased the amount of cotton that could be woven into yarn and then cloth. To operate the machines, the factories were located next to fast-moving rivers, where the water was used to power the machinery. With the introduction of coal as an energy source and the use of Watt’s improved version (McLamb, 2013) of Thomas Newcomen’s coal-powered steam engine, these factories moved to the cities, where an abundance of labor was more readily available; these were the beginnings of what became the “factory system” (Easton, et al., 2014). However, these coal-burning factories expelled noxious coal smoke and soot into the air. Despite the rising levels of pollution, it was also these textile mills that hired many of the unskilled workers arriving in the cities, which raised the populations and led to the aforementioned squalid living conditions in the urban areas around the factories and the ensuing pollution of the rivers from the nonexistent sanitation systems.

Due to the increased demand for textiles and the limited local resources with which to make them, innovations in transportation made it possible for factory owners to import resources, such as cotton from the southern regions of the United States and India, much faster than with sailing ships. There were also increases in demand for the materials needed in the iron foundries, a growing industry that would lead to the design and manufacture of locomotives, railways, steamships, bridges, and buildings. However, the process of turning iron ore into iron used massive amounts of wood to generate the necessary heat, putting further strain on already dwindling forests. In 1709, Abraham Darby, an iron producer, developed a way to bake coal into a substance called coke, a smokeless fuel that burned hotter than coal, replacing the charcoal that had previously been made from wood. With this newfound use for coal, iron mills were built near coal mines, resulting in strong ties between the two industries (Easton, et al., 2014).

New uses for iron were discovered as the innovations from this industry grew, and iron was used to make utensils, cooking implements, and building materials for factories and housing, changing the way buildings were designed. Design innovations saw spectacular feats of architecture, such as London’s Crystal Palace in 1851 and the Eiffel Tower in 1889 (Easton, et al., 2014). Transportation improved with the use of railways, locomotives, and the iron bridges that crossed rivers and gorges, linking communities to the resources needed for the factories as well as to workers and customers.

All of this progress was not without its drawbacks, as the industrial revolution made its way across the Atlantic to the northern United States. The discovery of anthracite coal in the coal regions of Pennsylvania brought with it the damaging air pollution that mired the skies of London, as it was used as a primary source of energy to power factories, steamboats, machinery, and eventually electrical plants. And much like London, growing populations in the cities brought the same difficulties with sanitation and water contamination. In 1832, the New York City Board of Health posted handbills listing “preventatives of cholera,” warning residents not to consume raw vegetables and unripe fruit. Chicago and Cincinnati became the first two American cities to enact laws promoting cleaner air, in 1881. Yet even after the connection was made that water was the source of epidemics, the misplaced confidence of public health officials in the self-purifying capacity of rivers, lakes, and the sea allowed waste water to continue to be discharged without treatment (Boundless, 2015). Of course, like London, urban sprawl resulted in the clearcutting of forests and the overtaking of land to accommodate the growing cities.

Continuing forward, the main impacts of concern are continued population growth, environmental factors as they regard pollution, and personal and organizational resource consumption (including energy resources). Since the first observation of “Earth Day” in April of 1970, the global population has risen from 3.7 billion to well over 7 billion today (Ecology.com, 2015). While Earth Day 1970 received broad public support and saw a wide spectrum of grassroots interests in attendance, it also received a fair amount of criticism. Radical activists and established conservationist and preservationist organizations worried that Earth Day organizers had pandered to the press, government, and corporate elite. Long-established organizations such as the Sierra Club, the National Wildlife Federation, the Audubon Society, and other traditional conservation clubs believed that Earth Day would pervert the idea of wilderness protection in favor of urban and social justice issues. As they feared, Earth Day 1970 brought about a new political atmosphere of reacting against the adverse effects of industrial growth (Silveira, n.d.).

The 1970s also educated the American public on new issues regarding toxic chemicals, energy, and the possibilities of social, economic, and political decentralization; as an endless series of toxic chemical episodes garnered publicity, it brought more energy and momentum to the movement. Just a few of the episodes were the discovery of “polychlorinated biphenyls (PCBs) in the Hudson River, abandoned chemical dumps at Love Canal and near Louisville, Kentucky, and disasters at Kepone in Virginia” (Silveira, n.d.). Furthermore, the 1973–74 winter energy crisis rattled the American public and brought home the effects that oil shortages had on the limits of human consumption. Favoring the corporate and technical elite, the government largely ignored the environmental concerns rising across the American landscape. However, changes were starting to take place, with a shift from legislative to administrative environmental regulation. President Nixon submitted a plan creating the Environmental Protection Agency (EPA), which called for the reorganization and consolidation of many administrative agencies into the EPA. The plan went to Congress on July 2, 1970 and, without opposition, went into effect sixty days later (Silveira, n.d.).

Comprised of attorneys, engineers, and economists, and spurred by the mainstream attention to growing environmental issues, the EPA developed a complex regulatory structure categorizing and addressing environmental issues by pollutant and medium. Ecological systems became a central concern, and in its first 60 days the EPA brought five times as many enforcement actions as the agencies it had absorbed. Out of increasing concern over pesticides, the Pesticides Act of 1972 was passed; the Coastal Zone Management Act of 1972 followed in response to the damage from dredging and filling, industrial siting, and offshore oil development.

The National Environmental Policy Act (NEPA), enacted on the first day of 1970, stands in great contrast to the environmental legislation enacted in the following two decades. The Clean Air Act, passed in late 1970, was conventional, detailed, and complex; NEPA, on the contrary, was short, simple, and comprehensive. NEPA established national policy “to protect the environment, created a Council on Environmental Quality (CEQ), and required that environmental impact statements be prepared for major federal actions having a significant effect on the environment” (Alm, 1988). In stark contrast to the environmental laws of that time (1988), which ran to hundreds of pages and bookshelves’ worth of regulations, NEPA was simple. Without much oversight, the CEQ built a staff and staked out an agenda, with its highest priority being to become the federal environmental policy arm; environmental impact statements and annual report requirements were both lower priorities (Alm, 1988).

In the early 1970s, major advances were made in the policy area, as the CEQ developed an in-depth environmental program which “included amendments to the Federal Water Pollution Control Act, the Toxic Substances Control Act, forerunners to the Resource Conservation and Recovery Act (RCRA), and the Safe Drinking Water Act and amendments to the pesticides legislation” (Alm, 1988). The CEQ’s formative years were spent laying the foundation for almost all current environmental legislation, with the exception of Superfund and legislation regarding asbestos control. While the CEQ’s role has diminished somewhat over the years, it still stands as guardian over the EPA, and the CEQ’s annual reports, which review environmental issues and trends, remain comprehensive and relevant to the issues at hand.

Population growth as it regards urbanization, environmental factors as they regard pollution, and personal and organizational resource consumption (including energy resources) have all been greatly affected by legislation since the Industrial Revolution. Laws now ensure that safer homes, apartments, and office buildings are constructed, and the Occupational Safety and Health Administration (OSHA) oversees employers to make sure that workers are trained, provided safety equipment, and given safe work environments. The EPA has enacted numerous regulations which ensure that industries are taking the necessary steps to operate their businesses in a safer, more environmentally friendly manner. Although big business will still take shortcuts to improve the bottom line, it is not without consequence. Of course, organizational and personal consumption plays a critical role in the lengths that corporations will go to in order to meet the increasing demand for technology and the energy it requires.

While our environment has seen major improvements over the two centuries since the beginning of the Industrial Revolution, there is still so much more to be done. The best way to continue making the changes needed to repair the damage that was, and still is being, done is the continuing education of the inhabitants of this planet. There need to be more “in your face” programs, public service announcements, and comprehensive education programs that bring to the forefront the damage that has been done and what everyday citizens can do to correct and stop it. People need to get involved and get more vocal about the problems, and the lack of attention given to issues, that are right under their feet and in the air they breathe. This includes teaching the current and next generations the importance of eliminating litter, cleaning up their environment, and taking a stand against those who flagrantly harm ecosystems. We can no longer stand by and wait for the government to step up, because if we do, there will be nothing for future generations to enjoy, let alone a habitable planet to live on.


Alm, A. (1988, Jan/Feb). NEPA: Past, present, and future. EPA Journal. Retrieved from http://www2.epa.gov/aboutepa/nepa-past-present-and-future

Ausubel, J., Victor, D., & Wernick, I. (1995). The environment since 1970. Consequences: The Nature and Implications of Environmental Change 1(3), 2-15. Retrieved from http://phe.rockefeller.edu/env70/

Boston University School of Public Health (BU). (n.d.). A brief history of public health.  Retrieved from http://sphweb.bumc.bu.edu/otlt/MPH-Modules/PH/PublicHealthHistory/PublicHealthHistory6.html

Boundless. (2015, Jul 21). Industrialization and the Environment. Retrieved from https://www.boundless.com/u-s-history/textbooks/boundless-u-s-history-textbook/the-market-revolution-1815-1840-13/the-industrial-revolution-110/industrialization-and-the-environment-596-9029/

Easton, M., Carrodus, G., Delany, T., McArthur, K., & Smith, R. (2014). The industrial revolution. Oxford Big Ideas Geography/History 9, 269-313. Retrieved from http://lib.oup.com.au/secondary/geography_history/Big_Ideas_Geography_History/9/Oxford-Big-Ideas-Geography-History-9-ch5-Industrial-revolution.pdf

Eco-Issues.com. (2012, August 27). The industrial revolution and its impact on our environment.  Retrieved from http://eco-issues.com/TheIndustrialRevolutionandItsImpactonOurEnvironment.html

Ecology.com. (2015). World population counter. Retrieved from http://www.ecology.com/humans/population/

Elwell. F. (n.d.). The industrial revolution. Rogers State University, Claremore, OK. Retrieved from http://www.faculty.rsu.edu/users/f/felwell/www/Ecology/PDFs/IndRevolution.pdf

Farshtey, K. (n.d.).  Environmental Impact of the Industrial Revolution. Southwestern Academy. Retrieved from http://mrfarshtey.net/whnotes/Cities-IR.pdf

McDougal Littell. (2008). The industrial revolution. Modern World History: Patterns of Interaction. Retrieved from http://webs.bcp.org/sites/vcleary/ModernWorldHistoryTextbook/index.html

McLamb, E. (2013). The continuing ecological impact of the industrial revolution. Ecology Global Network. Retrieved from http://www.ecology.com/2013/11/11/continuing-ecological-impact-industrial-revolution/

Silveira, S. (n.d.). The American environmental movement: Surviving through diversity. Boston College. Retrieved from https://www.bc.edu/content/dam/files/schools/law/lawreviews/journals/bcealr/28_2-3/07_TXT.htm

Withgott, J. H. & Brennan, S. R. (2008). An introduction to environmental science. Essential Environment: The Science Behind the Stories, 3rd Edition. Retrieved from http://online.vitalsource.com/books/9780558787899

Nature’s Fury: The Global Effects of Natural Disasters

By Kriss Gross

March 9, 2014

Long lines at gas stations and store shelves empty of essential items like water, non-perishables, batteries, ice, candles, and generators are sure signs of an impending hurricane for anyone living on the United States’ (US) eastern and southern coasts. Due to advances made in monitoring and forecast modeling, hurricanes are one of the few weather phenomena that people are able to prepare for. Unlike hurricanes, Mother Nature does not always allow for lengthy preparations, especially in the case of tornadoes, where people may only have a few minutes to get to a safe place. Earthquakes give no warning and oftentimes lead to tsunamis, especially the quakes that happen deep in the ocean.

Monitoring the Phenomena

The World Meteorological Organization (WMO) website shares impending storm information based on advisories released by Regional Specialized Meteorological Centers (RSMCs) and Tropical Cyclone Warning Centers (TCWCs), and on official warnings issued by National Meteorological and Hydrological Services (NMHSs) for their specific countries and areas of origin. Media outlets compile the advisory and warning data when planning the news bulletins that will be released to viewers in the areas of concern (WMO, 2014). These weather centers share maps and tracking data received from satellites, unmanned aerial vehicles (UAVs), manned weather research aircraft, and coastal and fixed ocean data buoys.

The technological advances of modern communication give people the chance to survive many of today’s natural disasters. Satellites and some unmanned aerial vehicles (UAVs), like the National Aeronautics and Space Administration’s (NASA) Global Hawk, carry a Hurricane Imaging Radiometer (HIRAD), developed by Georgia Tech Research Institute (GTRI) engineers, that tracks and records data from storms. This information is used to continue research into the development of more sensitive tracking devices, which afford more accurate details about developing storm systems (Wallace & Toon, n.d.).

The WMO website allows users worldwide to see developing weather advisories and warnings. Another international site, http://weather.org/stormwatch.htm, lets users type in the area they wish to observe. The sidebar lets users choose from a variety of phenomena, from hurricanes, tornadoes, snow, and tides to earthquakes, tsunamis, floods, and fires. Weather.org also has a Farmer’s Almanac and a newly added aurora forecast. The advantage of websites like this, which allow interaction from the user, is that people can check the weather in any area they may want to travel to, or where they may have friends or family being affected by bad weather.

US residents can obtain information from www.weather.org, and the National Weather Service at www.nws.noaa.gov/, www.wunderground.com, and many locally supervised sites, such as those connected with local television and radio. In Europe, websites like www.weatherpal.se/, www.meteoalarm.eu/, and www.wunderground.com/severe/europe.asp allow users to click on their country and region to view weather in any area of interest or concern. Animated icons show different alerts, like flooding from snowmelt, high winds, and rising sea levels. The websites offer the option to view the information in various languages as well.

Japan, Taiwan, and Mexico have earthquake early warning systems, and the US has various seismological networks; while not necessarily focused on early warning, the Advanced National Seismic System includes approximately 100 seismic monitoring stations. The “Global Seismographic Network” (GSN) is a fixed, digital network of seismological and geophysical sensors connected by telecommunications networks. A partnership of the US Geological Survey, the National Science Foundation, and the Incorporated Research Institutions for Seismology, the GSN allows for worldwide monitoring of the Earth, using as many as 150 modern seismic stations distributed globally (Bogue, 2012).


Hurricanes’ destructive forces affect landmasses along the Atlantic and eastern Pacific oceans, the Caribbean Sea, and the Gulf of Mexico. With winds exceeding 155 miles per hour (mph), hurricanes are capable of causing cataclysmic damage to coastlines and several hundred miles inland. Tornadoes and microbursts are common phenomena that coincide with these horrific storms and bring further destruction in the form of flying debris and heavy rainfall, which often leads to flash flooding and land- or mudslides in areas away from the hurricane. Organizations like the Federal Emergency Management Agency (FEMA) and the Department of Homeland Security (DHS) work together to educate citizens about the dire effects of these storms, not just in the US, but in neighboring oceanic communities as well.

Tornadoes can be just as destructive. In the late afternoon of May 22, 2011, an EF5 multiple-vortex tornado struck Joplin, MO. Reaching a maximum width of over one mile, with winds peaking at 250 mph, the tornado destroyed or damaged virtually everything in a six-mile path.

The devastating tornado claimed 161 lives, making it the deadliest single U.S. twister since 1953. The Joplin tornado was only the second EF5 tornado to strike Missouri since 1950. It was the seventh-deadliest tornado in U.S. history and the 27th-deadliest in world history.


To meet NOAA’s “commitment to create a Weather-Ready Nation,” where the US is capable of preparing for and responding to situations that affect “the safety, health, environment, economy, and homeland security,” NOAA’s Office of Weather and Air Quality funded seven multi-year proposals with $1.3 million in 2013, enabling scientists and partnering universities to swiftly and efficiently “transfer new technology, research results, and observational advances through NOAA’s Joint Hurricane Testbed (JHT) to operational hurricane forecasting.” John Cortinas, director of NOAA’s Office of Weather and Air Quality, which manages the U.S. Weather Research Program that funds JHT projects, stated, “These important projects will help improve the information and tools that NOAA forecasters and researchers use to forecast tropical cyclones that impact the U.S. population and economy” (Allen, 2013).

Political Impact

Administrations and policy makers have the arduous task of determining the when, where, and how much of recovery, relief, and eventually rebuilding efforts after devastating storms have torn communities apart. In the US, hurricane Sandy struck as the 2012 presidential election was drawing near, leaving candidates to decide between continuing to campaign and attending to their communities. President Obama put campaigning aside only long enough to tour the battered coastal areas, declare states of emergency, and authorize the release of disaster relief funds. The American Red Cross, FEMA, and other organizations made their way to the affected areas to set up aid and relief sites. On November 6, 2012, “Residents in some of the affected areas are allowed to vote in the presidential election via email or fax, and some states allow voters to vote at any polling station” (CNN, 2013).

Economic Impact

The ultimate damage from these storms is loss of life; compounding matters, damages from these most recent storms, like those in the past, have left people without homes, without power to homes that survived, with businesses ruined (resulting in joblessness), and with vehicles damaged or destroyed. Community infrastructures also felt the impact, as businesses have had to decide whether to rebuild, relocate, or do both.

Superstorm Sandy

In large urban areas like New York City, public transportation was reduced due to flooding. A 2013 CNN report shared that New York’s Metropolitan Transportation Authority (MTA) estimates over “$5 billion dollars in losses: $4.75 billion in infrastructure damage and a further $246 million in lost revenue and increased operating costs.” “According to a report released by the National Hurricane Center, Sandy is expected to rank as the second-costliest tropical cyclone on record, after Hurricane Katrina of 2005, and will probably be the sixth-costliest cyclone when adjusting for inflation, population and wealth normalization factors” (CNN, 2013). Arguments ensued, with a very publicly outspoken New York City Mayor Michael Bloomberg saying that Superstorm Sandy cost the city and local businesses some $20 billion, and Governor Andrew Cuomo stating in an NPR interview that, “The taxpayers of New York cannot shoulder this burden, and I don’t think it’s fair to ask them to shoulder this burden. This state and this region of the country have always been there to support other regions of the country when they needed help. Well, we need help today.” Cuomo made a point about Congress’ allocation of billions of dollars spent to aid Florida and other Gulf Coast states after hurricanes like Katrina and Andrew. Commentator Joel Rose also noted that New Jersey’s Governor Chris Christie put New Jersey’s storm damages at an estimated $29 billion (Rose, 2012).


“University of North Texas Professor Bernard Weinstein put the total economic loss from Katrina (August 23, 2005) to be as high as $250 billion,” as he also considered the economic impact of the disruption in gas production along with the damages incurred from the storm. The combination of two storms, Katrina and the smaller hurricane Rita (September 26, 2005), affected 19% of U.S. oil production by destroying 113 offshore oil and gas platforms, damaging 457 oil and gas pipelines, and spilling nearly as much oil as the Exxon Valdez (1989) oil disaster. This caused oil prices to increase by $3 a barrel and gas prices to nearly reach $5 a gallon. To stop the escalation in gas prices, the U.S. government released oil from its stockpile in the Strategic Petroleum Reserve. The storm also decimated Louisiana’s sugar industry, with the American Sugar Cane League estimating $500 million in lost annual crop value. This area of Louisiana is also home to 50 chemical plants, responsible for 25% of the nation’s chemical production, along with 12 of Mississippi’s coastal casinos, accounting for $1.3 billion annually (Amadeo, 2012).


Being prepared for an impending natural disaster can mean the difference between life and death; while technology can help predict storms like hurricanes, some phenomena give no warning. Tornadoes, often spawned from hurricanes, give little or no leeway, leaving only minutes to get to a safe place. Earthquakes happen with no warning, although some are preceded by smaller tremors. Born from oceanic earthquakes, tsunamis add insult to injury by developing quickly after the earth stops shaking.

Because of the lessons learned from previous disasters, hurricane-prone regions build structures on stilts, composed of materials that can withstand high winds and survive potential flooding. Evacuation routes are in place along coastal areas, and the available lead time allows residents to secure homes and businesses; in the case of mandatory evacuations, there is time to depart the area. Websites like www.ready.gov list preparedness guides, giving users the information and guidance needed to prepare (Ready.gov, 2014).

Many residents in what the US calls “Tornado Alley” have built “safe rooms.” These rooms are generally in a basement, in the centermost part of the ground floor, or on a concrete floor in the garage. Many Midwest homes use storm cellars (or fruit cellars) that were built decades ago (personal knowledge). Residents have learned to have a pre-established communication plan and emergency kit (Ready.gov, 2014). Although tornado sirens are commonplace in regions like “Tornado Alley,” they are often not heard inside homes or businesses. Because of this, NOAA recommends that all residents, especially those in tornado-prone areas, have a NOAA Weather Radio All Hazards (NWR).

“NWR numbers 1000 transmitters, covering all 50 states, adjacent coastal waters, Puerto Rico, the U.S. Virgin Islands, and the U.S. Pacific Territories. NWR requires a special radio receiver or scanner capable of picking up the signal.” NWR broadcasts warnings and post-event information for all types of hazards: weather (e.g., tornadoes, floods), natural (e.g., earthquakes, forest fires, and volcanic activity), technological (e.g., chemical releases, oil spills, and nuclear power plant emergencies), and national emergencies (e.g., terrorist attacks). “Working with other Federal agencies and the Federal Communications Commission’s (FCC) Emergency Alert System (EAS), NWR is an all-hazards radio network, making it the most comprehensive weather and emergency information available to the public” (NOAA, 2014).

Regions prone to earthquakes have modified the way structures are built, with buildings able to shift with the earth’s movement, thereby minimizing damage. To prepare their homes for possible earthquakes, residents can follow guidelines at sites like www.ready.gov for information on making their homes safer. Unfortunately, in low-lying areas, like those in many regions of Asia, population density means many residents live in communities that sit directly on fault lines, putting them in a direct path for disaster. For those who survive the initial earthquake, moving to higher ground to escape an ensuing tsunami may be their only means of survival (Ready.gov, 2014).


Allen, M. (2014) Research behind the high-resolution rapid refresh weather forecast model. National Oceanic and Atmospheric Administration (NOAA). Retrieved from http://research.noaa.gov/News/NewsArchive/LatestNews/TabId/684/ArtMID/1768/ArticleID/10458/NOAA%E2%80%99s-Newest-Weather-Model-Provides-Clearer-Faster-Forecast-of-Severe-Weather.aspx

Allen, M. (2013). NOAA invests $1.3 million with university and federal researchers for hurricane forecasting advances. National Oceanic and Atmospheric Administration (NOAA). Retrieved from http://research.noaa.gov/News/NewsArchive/LatestNews/TabId/684/ArtMID/1768/ArticleID/10253/NOAA-invests-13-million-with-university-and-federal-researchers-for-hurricane-forecasting-advances.aspx

Amadeo, K. (2012). How Much Did Hurricane Katrina Damage the U.S. Economy? About.com/US Economy.  Retrieved from http://useconomy.about.com/od/grossdomesticproduct/f/katrina_damage.htm

Bogue, R. (2012).  Monitoring and predicting natural hazards in the environment. Sensor Review, 32(1), pp. 4-11. Retrieved from http://search.proquest.com.libproxy.edmc.edu/docview/916982912

CNN. (2013). Hurricane Sandy fast facts. CNN Library. Retrieved from http://www.cnn.com/2013/07/13/world/americas/hurricane-sandy-fast-facts/

Federal Emergency Management. (2014). Hurricane Sandy Impact Analysis. Retrieved from http://fema.maps.arcgis.com/home/webmap/viewer.html?webmap=307dd522499d4a44a33d7296a5da5ea0

Folger, T. (2012). Tsunami science. National Geographic. Retrieved from http://ngm.nationalgeographic.com/2012/02/tsunami/folger-text

Knabb, R., Rhome, J., & Brown, D. (2006). Tropical Cyclone Report-Hurricane Katrina. National Hurricane Center. Retrieved from http://www.nhc.noaa.gov/pdf/TCR-AL122005_Katrina.pdf

Missouri Storm Aware. (2014a). What is a tornado? Tornado Facts and History. Retrieved from http://stormaware.mo.gov/tornado-facts-history/

Missouri Storm Aware. (2014b). What is Storm Aware? Preparing for a Tornado. Retrieved from http://stormaware.mo.gov/preparing-for-a-tornado/

National Weather Service. (2014). National Hurricane Center. National Centers for Environmental Prediction. Retrieved from http://www.nhc.noaa.gov/

National Oceanic and Atmospheric Administration (NOAA). (2014). NOAA Weather Radio All Hazards. Retrieved from http://www.nws.noaa.gov/nwr/

Ready.gov. (2014). Retrieved from http://www.ready.gov/about-us

Rose, J. (2012). Sandy may be costliest hurricane to hit east coast. National Public Radio (NPR). Retrieved from http://www.npr.org/2012/11/26/165945325/sandy-may-be-costliest-hurricane-to-hit-east-coast

Science Channel. (2014) Top 10 Natural Disasters. Discovery Communications, LLC. Retrieved from http://www.sciencechannel.com/life-earth-science/10-natural-disasters.htm

Wallace, L. & Toon, J. (n.d.). Monitoring hurricanes: Georgia tech engineers assist NASA with instrument for remotely measuring storm intensity. Georgia Institute of Technology. Retrieved 3/6/2014 from http://gtri.gatech.edu/casestudy/gtri-hurricane-imaging-radiometer-HIRAD-NASA

World Meteorological Organization (WMO). (2014). Official observations/official warnings. Severe Weather Information Center. Retrieved from http://severe.worldweather.org/


Nuclear Medicine: We’ve Come a Long Way

By Kriss Gross

March 6, 2014

The days of doctors treating physical ailments with only blood samples and microscopes are long gone, having since been replaced or assisted by the use of nuclear technology. Nuclear medicine has paved the way to finding the causes of cardiac diseases, cancer, bone problems, and other internal ailments that a typical x-ray would not detect. A variety of scans is now available, each yielding specific information. The type of testing used depends on what the doctor suspects the ailment might be.

What is Nuclear Medicine?

Like all living organisms, humans are made of biomolecules and are maintained by a kinetic balance called homeostasis. When this balance becomes irregular, due to disease or injury, the body’s molecular system can start to malfunction. Through the technology of nuclear medicine, physicians are able to explore these imbalances and determine the best avenue to begin the healing, when healing is a possibility (Mansi, Ciarmiello, & Cuccurullo, 2012).

Nuclear medicine differs from x-ray, ultrasound, and other diagnostic testing in that it uses small amounts of radioactive materials (tracers) that are injected, swallowed, or inhaled. The type of tracer used depends on the part of the body the nuclear imaging device will be studying. Nuclear medicine can aid in the diagnosis of medical ailments by testing the function of specific organs, tissues, or bone, allowing the physician to visualize abnormalities through changes in the appearance of the structure (Iagaru & Quon, 2014a).

Because of technological advances in hybrid imagery and the release of new radiopharmaceuticals, nuclear medicine is experiencing continued growth in the United States (US). According to Stanford Medical School physicians Iagaru and Quon (2014a), “continued growth of the field will require cost-effectiveness data and evidence that nuclear medicine procedures affect patients’ outcomes. Nuclear medicine physicians and radiologists will need more training in anatomic and molecular imaging. New educational models are being developed to ensure that future physicians will be adequately prepared.”


Through these advances in nuclear medicine, the imaging devices available to physicians have made great strides in enabling more effective diagnosis of illness and injury. Positron emission tomography (PET), bone scintigraphy (bone scan), hybrid imaging, magnetic resonance imaging (MRI), and white blood cell (WBC) scans are just a few of the technologies available today.

Positron emission tomography (PET)

The most notable application of nuclear imagery is in the cardiology field, with over 1,000 procedures per 100,000 people being performed in the US. The majority of these procedures take place in a hospital setting; however, the number of nuclear imaging clinics has risen substantially (Delbeke & Segall, 2011). Positron emission tomography (PET) scans examine the body’s chemistry, whereas other common medical tests, such as MRI scans and computed tomography (CT) scans, reveal only structural aspects of the body. The advantage of PET scans is their ability to enhance the details about bodily functions. One PET procedure allows physicians to “gather images of function throughout the entire body, uncovering abnormalities that might otherwise go undetected” (Iagaru & Quon, 2014a). Because PET scans are a biological imaging examination and disease is a biological process, PET scans are able to detect and stage most cancers sooner than other common examinations can visualize them. This early detection also gives physicians access to vital information concerning heart disease and neurological disorders, such as Alzheimer’s. The PET scan’s noninvasive, accurate approach allows physicians to determine whether a suspected abnormality is malignant or benign, which in turn saves many patients from enduring painful and expensive exploratory surgeries, which may not always detect the stage or extent of a disease. The accuracy of the PET scan aids in earlier detection and diagnosis, putting time on the side of the patient and increasing the chances that treatments will be successful.

While no special preparation is needed prior to a PET scan, some tests require fasting, the elimination of caffeine, and a brief cessation of certain medications. Prior to the procedure, the patient is either injected with or given orally a small dose of a radioactive substance, a radiopharmaceutical or tracer, which localizes in the specific areas to be tested. This substance emits energy (gamma rays) that is detected by a gamma imaging device, aided by a computer that produces images and measurements of the specified organs or tissue (Iagaru & Quon, 2014a).

Bone Scintigraphy (Bone Scan)

Bone scintigraphy (bone scan) is the second most widely used application of nuclear imagery, although these procedures account for only approximately 17% of nuclear imagery performed in the US (Delbeke & Segall, 2011). Skeletal scintigraphy, when performed correctly, has proven to be an effective method “in detecting anatomic and physiologic abnormalities of the musculoskeletal system.” Different skeletal diseases and injuries, such as accidental and non-accidental trauma, arthritis, bone cancer, and congenital or developmental anomalies, reflect individualized patterns that are observable within the bone scan procedure, thereby increasing the likelihood of early detection, diagnosis, and treatment (Greenspan, 2013).

Patients receiving a bone scan are asked to stay hydrated before and during testing and are given the smallest possible intravenous dose of a radiopharmaceutical (tracer), usually Technetium-99m or a similarly effective compound. General dosing guidelines are followed, with dosages for small children and adolescents based on the patient’s weight. Prior to the scan, which takes place within 2-4 hours of the tracer’s administration, the patient is asked to empty their bladder to remove any visual inaccuracy from the scan’s imagery. If the bladder refills during testing, the scans will be delayed, and catheterization may be necessary to avoid interruptions (Greenspan, 2013).

Magnetic Resonance Imaging (MRI)

Unlike PET and bone scans, MRI scans are noninvasive procedures that do not require radiation to acquire an internal image. MRI machinery uses a large magnet and a computer to create internal body images, often referred to as “slices.” These slices display a limited number of body tissue layers at a time. The layers are then examined on the computer’s monitor, allowing physicians to detect and observe any internal abnormalities. Individual MRI scans can take from 15 to 90 minutes, with an average complete examination taking from 1.5 to 3 hours (Iagaru & Quon, 2014b).

Closed MRI machines are large, hollow cylindrical tubes surrounded by a circular magnet. In preparation for an exam, patients receiving an MRI are asked to remove all jewelry, including piercings. Transdermal patches, such as nicotine, birth control, and nitroglycerin patches (which contain trace amounts of metal), also require removal. Patients who suffer from chronic pain or have difficulty lying still may be given a mild sedative to facilitate an uninterrupted exam. Prior to any MRI exam, it is important for the patient to inform the physician of any metals that may be in the patient’s body. This includes artificial or prosthetic limbs or joints, bullets or shrapnel fragments, ear implants, pacemakers, IV ports, and any other accidental or intentional metals that might interfere with the exam or harm the patient (Iagaru & Quon, 2014b).

White Blood Cell (WBC) Scans

To look for internal infection or inflammation, a physician may order a white blood cell (WBC) scan, also known as a leukocyte scan. A WBC scan looks for hidden infection and is particularly useful if the physician suspects an infection or inflammation in the abdomen or bones, like those that may occur after surgery. WBC scans are nuclear imaging scans that use radiopharmaceuticals (tracers) to look for infection or inflammation in the body. In a procedure referred to as tagging, blood is taken from a patient’s vein, the white blood cells are separated from the sample and mixed with a fractional amount of radioactive material (the radioisotope indium-111), then returned to the patient’s bloodstream 2-3 hours later via an intravenous injection. The patient’s body undergoes the scan 6-24 hours later. The scanning machine, which resembles an x-ray device, detects the radiation emitted from the tagged white blood cells, and a computer then displays the image created by the radiated blood cells (Dugdale, 2012).

WBC scans take 1 to 2 hours to complete and usually take place in a hospital setting; however, outpatient clinics are also available. While no special preparations are necessary, much like an MRI, patients are required to remove all jewelry, piercings, and other metal-containing objects, including hearing aids and denture apparatus containing metal. Patients are asked to wear loose-fitting clothing (without metal snaps or zippers) or don a hospital gown. Patients will need to tell their physician if, during the previous month, they have undergone a gallium scan, are receiving dialysis, receive nutrition through an IV or steroid therapy, have hyperglycemia, or are taking long-term antibiotics, as patients may be asked to discontinue the use of antibiotics prior to the test. WBC scans are not recommended for women who are pregnant or trying to become pregnant; birth control is recommended during the course of WBC procedures (Dugdale, 2012).


Radiopharmaceuticals involve small amounts of radioactive materials (tracers) that are injected, swallowed, or inhaled, with the type of tracer used depending on the part of the body the nuclear imaging device will be studying. Radiopharmaceuticals like Technetium-99m (Tc-99m) account for about 50,000 medical imaging procedures daily in the United States. Tc-99m is the most routinely used medical isotope today. It is derived from the parent isotope Mo-99, predominantly produced from the fission of uranium-235 in highly enriched uranium (HEU) targets in aging foreign reactors. North America’s supply of Tc-99m was heavily disrupted after Canada’s Chalk River nuclear reactor experienced an outage several years ago (Ambrosiano, 2013).

In an effort to reduce supply interruptions and eliminate the “potential use in nuclear weapons, acts of nuclear terrorism, or other malevolent purposes” (White House, 2012), the Los Alamos National Laboratory announced that “for the first time, irradiated low-enriched uranium (LEU) fuel has been recycled and reused for molybdenum-99 (Mo-99) production, with virtually no losses in Mo-99 yields or uranium recovery.” This further demonstrates the feasibility of the separation process and the probability of environmentally friendly, cost-effective fuel recycling (Ambrosiano, 2013).

Advantages, Disadvantages, and Safety


The obvious advantages of nuclear medicine are realized in the number of patients who are surviving cancer, managing Alzheimer’s and Parkinson’s, and overcoming serious bone injuries. Nuclear imaging has become an irreplaceable tool in determining the reduction or recurrence of cancers, making its use as important as any of the medications in a patient’s treatment. Because only one scan is needed to obtain a full-body representation, repeated testing is often unnecessary, making these procedures more cost-effective as well (Iagaru & Quon, 2014a).


The medical disadvantages of nuclear imaging become apparent with individual patients and the inability to apply the technology to all patients. Certain physical factors limit the use of MRI imaging when the patient has embedded internal metals, i.e., pacemakers, surgically implanted feeding tubes, pins, rods, and other permanent metals. MRIs are also not recommended during the first three months of pregnancy. Pregnancy is also a factor in the potential use of PET and WBC scans, as the possible dangers during pregnancy are yet to be determined (Iagaru & Quon, 2014b).

Economically, nuclear imaging is expensive, and many insurance companies limit its use unless verifiable need is determined, leaving some patients with decreased levels of treatment or no treatment at all. Another factor is limited access to reliable sources of the isotopes needed to perform the imaging. The US is addressing this issue with accelerated commercial projects to produce the molybdenum-99 isotope domestically, reducing the use of highly enriched uranium (HEU) and increasing the use of low-enriched uranium (LEU), like the advancements being made at the Los Alamos National Laboratory (White House, 2012; Ambrosiano, 2013).


Nuclear imaging procedures are considered among the safest, most prevalent imaging exams in use today. Patients receive radiopharmaceuticals in minimal doses that deliver the smallest amount possible to achieve the diagnostic information needed, often exposing the patient to less radiation than an x-ray (Greenspan, 2013). The scanning device does not produce any radiation, and the radiation emitted from the radioisotopes is minimal; as the materials break down quite rapidly, all small traces of radioactivity have generally diminished in 1 or 2 days. There are no verifiable cases of injury due to exposure to radioisotopes (Dugdale, 2012). The education and training received by radiologists, technologists, and physicians requires responsible behavior that ensures the safety of staff and patient alike. In order to produce the quality image required for diagnostic success, an “as low as reasonably achievable” (ALARA) approach is maintained to ensure minimal dosages and exposure (Greenspan, 2013).

In Closing

Nuclear medicine and medical imaging have come a long way, and regardless of the continuing hurdles, the advancements already gained allow physicians of today and the future to pursue new avenues in the prevention, diagnosis, and healing of the many ailments that patients and physicians face together. PET, MRI, and WBC scans, along with improving radiopharmaceuticals, are improving and saving lives every day. Future discoveries and continued research will aid in finding the causes of cardiac diseases, cancer, bone problems, and other internal ailments that in the past could lead to continued illness and premature death. While there is still a long way to go, a future free from disease, illness, and permanent injury is no longer so far away.



Ambrosiano, N. (2013). Domestic production of medical isotope Mo-99 moves a step closer. Los Alamos National Laboratory. Retrieved from http://www.lanl.gov/newsroom/news-releases/2013/May/05.13-domestic-production-of-medical-isotope-mo99.php

Delbeke, D. & Segall, G. (2011). Status of and trends in nuclear medicine in the United States. The Journal of Nuclear Medicine, 52. Issues and Controversies in Nuclear Medicine, pp. 24S-8S. Retrieved from http://search.proquest.com.libproxy.edmc.edu/docview/913590094

Delbeke, D., Royal, H., Frey, K., Graham, M., & Segall, G. (2012). SNMMI/ABNM joint position statement on optimizing training in nuclear medicine in the era of hybrid imaging. The Journal of Nuclear Medicine 53(9), pp. 5. Retrieved from http://search.proquest.com.libproxy.edmc.edu/docview/1041061503

Dugdale, D. (2012). WBC scan. U.S. National Library of Medicine. Retrieved from http://www.nlm.nih.gov/medlineplus/ency/article/003834.htm

Greenspan, B. (2013). Skeletal scintigraphy. ACR–SPR Practice Guideline for the Performance of Skeletal Scintigraphy (bone scan). Retrieved from http://www.acr.org/~/media/839771405B9A43F7AF2D2A9982D81083.pdf

Iagaru, A. & Quon, A. (2014a). Illuminating and treating diseases. Stanford School of Medicine. Retrieved from http://nuclearmedicine.stanford.edu/

Iagaru, A. & Quon, A. (2014b). Magnetic Resonance Imaging-MRI, Patient Prep Instructions. Stanford Medicine Imaging. Retrieved from http://stanfordhospital.org/clinicsmedServices/medicalServices/imaging/docs/MRI_Booklet.pdf

Mansi, L., Ciarmiello, A., & Cuccurullo, V. (2012). PET/MRI and the revolution of the third eye. European Journal of Nuclear Medicine and Molecular Imaging, 39(10), pp. 1519-24. Retrieved from http://search.proquest.com.libproxy.edmc.edu/docview/1073650386

White House. (2012). Fact sheet: Encouraging reliable supplies of molybdenum-99 produced without highly enriched uranium. Office of the Press Secretary. Retrieved from http://www.whitehouse.gov/the-press-office/2012/06/07/fact-sheet-encouraging-reliable-supplies-molybdenum-99-produced-without-

Hybrids: A Cleaner Way to Drive

Kriss Gross

February 28, 2014

Imagine for a moment that, overnight, several oil tankers headed to the U.S. were sunk by terrorists. In response to this news, the gas stations have long lines, and the price at the pump has gone up a dollar from yesterday. To make matters more unsettling, the weather service is notifying viewers that the tropical storm that was several hundred miles out to sea has shifted its direction, is aimed at the east coast, and is turning into a hurricane. With the possibility of power outages, people are filling not only their vehicles but gas cans for generators as well. The situation becomes increasingly tense as local governments ask that communities stick together and help their neighbors, as the impending fuel shortages will impede the National Guard’s ability to provide aid and security during and after the storm. While this scenario is fiction, it is not unrealistic, and it emphasizes the need for the U.S. to decrease its dependence on foreign oil and increase its forward motion toward more fuel-efficient modes of transportation (Stein, 2013).

According to a 2012 report by the Department of Energy, the United States spends almost $1 billion a day to purchase oil from other countries, which Americans use to power their cars, trucks, planes, trains, and ships. An additional $55 billion is spent annually on the health and environmental damages caused by emissions from these transportation modes. In response to this information, “advances in electric vehicles, engine efficiency, and clean domestic fuels open up cost-effective opportunities to reduce our oil dependence, avoid pollution, and create jobs designing and manufacturing better cars, trucks, and petroleum alternatives” (U.S. Department of Energy (DOE), 2012a). In order to increase U.S. consumers’ willingness to consider purchasing a hybrid, manufacturers are addressing the issues of price, fuel economy, and overall sustainability. Of course, as with anything that may affect consumer spending, politics, both national and international, presents another consideration.

Building a Better Hybrid

Hybrid vehicles are more than just cars that run on battery power; hybrids embrace all the available technology that will ultimately reduce American consumers' dependence on foreign oil to power the transportation industry. So what makes a car a hybrid? The Union of Concerned Scientists (2013) describes three degrees of hybridization, mild, full, and plug-in, distinguished by five characteristics. A vehicle with the first three, idle-off capability, regenerative braking, and power assist with engine downsizing, is considered a "mild" hybrid; when electric-only drive mode is added, it is considered a "full" hybrid; and the fifth characteristic, extended battery-electric range, makes the vehicle a "plug-in" hybrid. To meet the goal of reducing foreign oil dependence and environmental impact, the manufacturing industry is addressing one of the most significant issues of hybrid vehicles, price: the designers of hybrid vehicles are tackling battery cost, electric drivetrains, structural weight, engine efficiency, and fuel (DOE, 2012a).
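The five-characteristic classification described above can be sketched as a small decision rule. This is a minimal illustration in my own naming, not an official taxonomy:

```python
# Sketch of the hybrid classification described above (Union of Concerned
# Scientists, 2013). Feature names are my own shorthand.
MILD_FEATURES = {"idle_off", "regenerative_braking", "power_assist_engine_downsizing"}

def classify_hybrid(features):
    """Return the degree of hybridization, or None if the vehicle is not a hybrid."""
    if not MILD_FEATURES <= features:
        return None  # missing one of the three baseline characteristics
    if "extended_battery_electric_range" in features:
        return "plug-in"
    if "electric_only_drive" in features:
        return "full"
    return "mild"

print(classify_hybrid(MILD_FEATURES))                            # mild
print(classify_hybrid(MILD_FEATURES | {"electric_only_drive"}))  # full
```

The ordering matters: a plug-in hybrid also has electric-only drive, so extended range is checked first.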

Batteries and Drivetrains

Better Batteries. In the past, America fell behind in the development of a better vehicle; however, since 2009 the DOE's Office of Energy Efficiency and Renewable Energy (EERE) and U.S. automakers have been making strides to change this. One example is the manufacture of advanced vehicle batteries, an industry that has jumped from two factories in 2009 to 30 in 2012, giving the U.S. the capacity to produce enough batteries and components to support the production of one million plug-in hybrid and electric vehicles by 2015. Not only does this advancement strengthen America's manufacturing stance, it also creates jobs for tens of thousands of American workers (DOE, 2012a). While hybrid vehicles have been on the market for several years, their popularity has been limited by their cost. EERE has worked to address this, reducing the cost of the battery system by more than 35% since 2008 and working toward a further 70% reduction by 2015 (DOE, 2012a).

Electric Drivetrains. Reducing the cost of battery components is only one dimension of the hybrid being addressed, as the drivetrain is also going electric. The drivetrain is the mechanism that powers the drive wheels, and hybrids offer three options, series, parallel, and series/parallel, with the choice depending on the vehicle's intended use. A series drivetrain uses an independent electric motor to move the vehicle, with a computer determining whether the power to run the motor comes from the battery or the gasoline engine. Because the engines in series-drivetrain vehicles are generally smaller than conventional engines and the battery cells larger, these vehicles are better suited to the stop-and-go traffic of large urban areas and are being considered for buses and other urban vehicles, such as taxis and limousines. The parallel drivetrain, managed through a computer and transmission, is the choice for most hybrid vehicles manufactured for consumer use. It operates on energy supplied by both the gasoline engine and the battery cell (smaller than that used with a series drivetrain), and it also uses regenerative braking to recharge the battery. With the engine directly connected to the drive wheels, the inefficiency of converting mechanical power to electricity and back is eliminated, making these hybrids better suited to highway driving. Finally, while more expensive, series/parallel drivetrains combine the best of both systems, with a larger battery cell and a generator as well (Union of Concerned Scientists, 2013).
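The trade-offs among the three layouts can be summarized as a simple selection rule. This is only a sketch of the reasoning above; the function name and inputs are my own simplification:

```python
# Simplified sketch of the drivetrain trade-offs described above
# (Union of Concerned Scientists, 2013). Inputs are hypothetical.
def recommend_drivetrain(mostly_city: bool, budget_limited: bool) -> str:
    """Pick a hybrid drivetrain layout from a simplified driving profile."""
    if not budget_limited:
        # series/parallel combines both systems (larger battery plus a generator)
        return "series/parallel"
    # series suits stop-and-go urban use; parallel suits highway driving
    return "series" if mostly_city else "parallel"

print(recommend_drivetrain(mostly_city=True, budget_limited=True))   # series
print(recommend_drivetrain(mostly_city=False, budget_limited=True))  # parallel
```

In practice, manufacturers make this choice at design time, which is why most consumer hybrids ship with parallel drivetrains while series layouts appear in buses and taxis.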

The combination of these technologies results in vehicles that consume less fuel and reduce environmental impact through lower CO2 emissions. Since the passage of the Clean Air Act (1970) and its 1990 amendments, agencies such as the Department of Energy have spearheaded programs like the "National Clean Fleets Partnership," in which major companies like UPS, FedEx, Pepsi, Schwan's, and others are upgrading their fleets to electric, hybrid, and alternative-fuel vehicles and redesigning their routes to further reduce drive time and fuel consumption (DOE, 2012b).


Battery Recycling. As the concept, production, and utilization of hybrid vehicles become more mainstream, the flaws in producing these vehicles also become more apparent, especially in regard to recycling the battery. Every battery eventually reaches a point where it can no longer be recharged and must be replaced, and with the batteries used in hybrid vehicles, the issue has grown with the size of the battery. Because of the caustic nature of the materials used in building these batteries, they cannot simply be thrown away. Current regulatory policy places the responsibility for recycling these batteries on the manufacturer. For current hybrid models such as the Toyota Prius and the Honda Civic, the battery manufacturer, Panasonic, reclaims the batteries and reuses the materials in the production of new ones. Other companies, however, do not reuse the batteries' components and instead burn them. This practice creates another complicated and environmentally disastrous situation, one that requires strict regulations and standards to be applied and enforced (Lewis, Park, & Paolini, 2012, p. 5).


Sustainable Manufacturing. In keeping with the need to reduce oil dependency, the power needed to operate the factories manufacturing hybrid batteries, drivetrains, and vehicles needs to come from sustainable sources. After all, it is counterproductive to manufacture a product whose end goal is reducing negative environmental impact in a facility that creates more pollution than its product will eliminate. The solution is to build factories that use sustainable energy systems, e.g., solar, wind, or biomass, and to convert current operations to systems that apply combined heat and power (CHP) (DOE, 2012c). The same principle could be applied to the recharging stations that will need to be built for electric vehicles (EVs) to plug into (Stein, 2013, p. 11).

Lighter Weight Materials

Magnesium alloys, high-strength steel, titanium, and carbon-fiber composites are the next step in developing a lighter vehicle. Research indicates that for every 10% reduction in vehicle weight there is a 7% gain in fuel efficiency. The DOE's goal is to reduce overall car weight by 50% by 2015, thereby cutting fuel costs by $4,300 over the life of the vehicle (Schutte, 2012). Because of reduced fuel demand, the U.S. would ultimately also reduce its dependence on foreign oil imports by 25%. Another continuing advantage of using these lighter materials would be a dramatic reduction in the need to mine iron ore and other steel-related materials, further reducing production costs. These lighter materials would be used not only in the body of the vehicle but in the engine and other internal components as well (Schutte, 2012).
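The 10%-weight-to-7%-efficiency rule of thumb above can be worked out numerically. This sketch assumes the relation extrapolates linearly to large weight reductions, which the cited figures imply but do not guarantee:

```python
# Back-of-the-envelope sketch of the rule of thumb above: roughly a 7%
# fuel-economy gain for every 10% of vehicle weight removed (Schutte, 2012).
# Linear extrapolation to large reductions is an assumption of this sketch.
def efficiency_gain(weight_reduction_pct: float, gain_per_10pct: float = 7.0) -> float:
    """Estimated percent fuel-economy gain for a given percent weight reduction."""
    return weight_reduction_pct / 10.0 * gain_per_10pct

print(efficiency_gain(10))  # 7.0
print(efficiency_gain(50))  # 35.0 -- at the DOE's 2015 target of 50% lighter cars
```

Under this linear reading, hitting the 50% weight target would imply roughly a third better fuel economy, consistent with the substantial lifetime fuel savings the DOE projects.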


The Politics of Going Green

On a national and international level, the politics of "going green" is not uncharted territory. The implementation of the Clean Air Act (1970) and the ongoing concern over foreign oil dependence have made the future of hybrid vehicles inevitable; however, achieving these goals requires policy change, a more focused pursuit of battery technology, and concentrated efforts to reduce the overall cost of hybrid ownership.

Domestic Policy. Due in part to the high cost of hybrid ownership, advancements in battery production are needed to reduce these costs, thereby making hybrids affordable for a broader range of consumers. While the number of U.S. companies manufacturing hybrid batteries has risen, tax incentives and monetary awards to companies that further the advancement of battery technology would go a long way toward realizing a larger volume of consumer hybrid purchases. Although tax credits were offered from 2006 through 2009, their effect was limited because so many hybrid vehicles were imported rather than manufactured domestically. A further complication, and a cause of less-than-stellar progress, is that the oil industry, with decades to establish a firm foothold, has no desire to reduce its current dominance or relevancy and continues to lobby against interests that are trying to make substantial gains in battery development and implementation (Lewis, Park, & Paolini, 2012, p. 6).

Foreign Policy. The advantages of reducing U.S. dependency on foreign oil are numerous, most notably with respect to the tenuous relationship between the U.S. and China concerning their shared dependence on oil produced in the Persian Gulf, specifically Saudi Arabia. By reducing oil dependency, the U.S. could reasonably decrease its military presence, thereby cutting the cost of maintaining that presence and removing the need to continue providing weapons, as has been done to keep the U.S. in favor with Saudi oil suppliers. Of course, it is unreasonable to think that reducing foreign oil dependence means U.S. military presence can be reduced to zero; the circumstances are not so simple. But just as a military presence remains in Germany, South Korea, and Japan, the volume of military personnel could at least be drastically reduced. By eliminating the need to protect sea trade routes such as the Strait of Malacca, $70-100 billion could become available to further the advancement of hybrid battery production and other areas of hybrid sustainability, such as the charging-station infrastructure (Lewis, Park, & Paolini, 2012, p. 6).


“If the United States stopped using gasoline to power its automobiles, it would essentially become energy independent overnight” (Stein, 2013, p. 6). Although the statement may hold some truth, it is hardly plausible; complete energy independence will more likely take many years, with the biggest obstacle being the affordability of hybrids and EVs for the general public. While the U.S. is making steady progress in its economic recovery, the high number of Americans still unemployed and struggling to make ends meet means there is also a considerable number of consumers who are not even thinking about hybrid or electric vehicles, let alone considering a purchase. “For example, in 2009, there were 8.8 million families living below the poverty line. For an idea of what that measures, for a family of four made up of two adults and two children, the poverty line was $21,756.93” (Stein, 2013, p. 16). That level of income is less than the outright purchase price of most hybrid vehicles on today’s market.

Personal Choice

While researching the hybrid vehicles available today, I found that making a purchasing decision is far from easy. I chose five hybrid models to research, and while I am in no position to purchase a vehicle, I did my research under the assumption of a better financial picture. It is also important to understand that what matters to one consumer considering a vehicle purchase may be of no concern to another. When considering a hybrid purchase, I looked at what is important to me, and quite frankly, I am not impressed with my options. First and foremost, the vehicle must be made in the USA. I really liked the Subaru hybrid model; however, after discovering it was neither designed nor manufactured in the U.S., I removed it from my list. Imagine my dismay when I discovered that the Ford Fusion is assembled in Mexico and, even more disturbing, that of the models I chose to research, not one was both designed and manufactured in America. So, setting my disappointment aside, I continued my comparisons based on other personal criteria (see page 12). Because I like to travel when finances permit, I spend several hours in my car, so comfort and ergonomics are essential. I am long-legged, so legroom is important, as is cargo space for luggage and camera gear. After basing my decision on the best overall fuel economy and the amenities I wanted, I chose the 2014 Toyota Avalon XLE (Kelley Blue Book, 2014). While it is fun to dream, it will be some time before a vehicle like that finds its way into my driveway; although maybe by then, Subaru will be built in the U.S.




References
Kelley Blue Book. (2014). Cars for sale. Retrieved from http://www.kbb.com/cars-for-sale/?tab=mkmd

Lewis, H., Park, H., & Paolini, M. (2012). Frontier battery development for hybrid vehicles. Chemistry Central Journal, 6(1). Retrieved from http://dx.doi.org.libproxy.edmc.edu/10.1186/1752-153X-6-S1-S2

Schutte, C. (2012). Lightweighting materials. Vehicle Technologies Program. Retrieved from http://www1.eere.energy.gov/vehiclesandfuels/pdfs/merit_review_2012/plenary/vtpn04_lm_schutte_2012_o.pdf

Stein, F. (2013). Ending America’s energy insecurity: Why electric vehicles should drive the United States to energy independence. Homeland Security Affairs, 9(1). Retrieved from https://login.libproxy.edmc.edu/login?url=http://search.proquest.com.libproxy.edmc.edu/docview/1368766010

Union of Concerned Scientists. (2013). How hybrid cars and trucks work. Center for Science and Democracy. Retrieved from http://www.ucsusa.org/clean_vehicles/smart-transportation-solutions/advanced-vehicle-technologies/hybrid-cars/how-hybrids-work.html

U.S. Department of Energy. (2012a). Sustainable Transportation. Office of Energy Efficiency and Renewable Energy. Retrieved from http://www1.eere.energy.gov/office_eere/pdfs/55295.pdf

U.S. Department of Energy. (2012b). America’s clean, efficient fleets: An infographic. Retrieved from http://energy.gov/articles/americas-clean-efficient-fleets-infographic

U.S. Department of Energy. (2012c). Top 10 things you didn’t know about combined heat and power. Retrieved from http://energy.gov/articles/top-10-things-you-didn-t-know-about-combined-heat-and-power