Using Excel to Plot Year of Dedication and Locations of Confederate Monuments

Quote of the Day

Half of what you learn in medical school is going to be shown to be either dead wrong or out-of-date within five years. Trouble is, nobody can tell you which half.

— Dr. David Sackett, pioneer of evidence-based medicine. Everyone who works in the technology field understands this problem. I often compare our engineering knowledge to milk – it has a shelf life.

Figure 1: Monument to the First Minnesota at Gettysburg. (Source)

I have been listening to the controversy surrounding the Confederate monuments around the United States. I live in Minnesota, a Union State, and we have no Confederate monuments in the state. Minnesota did send troops to the Civil War and they performed well (Figure 1).

I heard some discussions on television about all the Confederate monuments around the country and when they were erected. I decided to look for the data and plot it for myself. I quickly found a document from the Southern Poverty Law Center that looked interesting and presented some useful data-tidying and charting challenges. My focus here is on duplicating their chart of monument dedication dates. This chart type is not a standard Excel type, and I wanted to see how I could duplicate it. This workbook will be used in a charting seminar that I plan to present in a month or so.

As I looked at the PDF source document, I suspected that it had originally been written in Microsoft Word. To cleanly copy the data into Excel, I used a three-step process that avoids a messy transfer:

• Copy the table from the original PDF document.
• Paste into Word and re-copy from Word.
• Paste into Excel.

This produced data that I could clean up using Get and Transform (also known as Power Query). Once cleaned up, I could use the data to look at which states had monuments (Figure 2). You can see that there are 1,503 Confederate monuments listed in the data, a figure that does not include the approximately 2,570 Civil War battlefields, markers, plaques, cemeteries, and similar symbols that commemorate historical events.

While the original data listed 35 location types, its graphic grouped the data into three categories: (1) monuments at schools, (2) monuments at courthouses, and (3) other. I followed their lead.

Figure 2: Confederate Monument Summary Table.

Unfortunately, the original document only has dedication dates for 856 of the 1,503 monuments. While this is a serious shortcoming, this is the data that they plotted, and I will do the same.

I plotted the dedication dates using the format used in the original document, which is not a standard Excel chart type. However, I was able to make a plot that looks very similar to the original by using some formulas in cells. Please see my workbook here for details.

Figure 3: Plot of Confederate Monuments By Year of Dedication and Location.

Note that the original table added callouts for various important civil rights events. I decided that I wanted to keep the chart simple and did not include these in Figure 3.

There are some observations that we can make about Figure 3:

• Confederate monuments began being dedicated before the Civil War was even finished.
• There was a large surge in monument dedications around 1910.
• There was a smaller surge of monuments dedicated during the 1960s. The only unusual thing about these dedications is that many were at schools; previous dedications were only rarely at schools.

Change in US House Square Footage Over Time

Quote of the Day

You always have prior information before you do an experiment, because something motivated you to do the experiment.

— Data Science Facts twitter feed. I view this statement as support for the Bayesian approach.

Figure 1: New US Housing Square Footage vs Year.

My wife and I are about half-way through the construction of a 2100 square foot home in northern Minnesota. This weekend, my neighbors and I were talking about the area of houses being built today, and no one in the conversation had any data. I grabbed my computer, jumped on the Internet, and very quickly found data from the Census Department that answered my question. Like most census information, the data is in the form of screwy tables that need to be parsed to get into a form that can easily be plotted. This exercise gave me another excellent example to use when I train staff on the use of Excel's Get and Transform tool. Figure 1 was the result of my search. For those who are interested, my workbook is here.

Figure 1 provides some interesting information:

• The size of new US housing is rising over time.
• The Great Recession (2008) saw a minor dip in the trend of larger housing.
• House sizes are now larger than before the Great Recession.
• My new house is considered mid-range in size.

The Census Department actually breaks the data down by region of the US: West, Midwest, South, and Northeast. Figure 2 shows a panel chart with all the data. The biggest homes are being built in the Northeast, and the smallest homes are being built in the Midwest, where I live.

Figure 2: New US Housing Square Footage vs Region and Year.

Posted in Construction | 1 Comment

Calculate Eclipse Path Width

Quote of the Day

You must accept 1 of 2 basic premises: Either we are alone in the universe or we are not alone. Either way, the implications are staggering!

— Wernher von Braun

Yesterday, a reader asked me how to compute the totality path width for the eclipse that will cross the US on 21-Aug-2017. I wrote a post on how to perform this calculation years ago. NASA has published a path width value of 114.7 km. This width will actually vary a bit as the shadow moves across the Earth because the distances between the bodies involved change slightly. Also, the Earth and Moon are not perfectly round, though I assume they are. NASA has very detailed models that even include the Moon's shape variations due to mountains and valleys.

In today's post, I will show how to compute a good approximation to NASA's result. I provide Figures 1 and 2 to show how the various parameters are defined. For the details on the analysis, please see my original post. Figure 1 shows my model for the Sun-Earth-Moon system during the eclipse.

Figure 1: Solar Eclipse Geometry

Figure 2 shows the details on my approximation for the umbra width.

Figure 2: Illustration of Umbra on the Earth.

I used Equation 1 to compute the totality path width. I grabbed the data specific to 21-Aug-2017 from this web site. The rest of the information I obtained from Google searches.

 Eq. 1 $\displaystyle {{s}_{{umbra}}}=2\cdot \left( {{{d}_{{umbra}}}-\left( {{{d}_{{moon}}}-{{r}_{{earth}}}} \right)} \right)\cdot \tan \left( \alpha \right)$

where

• s_umbra is the totality path width on the Earth.
• d_sun is the distance between the Earth and Sun.
• d_moon is the distance between the Earth and Moon.
• r_moon is the radius of the Moon.
• r_sun is the radius of the Sun.
• r_earth is the radius of the Earth.
• α is the vertex angle of the Moon's shadow cone (see Figure 1). We compute α using $\alpha =\arcsin \left( {\frac{{{{r}_{{Sun}}}-{{r}_{{Moon}}}}}{{{{d}_{{Sun}}}-{{d}_{{Moon}}}}}} \right)$
• d_umbra is the length of the Moon's shadow cone (see Figure 1). We compute d_umbra using $\displaystyle {{d}_{{Umbra}}}=\frac{{{{r}_{{Moon}}}}}{{\sin \left( \alpha \right)}}$
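Equation 1 and its two supporting formulas are easy to check in a few lines of Python. The distances below are rough values for 21-Aug-2017 that I supply for illustration – treat them as assumptions, not the ephemeris values behind Figure 3.

```python
import math

# Rough values for 21-Aug-2017 (assumed for illustration) [km]
r_sun = 696_000.0    # radius of the Sun
r_moon = 1_737.4     # radius of the Moon
r_earth = 6_371.0    # mean radius of the Earth
d_sun = 1.515e8      # Earth-Sun distance
d_moon = 372_000.0   # Earth-Moon distance

# Vertex angle of the Moon's shadow cone
alpha = math.asin((r_sun - r_moon) / (d_sun - d_moon))

# Length of the Moon's shadow cone
d_umbra = r_moon / math.sin(alpha)

# Totality path width on the Earth (Equation 1)
s_umbra = 2 * (d_umbra - (d_moon - r_earth)) * math.tan(alpha)
print(f"Totality path width: {s_umbra:.0f} km")
```

With these rough inputs I get about 115 km, within a few kilometers of NASA's 114.7 km; the exact result shifts by a few kilometers as the assumed distances change.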

Figure 3 shows my calculations. I obtained 116 km, which compares favorably with NASA's 114.7 km.

Figure 3: My Totality Path Width Calculation.

Posted in Astronomy | 5 Comments

US Farmland % By State

Quote of the Day

Write a thousand words a day and in three years you’ll be a writer!

Figure 1: Top 15 States By Farm Area Percentage.

I am preparing to drive out to Idaho to experience totality during the August 21, 2017 eclipse. I am choosing to view the eclipse from Idaho because my granddaughter (and her parents) live in western Montana, and I can stop there for a visit on my return to Minnesota. This will be a long drive through large states (Minnesota, North Dakota, Montana) that have a significant percentage of farmland. While planning my journey, I became curious about the farmland percentage in each of the fifty states. In this post, I compute the farmland percentage by taking the farm acreage in each state and dividing by the state's land area (water area removed).

Figure 1 shows the top fifteen states by farmland percentage based on data from the US Department of Agriculture (USDA). You can see the farmland percentage for all fifty states by opening the Excel workbook linked here. The data was processed in Excel using Get and Transform (also known as Power Query).

It does not surprise me at all that North Dakota is over 88% farmland, Montana is 64% farmland, and Minnesota is 50% farmland. The farms in each state are quite different: Montana is dominated by cattle ranches, North Dakota by grain farms, and Minnesota by dairy farms.
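The calculation behind each bar is a one-liner: farm acreage divided by land area, with a units conversion from square miles to acres. Here is a sketch using rough North Dakota figures – both numbers are approximations I supply for illustration, not the workbook's USDA values.

```python
ACRES_PER_SQ_MILE = 640

farm_acres = 39_300_000     # ND farm acreage (assumed, roughly USDA-scale)
land_area_sq_mi = 69_001    # ND land area with water removed (assumed)

# Convert the land area to acres and take the ratio
land_acres = land_area_sq_mi * ACRES_PER_SQ_MILE
farm_pct = farm_acres / land_acres * 100
print(f"North Dakota farmland: {farm_pct:.0f}%")
```

This lands near 89%, consistent with the "over 88%" figure quoted above.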

The US Census department also breaks down the country into geographic regions called divisions, which are defined in the following list:

• New England: Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, Vermont
• Middle Atlantic: New Jersey, New York, Pennsylvania
• East North Central: Illinois, Indiana, Michigan, Ohio, Wisconsin
• West North Central: Iowa, Kansas, Minnesota, Missouri, Nebraska, North Dakota, South Dakota
• South Atlantic: Delaware, Florida, Georgia, Maryland, North Carolina, South Carolina, Virginia, West Virginia
• East South Central: Alabama, Kentucky, Mississippi, Tennessee
• West South Central: Arkansas, Louisiana, Oklahoma, Texas
• Mountain: Arizona, Colorado, Idaho, Montana, Nevada, New Mexico, Utah, Wyoming
• Pacific: Alaska, California, Hawaii, Oregon, Washington

Figure 2 shows the farmland percentage by geographic division. The Pacific division has such a low farmland percentage because Alaska farms only 0.23% of its massive land area.

Figure 2: US Farmland Percentage by Geographic Division.

Cost Advantages of Electrical Generation Using Natural Gas

Quote of the Day

There’s an unpriced externality in the cost of fossil fuels. The unpriced externality is the probability-weighted harm of changing the chemical constituency of the atmosphere and oceans. Since it is not captured in the price of gasoline, it does not drive the right behavior. It’d be like if tossing out garbage was just free, and there was no penalty, and you could do it as much as you want. The streets would be full of garbage. We’ve regulated a lot of other things, like sulfur emissions and nitrous oxide emissions; it’s done a lot of good on that front.

— Elon Musk. I view this quote as a restatement of the Tragedy of the Commons.

Figure 1: Total Cost of Purchasing a kW-hr of Electrical Power vs Time and Fuel. The y-axis is expressed in units of mills ($0.001) per kW-hr.

I have been reading about how low natural gas prices have been affecting the coal mining industry. Coal mining is affected because those coal-fired electrical generation plants that can switch fuels are converting to natural gas to achieve significant cost reductions. Also, natural gas can fuel peaking generators that are designed to provide peak load support. The fuel conversions have increased to the point where they are significantly reducing the demand for coal. I had no idea what the price advantages were for natural gas over other fuels, so I surfed over to the US Department of Energy's (DoE) web site and found two tables that provided a summary of the maintenance, fuel, and operating costs for different types of electrical generation plants. The DoE combines the data for natural gas, solar, wind, etc. into a single category called "Gas Turbine and Small Scale." This type of data is best presented in the form of a graph, so I used Excel's Get and Transform (Power Query) to download the tables, combine them, and reformat the information for easy plotting. Costs are expressed in mills ($0.001) per kW-hr.

My Excel workbook is available here. It is a good example for people who want to (1) learn how to access data with a complex format, (2) use an inner join to combine data, and (3) make use of table transpose and unpivot operations.
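For readers who prefer code to Power Query, the inner-join and unpivot steps can be sketched in plain Python. The table fragments below are made-up placeholders standing in for the two DoE tables, not actual DoE figures.

```python
# Made-up fragments standing in for the two DoE cost tables [mills/kW-hr]
fuel_cost = [
    {"Year": 2008, "Gas": 8.09, "Coal": 2.07},
    {"Year": 2012, "Gas": 3.49, "Coal": 2.38},
]
om_cost = [
    {"Year": 2008, "GasOM": 3.6, "CoalOM": 4.1},
    {"Year": 2012, "GasOM": 3.7, "CoalOM": 4.3},
]

# Inner join on Year: keep only years present in both tables
om_by_year = {row["Year"]: row for row in om_cost}
joined = [{**f, **om_by_year[f["Year"]]}
          for f in fuel_cost if f["Year"] in om_by_year]

# Unpivot: one (Year, Series, value) record per data cell,
# which is what Power Query's "Unpivot Columns" command produces
tidy = [
    {"Year": row["Year"], "Series": key, "Mills_per_kWh": value}
    for row in joined
    for key, value in row.items()
    if key != "Year"
]
print(tidy[0])
```

Each row of the joined table becomes one record per data column, giving the long-form layout that charts most easily.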

Figure 1 shows the dramatic drop in the cost of electrical power produced by natural gas since 2008. Starting in 2012, electrical power generated from natural gas has cost roughly the same as or less than power from coal, with a smaller carbon footprint as well.

Figure 2 shows how the costs of natural gas, coal, and uranium have varied with time – the water for a hydroelectric plant is considered to be free. The drop in the price of electricity generated by natural gas is entirely accounted for by the price drop in fuel, since the plant maintenance and operating costs have not changed much (Figure 3).

Figure 2: Fuel Cost vs Time

Figure 3: Plant Maintenance and Operating Costs vs Time.

Posted in News Fact Checking | 2 Comments

100 Mile Square Solar Array Could Power US

Quote of the Day

The only people who say worse things about politicians than reporters do are other politicians.

— Andrew Rooney. We appear to be in the process of confirming Andy's statement.

Figure 1: Example of a Large Solar Array. (Source)

Elon Musk gave a talk at the National Governors Association where he stated that a 100-mile-square solar array built in a place like Nevada could generate an amount of electrical power equal to that of the present electrical grid. This seems like an easy calculation to verify, and I thought I would do that analysis here.

I arbitrarily assumed that the panel is going to be placed at Las Vegas – it seems like a place in Nevada that most people are familiar with. I then needed to determine the amount of solar energy available to a panel in Las Vegas. This calculation is greatly simplified by the existence of web sites that will tell you the solar irradiance at your location with respect to how you position your panel. My calculations assume tilting the panel toward the equator at an angle equal to the local latitude. This approach is simple and provides 18% more energy than just laying the panel flat on the ground.
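A back-of-the-envelope version of the check can be sketched in Python. The irradiance and efficiency figures below are my own assumptions for a latitude-tilted panel near Las Vegas, not values taken from my workbook.

```python
MILE_IN_M = 1609.34

side_m = 100 * MILE_IN_M    # a square array 100 miles on a side
area_m2 = side_m ** 2

irradiance = 5.7     # kWh/m^2/day, latitude-tilted panel near Las Vegas (assumed)
efficiency = 0.20    # nominal panel efficiency (assumed)

# Annual energy production, converted from kWh to TWh
annual_twh = area_m2 * irradiance * 365 * efficiency / 1e9
print(f"Annual energy: {annual_twh:.0f} TWh")
```

With these assumptions the array produces on the order of 10,000 TWh per year, comfortably above the roughly 4,000 TWh of electricity the US consumes annually, so the claim survives even with margin left for storage and transmission losses.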

Figure 2: A 100 Mile Square Solar Array in Nevada Could Power the US.

Elon did his homework. Assuming a nominal solar panel efficiency, a 100 mile square panel positioned in Nevada could generate enough energy for the US. Because power is needed 24 hours per day, there are enormous issues associated with energy storage, but the energy is there.

Posted in News Fact Checking | 1 Comment

US WW2 Submarine Tonnage Sunk Database

Quote of the Day

Almost every successful person begins with two beliefs: The future can be better than the present. And I have the power to make it so.

— David Brooks. This quote really speaks to me. The bedrock of success is the belief that you can make things better. At the core of this belief is a strong feeling of hope.

Figure 1: USS Gato, the lead boat of the most common class of US submarines during WW2.

While answering a recent question about the tonnage sunk by the top US submarine skippers during WW2, I realized that I had not made available my conversion to Excel of the Joint Army–Navy Assessment Committee (JANAC) data for vessels sunk by US submarines. The JANAC records are considered the official records because they were cross-checked with information from Japanese records.

The spreadsheet itself is from a course I taught last year on using Get and Transform (also known as Power Query). The raw data is from the Hyperwar web site – I often use the old WW2 records as an example of horribly formatted data that can be converted to a useful computer format using Python or Excel. The Hyperwar data appears to be from human-generated JANAC reports that were OCRed and converted to HTML. The Hyperwar site is a great resource, but the data does contain numerous conversion issues (e.g. commas turned into periods, extra Øs added to numbers). I cleaned up the obvious problems and cross-checked my results with data from the now-defunct Valor at Sea web site. The agreement was excellent.
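Cleanups like these boil down to a handful of find-and-replace rules. Here is a sketch in Python; the two patterns are illustrative examples of the kind of OCR damage involved, not the exact rules I applied.

```python
import re

def clean_tonnage(raw: str) -> int:
    """Clean an OCRed tonnage figure, e.g. '6.389' or '2Ø00', into integer tons."""
    s = raw.replace("Ø", "0")                     # OCR sometimes emits a slashed zero
    s = re.sub(r"(?<=\d)\.(?=\d{3}\b)", ",", s)   # period misused as a thousands separator
    return int(s.replace(",", ""))

print(clean_tonnage("6.389"))   # 6389
print(clean_tonnage("2Ø00"))    # 2000
```

After a pass like this, the remaining oddballs are few enough to fix by hand against a second source.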

This post makes this data available to those who are interested. With the Valor at Sea website offline, I could not find data summaries available anywhere else. Having it in spreadsheet form provides you the ability to generate custom reports. The data includes the specific ships sunk by each submarine. It does not include data for ships sunk because of the action of multiple submarines.

The Excel workbook is available here. There are no macros, but there are hyperlinks to various data sources.

Security Risks with Medical Radiation Sources

Quote of the Day

Doing statistics is like doing crosswords except that one cannot know for sure whether one has found the solution.

— John Tukey, statistician and data analyst extraordinaire. If you get the chance, read his book Exploratory Data Analysis. It is a gem.

Introduction

Figure 1: Cobalt-60 Use in a Gamma Knife. (Source)

I was reading the Washington Post this weekend when I stumbled upon a 22-July-2017 article about concerns that ISIS in Mosul had access to an old medical radiation source. This source, which contains the radioactive isotope cobalt-60, is of the type used in the treatment of cancer (Figure 1). However, cobalt-60 is extremely radioactive and could be used to build a dirty bomb. Fortunately, ISIS did not touch the source, but the concerns about a terrorist being able to use one of these radiation sources for a dirty bomb are real.

There have been encounters between radiation sources and unwary people – these encounters did not end well. For example, in one case, people tried to break down a cesium-137 radiation source for scrap. The incident ended with four people dead, twenty hospitalized, and 249 contaminated.

The Washington Post article mentioned three facts that we can easily verify using some simple math.

• The source contains 9 grams of cobalt-60, which generates a radiation level of 10,000 curies (Ci) when new. (Quote)
• A person standing three feet (~1 meter) from the unshielded source would receive a fatal dose in less than 3 minutes. (Quote)
• The source is 30 years old, so its radiation level is significantly diminished with respect to a new source. (Quote)

Background

Definitions

becquerel (Bq)
One becquerel is defined as the activity of a quantity of radioactive material in which one nucleus decays per second. (Source)
curie (Ci)
The curie (symbol Ci) is a non-SI unit of radioactivity; one Ci = 3.7 × 10^10 nucleus decays per second, i.e. one Ci = 3.7 × 10^10 Bq. (Source)
sievert (Sv)
Wikipedia defines the sievert (symbol: Sv) as the SI derived unit of equivalent radiation dose. The sievert represents a measure of the biological effect, and should not be used to express the unmodified absorbed dose of radiation energy, which is a physical quantity measured in grays.
Dose Equivalent (H)
Equivalent dose is a dose quantity H representing the stochastic health effects of low levels of ionizing radiation on the human body. It is derived from the physical quantity absorbed dose, but also takes into account the biological effectiveness of the radiation, which is dependent on the radiation type and energy. In the SI system of units, the unit of measure is the sievert (Sv). (Source)
Half-Life (tHL)
Half-life is the time required for a quantity to reduce to half its initial value. The term is commonly used in nuclear physics to describe how quickly unstable atoms undergo, or how long stable atoms survive, radioactive decay. (Source)

Equation 1 tells us the amount of a radioactive species we have remaining after time t, assuming that we had a 100% pure sample at time 0.

 Eq. 1 $\displaystyle N\left( {t,{{T}_{{HL}}},{{N}_{0}}} \right)={{N}_{0}}\cdot {{2}^{{-\frac{t}{{{{T}_{{HL}}}}}}}}$

where

• T_HL is the half-life of the radioactive species.
• t is the elapsed time since having a pure sample.
• N_0 is the initial amount of the substance. You can use mass or moles or even numbers of atoms.
• N is the amount of the radioactive species left after time t.
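Equation 1 takes one line of code. Using the published 5.27-year half-life of cobalt-60 together with the article's 30-year age and 10,000 Ci initial activity:

```python
T_HL = 5.27      # half-life of Co-60 [years]
t = 30.0         # age of the source [years]
A0 = 10_000.0    # initial activity [Ci], from the article

# Equation 1, normalized so N0 = 1 gives the surviving fraction;
# activity scales with the amount of Co-60 remaining
fraction = 2 ** (-t / T_HL)
activity = A0 * fraction
print(f"Remaining fraction: {fraction:.1%}, activity: {activity:.0f} Ci")
```

Roughly 2% of the original activity remains after 30 years, which is where the "2%" figure in the conclusion comes from.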

To obtain the level of radioactivity (i.e. decays per second), we need to take the derivative of Equation 1 (see Figure 3).

Analysis

Setup

Figure 2 shows how I set up the calculations in Mathcad 15.

Figure 2: Analysis Setup.

Calculations

Figure 3 shows my calculations that duplicate the results in the Washington Post article. The purple check marks indicate the specific results. Note that the Post article computes that you would get a lethal dose of radiation after ~2.5 minutes of exposure to a new Co-60 source, a calculation which I duplicate below. The 30-year-old source in question is shown to have its lethality reduced by a factor of more than 50. This means you would get a lethal dose of radiation from this source after ~2 hours of exposure (thanks to Ronan in the comments for pointing this out).

Conclusion

I had never thought about the potential security issues associated with medical radiation sources. I was surprised to see how intense the radiation levels were from a cobalt source. While the cobalt source mentioned in the article is 30 years old and at only about 2% of its initial radiation level, it is still a very dangerous item.

Appendix A: Quotes from the Article

In a draft report written in November 2015, research fellow Sarah Burkhard calculated that the radioactive cores, when new, contained about nine grams of pure cobalt-60 with a potency of more than 10,000 curies — a standard measure of radioactivity.

Fatal Dose Quote

A person standing three feet from the unshielded core would receive a fatal dose of radiation in less than three minutes.

Because cobalt-60 decays over time, the potency of the Mosul machines’ 30-year-old cobalt cores would have been far less than when the equipment was new, but still easily enough to deliver a lethal dose at close range, the report said.

Posted in News Fact Checking | 5 Comments

Insulation Opportunity Costs

Quote of the Day

To the modern American at the millennium, these carrier pilots of more than a half century ago -- Massey, Waldron, and Lindsey last seen fighting to free themselves in a sea of flames as their planes were blasted apart by Zeros -- now appear as superhuman exemplars of what constituted heroism in the bleak months after the beginning of World War II. Even their names seem almost caricatures of an earlier stalwart American manhood -- Max Leslie, Lem Massey, Wade McClusky, Jack Waldron -- doomed fighters who were not all young eighteen-year-old conscripts, but often married and with children, enthusiastic rather than merely willing to fly their decrepit planes into a fiery end above the Japanese fleet, in a few seconds to orphan their families if need be to defend all that they held dear. One wonders if an America of suburban, video-playing Nicoles, Ashleys and Jasons shall ever see their like again.

— Victor Davis Hanson on the heroes of the Battle of Midway. Unlike Hanson, I would not be concerned about today's youth. I watched many young people from my neighborhood volunteer to fight in Iraq and Afghanistan. My main concern is ensuring that government does not waste their lives on ventures unworthy of their sacrifice. I should mention that Max Leslie and Wade McClusky did survive the Battle of Midway.

Introduction

Figure 1: My Version of Finehomebuilding Insulation Comparison Table.

I have had a number of discussions with coworkers about the different types of wall insulation – some of these discussions have been documented in previous blog posts (e.g. here, here, here). There exist wide cost and performance disparities between the different wall insulation technologies. With respect to cost, I view fiberglass batts as a low-cost insulation option and the spray foams (open and closed cell) as high-cost options. All the options have their advantages and disadvantages. Fine Homebuilding Magazine (August/September 2017) has an excellent article by Martin Holladay that provides a spreadsheet-like analysis illustrating the trade-offs between open-cell and closed-cell foam nicely. My goal in this post is to go through the computational details of his analysis and to discuss his approach to choosing the best insulation for your application.

I like Martin's discussion because he focuses on the opportunity costs associated with the insulation choice. In Martin's analysis, he shows that the difference between open-cell and closed-cell R-values can be rather small relative to the cost differential – Martin mentions a $3K premium for closed cell over open cell to obtain a 1R to 2R improvement when insulating stud bays. Martin argues that you can get a better R-value return by spending that $3K on other insulation options, like adding exterior rigid foam – which I have used for some projects. The exact premium will vary depending on the house size, but I have encountered similar price differentials in my own work.

Background

Thermal Modeling

Most home insulation value is modeled using resistor analogs, as I show in Figure 2. The model in Figure 2 is very simple, but useful in that it models two important types of heat transfer in traditional construction:

• Losses through Framing (R_Framing)
In traditional framing, heat can pass directly between the inside and outside through the wood, which is called thermal bridging. To simplify the analysis, a fixed percentage of the wall construction, called k_Framing, is assumed to be subject to this form of heat loss. k_Framing is usually a number between 20% and 30%.
• Losses through the Stud Bay or Cavity (R_Cavity)
When framing exterior walls, the spaces between studs are normally filled with insulation. The percentage of heat loss through the stud bay is 1 − k_Framing.

Figure 2: Simple Thermal Modeling of Different Wall Constructions.

While the resistor values are calculated using the thermal conductivities of the materials involved, most manufacturers specify the R-value per inch of their materials. I will use these numbers for this analysis.

Analysis Assumptions

Holladay's analysis makes the following assumptions, all reasonable:

• Standard 2x4 and 2x6 wall construction practices
• Closed cell insulation is so dense as to be difficult to trim flat to the studs. This means installers usually do not completely fill the cavity. Assume closed cell insulation is filled to within 1/2 inch of the stud edge.
• The R-value of closed cell insulation is 6.5 R per inch. The R-value of open cell construction is 3.7 R per inch.
• 25% of the wall construction is thermally "bridged" by the framing members. The other 75% consists of stud cavities filled with insulation or openings for doors and windows, which are not modeled.
• The R-values of doors and windows are handled separately.
• Any exterior insulation is ignored.
• There is no insulation value to any wood that extends beyond the insulation.
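To make the arithmetic concrete, here is a sketch of the parallel-path calculation for a 2x6 wall using the assumptions above. The framing R-value per inch (~1.25 for softwood) is my own assumption, and I ignore the additional layers (sheathing, drywall, air films) that the full analysis includes, so these numbers illustrate the method rather than reproduce Figure 1.

```python
STUD_DEPTH = 5.5     # 2x6 stud depth [in]
K_FRAMING = 0.25     # fraction of the wall area that is framing

R_WOOD_PER_IN = 1.25                   # softwood (assumed)
r_framing = STUD_DEPTH * R_WOOD_PER_IN

def whole_wall_r(r_cavity):
    """Combine the framing and cavity paths as parallel thermal resistances."""
    u = K_FRAMING / r_framing + (1 - K_FRAMING) / r_cavity
    return 1 / u

r_closed = (STUD_DEPTH - 0.5) * 6.5   # closed cell, filled to 1/2 in of the stud edge
r_open = STUD_DEPTH * 3.7             # open cell fills the whole bay

print(f"Closed cell whole-wall: R-{whole_wall_r(r_closed):.1f}")
print(f"Open cell whole-wall:   R-{whole_wall_r(r_open):.1f}")
```

The whole-wall numbers sit much closer together than the center-of-cavity R-values because heat loss through the framing is identical for both walls and makes up a large share of the total.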

Analysis

Given the background information, the analysis is straightforward. I put everything into an Excel spreadsheet, which you can download here. The spreadsheet generates the table shown in Figure 1.

Conclusion

I always consider the return on my investment. Holladay entitled his article "Closed-cell foam between studs is a waste," which really grabbed my attention. I have been reluctant to invest in closed cell because of its high installation cost relative to the insulation benefit it provides. It looks like an energy expert like Holladay also has concerns.


Quote of the Day

A faith that makes losing a sin will make cheating a sacrament.

Introduction

Figure 1: Map of Guiana Space Centre. (Source)

Putting a satellite into orbit requires that you impart a velocity of ~17,000 mph to the satellite. Because the Earth is rotating, its surface velocity gives you a head start on achieving orbital velocity when you launch toward the east – the direction of the Earth's rotation. The closer you move your launch site to the equator, the more velocity you get from the Earth's rotation.

In this post, I will compute the surface velocity for the European launch site in Guiana and at the Cape Canaveral launch complex (example launch pad) in Florida. Figure 1 shows a map of the Guiana launch site, which is closer to the equator than Cape Canaveral. My objective in this post is to determine the advantage that the Guiana launch site has over Florida's Canaveral launch site.

I became interested in the importance of launch site location after reading this article. My results agree with theirs.

Background

Earth Ground Velocity

Equation 1 shows the formula that I used to compute the Earth's surface velocity at the two launch sites:

 Eq. 1 $\displaystyle {{v}_{{Ground}}}={{\omega }_{{Earth}}}\cdot {{r}_{{Earth}}}\left( {{{\theta }_{{Latitude}}}} \right)\cdot \cos \left( {{{\theta }_{{Latitude}}}} \right)$

where

• ω_Earth is the angular velocity of the Earth [rad/sec]. The angular velocity is computed using the formula ${{\omega }_{{Earth}}}=\frac{{2\cdot \pi }}{{{{T}_{{Sidereal}}}}}$, with T_Sidereal being the length of the Earth's sidereal day.
• r_Earth is the radius of the Earth at the latitude θ_Latitude [m].

Equation 2 shows how to compute the Earth's radius as a function of latitude in Mathcad 15, based on the WGS84 reference model. The exact formula I used is from this paper.

 Eq. 2 $\displaystyle {{r}_{{Earth}}}\left( \theta \right)=\sqrt{{\frac{{{{{\left( {{{a}^{2}}\cdot \cos \left( \theta \right)} \right)}}^{2}}+{{{\left( {{{b}^{2}}\cdot \sin \left( \theta \right)} \right)}}^{2}}}}{{{{{\left( {a\cdot \cos \left( \theta \right)} \right)}}^{2}}+{{{\left( {b\cdot \sin \left( \theta \right)} \right)}}^{2}}}}}}$, where a is the Earth's equatorial radius and b is its polar radius.

Launch Site Latitudes

The latitude of each launch site is what determines its velocity boost. I list their approximate latitudes here:

• Guiana Space Centre (Kourou, French Guiana): ≈ 5.2° N
• Cape Canaveral Air Force Station (Florida): ≈ 28.5° N

I should mention that the Cape Canaveral launch site consists of numerous launch pads. Figure 2 shows a map.

Figure 2: Map of Cape Canaveral Air Force Station. (Source)

Analysis

I want to compute the difference in the Earth's rotational velocity at the two launch sites. Figure 3 shows how to perform that calculation.

Figure 3: Earth Rotational Velocity at the Two Launch Sites: Guiana and Canaveral.

My analysis shows that the Guiana launch site has a velocity advantage of 126 mph, which is relatively small compared to the 17,000 mph needed to achieve orbit.
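As a cross-check, the calculation can be sketched with a spherical Earth (a single radius instead of the WGS84 model) and approximate launch-site latitudes:

```python
import math

SIDEREAL_DAY_S = 86_164.1   # length of the Earth's sidereal day [s]
R_EARTH_KM = 6_378.1        # equatorial radius, spherical approximation [km]

def ground_speed_mph(lat_deg):
    """Eastward surface speed due to the Earth's rotation at a given latitude."""
    omega = 2 * math.pi / SIDEREAL_DAY_S
    v_kms = omega * R_EARTH_KM * math.cos(math.radians(lat_deg))
    return v_kms * 3600 / 1.60934   # km/s -> mph

guiana = ground_speed_mph(5.2)       # Guiana Space Centre (approximate latitude)
canaveral = ground_speed_mph(28.5)   # Cape Canaveral (approximate latitude)
print(f"Guiana: {guiana:.0f} mph, Canaveral: {canaveral:.0f} mph, "
      f"advantage: {guiana - canaveral:.0f} mph")
```

This simplified version gives an advantage of roughly 122 mph, in the same ballpark as the 126 mph from the WGS84-based calculation in Figure 3.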

Conclusion

I was able to confirm the velocity advantage for the Guiana launch site over that of Cape Canaveral. The ideal launch site would be as near as possible to the equator with nothing but ocean to the east to ensure that failed rockets would drop into the ocean. Polar launches have different requirements – like clear ocean to the north. The US uses Vandenberg Air Force Base for those launches.