Mosquito Magnet Testing

Quote of the Day

20 years from now, the only people who will remember that you worked late are your kids.

— Something I saw on a Reddit post. I don't recall which one. Young parents need to remember this.


Introduction

Figure 1: Mosquito Magnet I am testing. (Amazon)

I do not like mosquitoes — who does? To reduce the number of mosquitoes at my Northern Minnesota cabin, I decided to buy a Mosquito Magnet (Figure 1). I bought it last year, and while it ran for a while, it soon gave me an error warning (fast flashing LED) and stopped working. I live in a remote location, and getting things serviced is difficult, so I put it away for when I had time to look at it.

Unfortunately, we have quite a mosquito population this year, and my family has demanded that I take active measures to reduce it. So I pulled the Mosquito Magnet out of storage and repaired it with the help of a YouTube video (Link).

It turns out I had a loose thermistor, and I was able to secure it in place. While repairing it, I also went through the cleaning process (Link).

I decided to do a little testing and gather a little data. In this post, I will look at the unit's rate of propane use. I will discuss its effectiveness in capturing mosquitoes in a later post.

Background

Analysis

Test Method

I am working this summer from my cabin's garage. During the day, I walk between the garage and my cabin multiple times. The Mosquito Magnet is along the path between the cabin and the garage, so I stop during my walk and use a fish scale to measure the propane tank weight.

I performed the test over four days because I suspected the flow rate would vary by the tank pressure. While I did not measure the tank pressure, I did measure the weight of propane in the tank, which should be related to the pressure.

Data Analysis

Propane Weight Calculation

Equation 1 can be used to calculate the weight of propane left in the tank.

Eq. 1 \displaystyle W_{\text{Propane}}=W_{\text{Tank}}-W_{\text{Tare}}

where

  • WPropane is the weight of propane remaining in the tank, which we compute using Equation 1.
  • WTank is the measured weight of the propane tank (my measurements). Government regulations limit the fill value on a "20 lb" tank to 80% of its volume rating for safety reasons. So my local propane vendor fills my tank to ~34 lbs, which means they put in 17 lbs of propane.
  • WTare is the empty weight of the "20 lb" propane tank. My tank has an empty weight of 17 lbs.

My tank weight measurement was subject to some variation because I was holding the propane tank in the air by the scale. Fortunately, I took lots of measurements over time, and the errors should average out.
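
Equation 1 is simple enough that a few lines of code cover the bookkeeping. Below is a minimal Python sketch; the scale readings are made-up values for illustration, not my actual measurements.

```python
# Minimal sketch of Equation 1: propane weight = measured tank weight - tare weight.
# The scale readings below are illustrative values, not my actual measurements.
TARE_LB = 17.0  # empty weight of my "20 lb" tank

tank_weights_lb = [24.7, 24.1, 23.6, 23.0]  # fish-scale readings taken during the day
propane_lb = [round(w - TARE_LB, 1) for w in tank_weights_lb]
print(propane_lb)  # [7.7, 7.1, 6.6, 6.0]
```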

Vendor Propane Burn Rate Specification

The Mosquito Magnet vendor states that a 20 lb propane tank should last ~3 weeks. We can compute the expected burn rate using Equation 2.

Eq. 2 \displaystyle r_{\text{Propane}}=\frac{W_{\text{Full Weight}}}{T_{\text{Specified Time}}}=\frac{17\ \text{lb}}{21\ \text{days}}=0.81\ \frac{\text{lb}}{\text{day}}

where

  • WFull Weight is the weight of propane in a full tank (17 lbs).
  • TSpecified Time is the run time specified by the vendor (21 days).

Visualization

Estimated Propane Use Per Day

The unit stopped running when the amount of propane in the tank dropped to ~2 lbs. Figure 2 shows the tank weight versus time on day 2 of the testing. Day 2 began with 7.7 lbs of propane in the tank, a couple of pounds shy of the halfway point of 9.5 lbs = (17 lbs + 2 lbs)/2. All of my data is in the worksheet attached here. The rate of use did vary by day: the more propane in the tank, the higher the burn rate.

Measured Mosquito Magnet Propane Usage

Figure 2: Measured Mosquito Magnet Propane Usage.
The tank had 7.7 lbs of propane at the start of the test.

Variation in Daily Propane Use with Propane Weight

Figure 3 shows how the propane burn rate varies with the weight of propane in the tank. The figure shows that the burn rate decreases as the tank depletes. I would estimate the average burn rate at the 9.5 lb midpoint as 0.74 lbs/day = 0.70 lbs/day (the rate at 7.7 lbs) + 0.02 (lbs/day)/lb ⋅ 2 lbs. This is pretty close to the vendor's value of 0.81 lbs/day, considering the crudeness of my measurement technique.
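
The estimate above is just a line in the remaining propane weight, evaluated at the 9.5 lb midpoint. A short Python sketch of that arithmetic, using the 0.70 lbs/day and 0.02 (lbs/day)/lb values quoted above:

```python
# Burn rate modeled as a line in the remaining propane weight, using the values
# quoted above: 0.70 lb/day at 7.7 lb of propane and a slope of ~0.02 (lb/day)/lb.
def burn_rate_lb_per_day(propane_lb, rate_at_7p7=0.70, slope=0.02):
    return rate_at_7p7 + slope * (propane_lb - 7.7)

print(round(burn_rate_lb_per_day(9.5), 2))  # ~0.74 lb/day at the 9.5 lb midpoint
```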

Figure 3: Variation in Daily Propane Use with Tank Weight.

Conclusion

The Mosquito Magnet does use roughly the vendor-specified ~0.81 lb/day of propane when averaged over a full tank. However, the rate of propane consumption varies with the amount of propane in the tank.

Posted in Cabin

Great Escape Nationalities

When you create a difference in someone's life, you not only impact their life, you impact everyone influenced by them throughout their entire lifetime. No act is ever too small. One by one, this is how to make an ocean rise.

Poet Danielle Doby, Ripples poem, found in I Am Her Tribe


Introduction

Figure 1: Location of Stalag Luft III (Wikipedia).

I recently watched a series of videos on WW2TV about the escape of 76 prisoners of war from Stalag Luft III in Germany (now in Poland), an event now known as The Great Escape. The story was immortalized in a big-budget Hollywood movie of the same name.

While the movie did a good job of describing the construction of the tunnel, much of the rest of the movie was pure fiction. Watching the movie as a boy did motivate me to read the book of the same name by the Australian author Paul Brickhill, a prisoner at the camp who assisted with the escape effort but was not one of the escapees.

One of the great inaccuracies of the movie was in the role of the American prisoners. While some Americans assisted early on with the tunnel operation, none were at the camp at the time of the escape (link). One of The Great Escape segments on WW2TV mentioned that most of the escapees were not British, which motivated me to look at the nationalities of those who escaped. This was a nice little Excel exercise whose results are presented below. For those who like to follow along, my workbook is here.

Background

Overview

The prisoners started three tunnels, called Tom, Dick, and Harry, but only Harry was used in the escape (Figure 2). The construction of the tunnels required an impressive level of organization, particularly since the camp was built on sand and the tunnels required shoring to prevent collapse. The plan was to free 200 prisoners, but various difficulties limited the number of escapees to 76. Of these, 50 were executed for escaping. My objective here is to break down the 76 escapees by nationality.

Tunnel Harry

Figure 2: Tunnel Harry of Great Escape Fame.

Reference Video

Figure 3 shows a video (part of a series) on The Great Escape. The content is excellent.

Figure 3: Good Video on the Canadian Contribution to the Great Escape.

Analysis

Data Source

Wikipedia lists the escapees, their nationalities, and those who were killed (here). I simply read the table using Excel.
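
For those who prefer Python to Excel, the same step can be done with pandas. The sketch below is only an outline under stated assumptions: you would need to substitute the actual Wikipedia URL linked above and check the table index and column name against the live page.

```python
# Rough Python equivalent of reading the Wikipedia escapee table and tallying
# escapees by nationality. The URL placeholder, table index, and column name
# are assumptions that must be checked against the page linked in the post.
import pandas as pd

URL = "<Wikipedia page with the escapee list, linked above>"
tables = pd.read_html(URL)   # grabs every HTML table on the page
escapees = tables[0]         # assume the escapee list is the first table

counts = escapees["Nationality"].value_counts()
print(counts)                # escapees per nationality, analogous to Figure 4
```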

Nationality Pivot Table

Figure 4 shows a screenshot of my pivot table. I list the nationalities of all seventy-six escapees and the fifty who were murdered. I highlighted in gold the nationalities where every escapee was killed. Why these nationalities?

Figure 4: Summary of Escapees: Murdered and Survived.

Conclusion

The Great Escape is an amazing story that is best learned from the written accounts by authors like Brickhill. The movie, while great cinema, is too full of inaccuracies to depend on it for facts.

Posted in History Through Spreadsheets, Military History

WW2 US Submarine Torpedo Firings Versus Time

Quote of the Day

If you can't describe what you're doing as a process, you don't know what you're doing.

W. Edwards Deming, guru of statistical quality control. I took a class from Dr. Deming in the early '80s. The older I get, the smarter he seems. He was kind of a cantankerous gentleman (for example, he railed constantly against management), but very wise on human behavior in the manufacturing environment.


Introduction

Figure 1: US Acoustic Homing Torpedo (Torpedo Mk 27).

I recently noticed that combinedfleet.com has excellent summaries of the WW2 US submarine patrol reports that are very easy to scrape for data. These patrol reports are interesting because they provide accurate data on the rate of torpedo firings by US submarines and some indication of the mix of torpedoes being fired. As with all WW2 records, not every patrol report recorded the weapons used, and it is difficult to know how accurate the hit counts are.

I will scrape the patrol report pages for 238 submarines and turn the data into graphs and tables that provide insight into the tempo of US submarine combat in WW2. My workbook is available here for those who like to follow along with the analysis. Note that I used a VBA routine to extract the URLs.

Background

WW2 Data Warning

This post is focused on using WW2 historical data to illustrate how to perform data scraping operations. I should note that the results I obtain here vary slightly from results published in other sources, which I show below. This is a common problem with WW2 sources. Often, the results are different because of OCR errors — to be clear, I do not know the source of the difference here.

Definition errors are also an issue, like what constitutes a torpedo (for example, an air-launched homing torpedo deliberately mislabeled as the Mk 24 Mine). Finally, some skippers simply did not say in their patrol reports what kind of torpedo they launched. They were busy at the time and can be forgiven for simple omissions.

Alternate Reference

The book Ship Killer (Wildenberg and Polmar) has two tables that contain similar information. As is typical for WW2 information, the data in the two tables do not agree in terms of weapons used and totals. However, in broad terms, the results are similar to what I obtained by scraping the Combined Fleet website.

Table 1 shows the number of torpedoes fired by type and their hit rates. Note that I generally do not give much credence to hit counts because of the difficulty of confirming the information, while the number of torpedoes fired is fairly easy to tally.

Table 1: Torpedoes Fired By U.S. Submarines In World War II.
Type Fired Hits Percentage
Mk 9, 10, 11 1,722 502 29%
Mk 14 6,852 2,437 36%
Mk 18 3,536 979 28%
Mk 23 2,058 822 40%
All others 139 39 28%
Total 14,307 4,779 33%

Table 2 shows the number of torpedoes fired by year and theater. Note that the Pacific theater dominated US torpedo usage.

Table 2: US Submarine Torpedo Effectiveness.
Theater Year Fired Hits Effectiveness
Pacific 14,264 4,767 33%
1941 104 13 13%
1942 1,926 554 28%
1943 3,761 1,298 34.5%
1944 6,108 2,329 38%
1945 2,365 573 24%
Atlantic 47 12 25.5%
1942 19 4 21%
1943 24 8 33%
1944 4 0 0%
Total 14,311 4,779 33%

Torpedo Variants

Table 3 shows the torpedoes listed in the patrol reports. Other weapons included in the report are the deck gun, burning, and ramming. I do not address those weapons in this post.

Table 3: Torpedoes Listed in Patrol Reports.
Torpedo Type Description
Acoustic Seeking Torpedo Probably a Mark 27 Cutie
Mark X Torpedo Obsolete, had a depth control issue
Mark X Type 3 Torpedo Obsolete, had a depth control issue
Mark XIV Torpedo Problematic US WW2 standard torpedo
Mark XIV Type 1 Torpedo Problematic US WW2 standard torpedo
Mark XIV Type 3 Torpedo Problematic US WW2 standard torpedo
Mark XIV Type 3A Torpedo Fixed US WW2 standard torpedo
Mark XV Torpedo Surface ship torpedo, launchable from submarine aft tubes
Mark XVIII Electric Torpedo US copy of German electric torpedo
Mark XVIII Type 1 Torpedo US copy of German electric torpedo
Mark XXIII Torpedo Single-speed Mk 14.

Analysis

Process

The analysis approach is straightforward:

  • Generate a list of URLs with submarine patrol data
  • Create a Power Query function that grabs the data from each URL.
  • Run the function on each submarine's URL.
  • Create a consolidated data file.
  • Generate required plots and tables.
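
The steps above are implemented with a VBA routine plus a Power Query function in the workbook. For readers who prefer Python, a rough equivalent of the same pipeline is sketched below; the URL list and the combinedfleet.com table layout are assumptions you would need to verify.

```python
# Rough Python equivalent of the pipeline above: read each submarine's patrol
# report page, pull its tables, and consolidate them into one data set.
# The URL list and table structure are assumptions, not the actual site layout.
import pandas as pd

patrol_urls = [
    "<combinedfleet.com patrol-report URL for each of the 238 submarines>",
    # ... generated separately, e.g., with the VBA routine mentioned above
]

frames = []
for url in patrol_urls:
    for table in pd.read_html(url):   # grab every HTML table on the page
        table["source_url"] = url     # remember which boat the rows came from
        frames.append(table)

patrols = pd.concat(frames, ignore_index=True)  # consolidated data file
print(patrols.head())
```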

Torpedoes Fired Versus Time

Figure 2 shows US torpedo firings by month during WW2. You can see that October 1944 was a very busy month. At the end of the war, the torpedo firings dropped dramatically because most Japanese ships had already been sunk.

Figure 2: US Torpedo Expenditures By Month.

Figure 2: US Torpedo Expenditures By Month.

We can generate a table of yearly data as shown in Table 4. This is very similar to Table 2 from an alternate source.

Table 4: Sub Patrol Torpedo Firings By Year.
Year Torps Fired
1941 104
1942 1,946
1943 3,804
1944 6,165
1945 2,401
Total 14,420

Torpedoes By Type

Table 5 shows the breakdown of the torpedoes fired by type. I assumed that "Acoustic Homing Torpedo" was a Mark 27 "Cutie" for this table.

Table 5: Sub Patrol Report Torpedo Breakdown By Type
Torpedo Claimed Hits Torps Fired Hit Percentage
Mark 14 2,452 6,977 35%
Mark 23 (One Speed Mark 14) 986 2,595 38%
Not Stated 664 2,321 29%
Mark 18 (Electric) 624 2,149 29%
Mark 10 31 157 20%
Mark 15 46 113 41%
Mark 27 Acoustic Homing 34 108 31%
Grand Total 4,837 14,420 34%

Conclusion

I learned a number of things during this analysis:

  • October 1944 was the peak month of US torpedo firings.
    This seems reasonable. The Japanese still had a significant maritime fleet, and the US had finally deployed a significant number of submarines armed with a working torpedo.
  • Mark 15 torpedoes were used on submarines early in the war.
    The Mark 15 is larger than the Mark 14 and could only be fired from the aft tubes (reference). I knew that torpedoes were in short supply early in the war, so it does not surprise me that they were used if available.
  • There were significant numbers of Mark 18 electric torpedoes fired.
    This is amazing considering that the Mark 18 was copied from a German design, then manufactured and deployed in significant numbers, all within the US's 44-month wartime involvement.
  • The Mark 23 was not a popular variant.
    It was a Mark 14 with the low-speed option removed. The low-speed option provided a significantly longer range. Early in the war, engagement ranges were relatively short and it made sense to remove the low-speed option. As the war progressed, ranges increased and the low-speed option became necessary.
Posted in Excel, History Through Spreadsheets, Military History

UK Warship Losses in the Mediterranean During WW2

Quote of the Day

Exit strategy is a false god. I have never known of a war in which other people had exit strategies. You don't exit a war – you either win, lose, or have a ceasefire.

Thomas Ricks. I have listened to numerous discussions on the importance of having an exit strategy. While the US has fought multiple wars in the last few decades, we never did seem to come up with an exit strategy for any of them. Maybe Ricks is correct …


Introduction

Figure 1: Fritz-X, an early German anti-ship weapon that was a terror in the Mediterranean. (Wikipedia)

Most of my naval history reading has been about the Pacific War. This means I have not read much about the Battle of the Mediterranean. I recently heard a historian (Drach) say that the UK lost 135 major warships in the Mediterranean Theater during WW2. I must admit that I was surprised at the high losses and decided to investigate further. This theater saw numerous major battles (for example, Taranto and Cape Matapan) and some real technology innovations (see Figure 1 for an example).

I decided to do a bit of research and see if I could find out more about the 135 warships. Fortunately, Naval-History.net provides a page that is an OCRed and uncorrected excerpt from British Vessels Lost at Sea, 1939-45, published by HMSO (Her Majesty's Stationery Office) in 1947. Unfortunately, like all OCRed WW2 records, it contains numerous typographic issues that had to be corrected.

I did find that 135 Royal Navy warships were lost during the Battle of the Mediterranean. This was a tough campaign. For comparison, the UK lost 45 submarines in the Med (see Figure 4), which is not much less than the 52 submarines the US lost during the entire Pacific War.

Naval-History.net provided an excellent web scraping example. The entire effort was implemented using RStudio and R Markdown. For those who like to follow along, the software is available on GitHub.

My favorite warship is HMS Warspite, a ship with an amazing record in both WW1 and WW2. Figure 2 below shows the result of a Fritz-X (Figure 1) impacting HMS Warspite. Fritz-X is a forerunner of modern anti-ship weapons.

Figure 2: Fritz-X Hitting Warspite in the Med. (Link)

Figure 2: Fritz-X Hitting Warspite in the Med. (Link)

Background

Warship Types Counted

The original loss record contains large warships, auxiliaries, landing craft, torpedo boats, and support craft – 460+ if you include all the landing craft. I decided to limit my counting to the following ship types:

  • Aircraft Carrier
  • Battleship
  • Corvette
  • Cruiser
  • Destroyer (both standard and escorts)
  • Monitor
  • Sloop
  • Submarine

One clearly could argue for other ships to be included, like minesweepers and minelayers. But I had to draw the line somewhere and this was where I drew it.

The definition of a warship is only one of the many problems with determining ship loss totals. Here are a few of the other problems associated with coming up with ship loss estimates:

  • Ships can be so damaged as to be unrepairable. Are they a loss?
    For example, the US records list 52 submarines as lost during WW2. However, 2 submarines were so heavily damaged that they were not able to continue in service. Because of these two damaged-beyond-repair boats, Wikipedia lists 54 submarines as lost by the US (see the note at the bottom of the table).
  • Ships can be refloated and repaired. How should these be tallied?
    The Italian Navy, in particular, was able to refloat and repair some of its losses.
  • Some ships are lost for causes other than battle, like grounding.

Because of these issues, it is common to find differences in lists of WW2 ship losses.

Fortunately, I was able to immediately confirm what Drachinifel said in the video (Figure 3). The UK lost 135 warships of the types I listed during the Battle of the Mediterranean.

Reference Video

Figure 3 contains an excellent briefing on the Italian Navy and Mediterranean Theater.

Figure 3: Military History Not Visualized interview with Drachinifel. He mentions the 135 ships lost total several times during this interview.

Analysis

The analysis details are rather complex and are well covered in my R Markdown document, which you can see in my GitHub repo. I will only cover results in this post.
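
As a flavor of the tabulation behind Figures 4 and 5, here is a minimal Python sketch. It assumes the OCRed loss records have already been cleaned into a table with ship type and loss date columns; the column names are hypothetical, and the real work is done in R in the repo.

```python
# Minimal sketch of the tabulation behind Figures 4 and 5, assuming the cleaned
# loss records have 'ship_type' and 'loss_date' columns (hypothetical names;
# the actual analysis lives in the R Markdown document in the GitHub repo).
import pandas as pd

losses = pd.read_csv("med_losses_cleaned.csv", parse_dates=["loss_date"])

by_type = losses["ship_type"].value_counts()          # Figure 4: totals by ship type
by_year_and_type = pd.crosstab(losses["loss_date"].dt.year,
                               losses["ship_type"])   # Figure 5: year vs ship type
print(by_type)
print(by_year_and_type)
```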

Summary Table

Figure 4 shows a screenshot from my Rmarkdown worksheet of the ship losses by ship type. Destroyers and submarines clearly bore the brunt of the losses.

Figure 4: UK WW2 Med Ship Loss Totals.

Chart of Losses By Year and Type

Figure 5 shows that the conflict was very intense during 1942. I should note that Italy surrendered on September 8, 1943. Even so, the Germans remained active in the Mediterranean into 1945, though with greatly diminished capabilities.

Figure 5: UK WW2 Med Losses vs Year and Ship Type.

Conclusion

This was an interesting exercise and I am going to be spending some time reading about the Battle of the Mediterranean over the next few weeks. I have been impressed with the performance of both the Royal Navy and the Regia Marina.

A number of iconic photographs came out of the Battle of the Mediterranean, for example, the explosion of the battleship HMS Barham (Figure 6) after she was torpedoed by a U-boat. She was lost off the coast of Egypt and her wreck has not been found.

Figure 6: Explosion of the HMS Barham after a torpedo attack by a U-boat. (Wikipedia)

Posted in History Through Spreadsheets, Naval History

Was Ulysses Grant a Butcher?

Quote of the Day

I know people who regret buying parachutes. Strangely, I've never met anyone who regrets not buying a parachute.

Seth Masket. Nice illustration of survivor bias.


Introduction

Figure 1: General Ulysses S. Grant, a leader who is becoming more appreciated with time. (Wikipedia)

When I was studying history in high school and at university, Robert E. Lee was described as a military genius and U.S. Grant as an unremarkable general who only won by butchering his own troops in bloody frontal assaults. I recently read an article called The Butcher's Bill by the Civil War researcher Edward Bonekemper that argues that it was Lee, not Grant, who lost more troops. For each general, Bonekemper presented casualty tables by battle to make his point. Unfortunately, the two tables are behind a paywall. However, you can find a blog post that shows them, along with an important typo correction (the magazine reversed the column titles).

I decided to try to duplicate Bonekemper's tables by doing a bit of web scraping. This post combines data from a GitHub repo by Jeffrey Arnold, which contains a fantastic amount of Civil War battle data, with some Wikipedia scraping to assign generals to battles. I should note that my casualty results are significantly different from Bonekemper's, I assume because there are large differences between sources of Civil War casualty data. The Arnold repo has data from a number of sources, and I chose his Wikipedia casualty file because it is easy to check.

For those who like to follow along, you can find my work in my GitHub repo.

Background

Definitions

Casualty
For this analysis, a casualty is defined as a soldier killed, wounded, or captured. Casualty data sources vary greatly for a number of reasons. For example, many sources just include those killed during the battle, but ignore those who died of their wounds later.
Battle
This analysis sums casualties from Civil War battles and sieges. Many sources work with campaigns (for example Overland Campaign), which are not addressed in the sources I used. One confusing aspect of working with battles is that they frequently are known by different names.
Campaign
A series of related battles. My analysis here does not work at the campaign level because my data sources did not work at that level.

Methodology

My approach was simple:

  • Read the Wikipedia casualty data from the Arnold repo
  • Use the Wikipedia URLs in the Wikipedia casualty data to scrape the list of generals for each battle
  • Join the casualty data and the general data
  • Filter for Lee and Grant
  • Sum their casualties and present a summary table
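
A minimal Python sketch of these steps is shown below. The column names are hypothetical stand-ins; my actual implementation is the R code in the GitHub repo.

```python
# Minimal sketch of the methodology above, assuming the casualty data and the
# scraped general assignments are tables with hypothetical column names
# ('battle', 'confederate', 'us', 'general'). The real implementation is in R.
import pandas as pd

casualties = pd.read_csv("wikipedia_casualties.csv")  # from the Arnold repo
generals = pd.read_csv("battle_generals.csv")         # from the Wikipedia scrape

merged = casualties.merge(generals, on="battle")      # join on battle name

summary = (merged[merged["general"].isin(["Ulysses S. Grant", "Robert E. Lee"])]
           .groupby("general")[["confederate", "us"]]
           .sum())                                    # the Table 3 style summary
print(summary)
```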

Methodology Issues

Bonekemper's Issues

While Bonekemper makes an interesting comparison, I am not sure this type of comparison makes sense. Grant and Lee fought two different wars:

  • Lee fought in the East, while Grant spent most of the war in the West.
    Different generals, different terrain, different situations. Casualty rates were higher in the East where Lee spent the entire war.
  • Lee did not assume command of the Army of Northern Virginia until 1-June-1862.
    He took over command from Joe Johnston, who was wounded at the Battle of Seven Pines.
  • Grant's combat service started as a Colonel in the Illinois militia and he had to work his way up the command chain.
  • When Grant and Lee faced off in 1864, Grant was mainly on offense and Lee was mainly on defense.
    Troops on the offense are more likely to take casualties.

Issues with My Analysis

  • There are many different casualty figures for each battle.
  • Civil War casualties are difficult to tally because of poor record keeping and differing definitions of what counts as a casualty.
  • I decided to use Wikipedia as my source as it is easy to scrape.
    Some folks grumble about using Wikipedia as a source. My main goal is to provide the students I tutor with R-based web scraping examples and this does the job nicely.

Lost Cause Lecture by Edward Bonekemper

Here is the lecture by Ed Bonekemper that motivated me to look further into the casualty comparison.

Figure 2: Bonekemper Lost Cause Lecture.

Analysis

For details on the analysis, please see my GitHub repo. I will only cover results in this post.

Grant Casualty Table

Table 1 shows my summary of the Confederate and US casualties that Wikipedia lists for US Grant.

Table 1: Ulysses S. Grant Casualty Summary
Battle Confederate US
Battle of Appomattox Court House 28,305 164
Battle of Belmont 641 607
Battle of Champion Hill 3,840 2,441
Battle of Cold Harbor 5,287 12,737
Battle of Fort Donelson 13,846 2,691
Battle of Fort Henry 79 40
Battle of Jackson, Mississippi 850 286
Battle of North Anna 1,552 2,623
Battle of Port Gibson 787 861
Battle of Shiloh 10,699 13,047
Battle of Spotsylvania Court House 12,687 18,399
Battle of the Wilderness 11,033 17,666
Battle of Totopotomoy Creek 1,593 731
Chattanooga Campaign 7,000 11,648
Second Battle of Corinth 8,466 5,040
Second Battle of Petersburg 4,000 11,386
Siege of Vicksburg 65,394 9,670
Third Battle of Petersburg 4,250 3,500
Siege of Corinth 1,000 1,000
Total 181,309 114,537

Lee Casualty Table

Table 2 shows my summary of the Confederate and US casualties that Wikipedia lists for Robert E. Lee.

Table 2: Robert E. Lee Casualty Summary
Battle Confederate US
Battle of Antietam 20,632 24,802
Battle of Appomattox Court House 28,305 164
Battle of Beaver Dam Creek 1,484 361
Battle of Chaffin's Farm 2,000 3,357
Battle of Chancellorsville 12,764 17,197
Battle of Cheat Mountain 90 88
Battle of Cold Harbor 5,287 12,737
Battle of Cumberland Church 255 655
Battle of Darbytown and New Market 700 458
Battle of Fort Pulaski 385 1
Battle of Fredericksburg 4,151 12,653
Battle of Gaines's Mill 7,993 6,837
Battle of Gettysburg 23,231 23,055
Battle of Glendale 3,673 3,797
Battle of Malvern Hill 5,650 2,100
Battle of Mine Run 680 1,272
Battle of North Anna 1,552 2,623
Battle of Oak Grove 441 626
Battle of Salem Church 4,935 4,611
Battle of South Mountain 2,685 2,325
Battle of the Crater 1,491 3,798
Battle of the Wilderness 11,033 17,666
Battle of Totopotomoy Creek 1,593 731
Battle of White Oak Road 800 1,870
Second Battle of Bull Run 8,300 10,000
Second Battle of Deep Bottom 1,500 2,899
Second Battle of Petersburg 4,000 11,386
Second Battle of Rappahannock Station 1,670 419
Third Battle of Petersburg 4,250 3,500
Total 161,530 171,988

Grant and Lee Casualty Summary Table

Table 3 summarizes my web scraping of the casualties.

Table 3: My Casualty Summary for Lee and Grant
General Confederate US
Grant 181,309 114,537
Lee 161,530 171,988

Bonekemper's Results

Table 4 summarizes Bonekemper's casualty tally.

Table 4: Bonekemper's Casualty Summary for Lee and Grant
General Confederate US
Grant 190,760 153,642
Lee 208,090 240,322

Conclusion

I was able to generate casualty tables by general based on the Wikipedia information. My results do show that troops under Grant suffered fewer casualties than those under Lee, which agrees with Bonekemper's results. However, my casualty totals for each general in each category (Confederate, Union) are lower. I am sure this is because of different sources being used for casualty totals.

I am not convinced that either Grant or Lee could be characterized as butchers. They were doing the best they could under difficult circumstances. Both made mistakes that cost soldiers their lives. If you need examples, just look at Pickett's Charge (Lee) and Cold Harbor (Grant).

What have I learned from this exercise? I am now convinced that my academic exposure to the Civil War was strongly influenced by the Lost Cause Movement, which resulted in a pro-Confederacy slant to my early education. My recent reading has called into question most of what I was taught about the Civil War while in school.

Posted in Excel, History Through Spreadsheets, Military History

Duplicate Drach's U-Boat Tonnage and Sunk Charts

Quote of the Day

He woke up every morning, and said "It's a great day to retreat, which way shall I go?"

— Historian Gary Gallagher on Confederate General Joseph E. Johnston. Gallagher has really stirred my interest in the US Civil War. He is a master lecturer and has figured out how to engage his audience. I look forward to each of his presentations.


Introduction

Figure 1: Armchair Admirals' Program where Drach is Presenting His Charts. Drach is shown in the bottom square.

I am a huge fan of Drachinifel's naval history channel. The other day, Drach was participating in the Armchair Admirals' program, during which he presented two charts on the Battle of the Atlantic that I had never seen before: (1) a chart of tonnage sunk by U-boats versus time and (2) a chart of U-boats sunk versus time. The unique aspect of the charts was that the data points were colored based on whether the Enigma cipher had been broken at the time and whether centimetric (microwave) radar had been deployed. These charts really got me thinking about the impact of technology on the struggle against the U-boats.

I decided to duplicate the two charts in Excel since they are simple. For those who like to follow along, my workbook can be downloaded here.

Background

Here is the Battle of the Atlantic panel discussion where Drach presented his two charts.

Figure 2: Drach's charts are presented starting at time 1:12:52.

Analysis

Data Source

Drach did not say where he got his data, so I decided to use a table from Wikipedia that summarizes all ship losses by cause and month of loss (link). The data appears to be the same as what Drach plotted.

While this table has the numerical data I need, it does not contain the dates when Enigma was broken (there really was no one date) or when centimetric radar was deployed. I chose the dates based on:

  • The capture of German Weather Ship Lauenburg, which provided Enigma codebook data at the end of June 1941.
  • The deployment of the ASV Mk III centimetric radar during May 1943. This radar made U-boats detectable by aircraft.

Given these two dates, I can now generate the graphs by:

  • Using Power Query to extract the data from the Wikipedia page.
  • Using Excel to reformat the data into three series based on the month.
  • Plotting the data using standard Excel charts.
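
A minimal Python sketch of the reformatting step is shown below; my actual implementation is the Power Query/Excel workbook, and the input column names here are assumptions standing in for the scraped Wikipedia table.

```python
# Minimal sketch of the "three series" step above: tag each month with an era
# based on the two dates chosen in the post (Enigma material captured at the end
# of June 1941; ASV Mk III centimetric radar deployed in May 1943). The input
# columns ('month', 'tonnage_sunk') are assumed names, not the Wikipedia headers.
import pandas as pd

ENIGMA_BREAK = pd.Timestamp("1941-07-01")
CM_RADAR = pd.Timestamp("1943-05-01")

def era(month):
    if month < ENIGMA_BREAK:
        return "Before Enigma break"
    if month < CM_RADAR:
        return "Enigma broken"
    return "Enigma broken + centimetric radar"

losses = pd.read_csv("uboat_war_by_month.csv", parse_dates=["month"])
losses["era"] = losses["month"].apply(era)

# One column per era, ready to plot as three separately colored series.
series = losses.pivot(index="month", columns="era", values="tonnage_sunk")
print(series.head())
```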

Graphics

Tonnage Sunk Per Month

Figure 3: Ship Tonnage Per Month By U-boats.

U-boats Sunk Per Month

Figure 4: Number of U-boats Sunk Per Month.

Conclusion

There are two things I glean from these graphs:

  • The breaking of the Enigma code seems to have had no effect on tonnage sunk, but it did mark the start of an increase in the rate of U-boat sinkings.
  • It is unclear whether centimetric radar had any impact on tonnage sunk, but it did seem to increase the rate at which U-boats were sunk.

The increase in U-boat losses inevitably led to a reduction in tonnage sunk. The toll on U-boat crews was terrible; 70% of the crewmen died during the war (link).

Posted in Excel, History Through Spreadsheets, Military History, Naval History

Excel Generated Traceability Matrix

Quote of the Day

The easiest way to solve a problem is to deny it exists.

Isaac Asimov. Hmmm … Describes a lot of the goings-on in Washington DC.


Introduction

Figure 1: Requirements Flowdown Structure.

I spend much of my time writing system requirements for the company's hardware, software, and firmware staff. My system requirements are derived from user requirements that I receive from our Marketing group.

I recently needed to generate a table that showed how every user requirement generated by our Marketing Department was mapped to one or more system requirements. This table is known as a flowdown table. For all sorts of reasons, none of them good, our requirements database could not generate the report. I decided that Excel would be the appropriate tool to generate the table we required. I should note that a Python version was also developed and will likely be used in the future.

In general, every user requirement will map to multiple system requirements that are contained in multiple system documents. The mapping between user and system requirements makes it easy to see which system documents are involved in meeting each user requirement.

The report format I generated was easy to implement using Excel and its spillable range feature. For those who want to follow along, my Excel workbook is here.

Background

Definitions

User Requirements
Stakeholder requirements identified by Product Marketing.
Stakeholder
A group with a vested interest in the requirements (user and system). This includes external and internal customers (Engineering, Production, Test, etc.).
Requirement Traceability
The ability to provide objective evidence that every user requirement corresponds to a lower-level requirement (system, subsystem, or test). This ensures that the product meets the stakeholder's needs.
Coverage
The percentage of user requirements that have corresponding lower-level requirements. Normally, a program wants 100% coverage. In practice, programs always have time limits, and some user requirements may need to be deferred to follow-on development efforts.

Requirements Structure

My task is to prove that every user requirement maps to a system requirement in one or more system documents. Figure 2 illustrates our document tree. The only unusual aspect of the document tree is that we have one system requirements document for each version of the product, and there are always multiple versions. This results in system requirements documents with substantial duplication. We have plans to reduce this duplication, but we are not ready to implement that approach yet.

Figure 2: Document Tree.

Analysis

Requirement Link Table

Figure 3 illustrates how the user requirements map to the system requirements. One user requirement can appear in multiple system requirement documents.

Figure 3: Very Small Example of A Requirement Link Table.

Requirement Table

Our requirement database can generate a table version (Figure 4) of the graph shown in Figure 3. We will convert this table into our desired format, which is shown in Figure 5.

Figure 4: Table View of Requirement Links.

Output

Figure 5 shows the output that I need to generate. In reality, there are hundreds of user requirements and thousands of system requirements spread across roughly 9 system documents. I think of this table as a pivot table for text data.

Figure 5: Output Generated.

Implementation

I solved this problem using Excel's spillable ranges to generate:

  • list of unique user requirements (table row indices)
  • list of unique system documents (table column indices)
  • for each cell (which corresponds to a combination of user requirement and system document), a list of the system requirements in that document that address the user requirement.
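
A Python version of the same idea might look like the pandas sketch below; the column names are hypothetical stand-ins for the link table of Figure 4, and the post's actual deliverable is the Excel workbook.

```python
# Minimal sketch of the "pivot table for text" idea above, assuming the link
# table of Figure 4 has columns 'user_req', 'system_doc', and 'system_req'
# (hypothetical names; the post's actual implementation is the Excel workbook).
import pandas as pd

links = pd.read_csv("requirement_links.csv")

trace = links.pivot_table(index="user_req",          # one row per user requirement
                          columns="system_doc",      # one column per system document
                          values="system_req",
                          aggfunc=lambda reqs: ", ".join(sorted(reqs)),
                          fill_value="")             # blank cell = no coverage
print(trace)
```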

Figure 6 shows the formulas used. They may be easier to view in the workbook.

Figure 6: Output Implementation.

Conclusion

The traceability table has been generated using both Python and Excel. My management has been more interested in using Excel because more of the staff are familiar with that tool. I have used the Excel version on two projects and it has worked well. We will now be working on formalizing a process for using the tool.

Posted in Excel, Management

MacArthur's Photo on Nimitz's Desk?

Quote of the Day

Asking for help isn’t giving up, it is refusing to give up.

Charlie Mackesy


Figure 1: Autographed Photograph of Douglas MacArthur to Chester Nimitz. (Naval History and Heritage Command)

I have been reading a couple of books about Admiral Chester Nimitz and his conduct of the Pacific War during WW2. The books showed several pictures of Nimitz in his office, and you could very clearly see that Nimitz had a photo of General Douglas MacArthur on his desk. Nimitz has impressed me as a person who disdained ostentatious and narcissistic behavior, exactly the opposite of MacArthur. To me, Nimitz personifies the model of a manager who takes less than his share of the credit and more than his share of the blame. Why would Nimitz have a framed photograph of MacArthur on his desk?

A little bit of googling gave me the answer. Rear Admiral Edwin T. Layton, the Pacific Fleet's head of intelligence, told Nimitz's biographer that Nimitz kept a framed photograph of MacArthur on his office desk throughout the war. Layton asked Nimitz why he kept that photograph on his desk. Nimitz's response was priceless (link).

Layton, I'll tell you. That's to remind me not to be a horse's ass and to make Jovian pronouncements complete with thunderbolts.

Rather than looking for inspiration, Nimitz wanted a reminder about staying grounded.

It is interesting what personal items people keep on their desks. I have nothing on my desk; I have worked hard over my career to develop the habit of a clean desk. My coworkers' desks usually have photos of their spouses and children, and sometimes their dogs.

Some famous people have office items that have become legendary.

  • Wernher von Braun
    Desktop placard with the saying "Late to bed, early to rise, work like hell, and advertise." Wernher was not going to make the mistake of other rocket pioneers who did not pay attention to public opinion.
  • Harry Truman
    Desktop placard with the saying "The Buck Stops Here." The phrase reminded Truman that he has to make the decisions and accept the ultimate responsibility for those decisions.
  • John Glenn
    8-inch wrench from his father's plumbing shop to remind himself of where he came from.
  • General Mark Hertling
    He keeps a small box with the names of all the soldiers who died while under his command. I teared up as I listened to him tell that story.
Posted in Management, Military History, Naval History, Personal

Sunken Ships at Ironbottom Sound

Quote of the Day

Why the US Navy performed so poorly in sea battles during the early part of World War II:

  1. The United States failed to grasp that the killing weapon was the torpedo.
  2. The United States had no tactics suitable for night battle at close quarters.
  3. The United States was slow to learn. Because of the rapid turnover of tactical leaders, the pace of the battles overwhelmed the Americans.
  4. Above all, the United States did not exploit its potentially decisive radar advantage.

— My paraphrase of Capt. Wayne Hughes from his book Fleet Tactics.


Introduction

Figure 1: WW2 Ships Sunk in Ironbottom Sound. (Wikipedia)

I just finished watching a series of videos on the Guadalcanal Campaign by Drachinifel, whose work is superb (Figure 2). The marines derisively referred to this campaign as Operation Shoestring because of the resource limitations. Things were no better for the sailors. Unlike many WW2 island campaigns, more sailors died in the battles than ground troops (link). The Allies, and in particular the US Navy (USN), had to learn the hard way that the Imperial Japanese Navy (IJN) was a force that deserved respect. Many Allied ships were sunk while learning this lesson.

Ironbottom Sound was called Savo Sound before WW2, but the number of ships sunk there was so staggering that it is now commonly called Ironbottom Sound. This post will extract the names of the sunken ships from the image in Figure 1 using Optical Character Recognition (OCR). I will then create some tables that look at the losses by ship type and country of origin.

For those who like to follow along, my spreadsheet is here.

Background

Scope

I am looking at the ships listed in Figure 1. Some ships officially listed as lost at Guadalcanal are not shown in Figure 1. For example, the USS Benham is not shown but was seriously damaged on 15-Nov-1942 and ended up being sunk by Allied gunfire. The USS Jarvis and USS Juneau are also not shown. I do not know why, but their locations may not be known.

Battle of Savo Island

The video shown in Figure 2 provides a good overview of the Battle of Savo Island, which may be the worst loss in USN history and contributed four wrecks to Ironbottom Sound. The tale of this battle highlights many of the issues that the Allies would need to resolve quickly.

Figure 2: Good Video Briefing on Early Guadalcanal Campaign.

This battle also shows the importance of leadership. If you want to see an example of how not to lead a naval force, read about Captain Howard Bode.

Analysis

OCR

I used the online and free Convertio tool to generate a spreadsheet of all the text in Figure 1. I needed to delete the lightly colored geographic names from the conversion.

Pivot Tables

Once the OCR output was cleaned up, I augmented the image data with naval service and ship type information, and then made pivot tables of the result.
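
A pandas version of this step takes only a few lines; the sketch below assumes the cleaned OCR output has been saved with navy and ship type columns (hypothetical names), while my actual implementation is the Excel workbook.

```python
# Minimal sketch of the pivot-table step, assuming the cleaned OCR output has
# been augmented with 'navy' and 'ship_type' columns (hypothetical names).
import pandas as pd

wrecks = pd.read_csv("ironbottom_wrecks.csv")

print(wrecks["navy"].value_counts())        # Figure 3: wrecks by navy
print(wrecks["ship_type"].value_counts())   # Figure 4: wrecks by ship type

usn = wrecks[wrecks["navy"] == "USN"]
print(usn["ship_type"].value_counts())      # Figure 5: US Navy wrecks by type
print(usn[usn["ship_type"] == "Cruiser"])   # Figure 6: US Navy cruiser wrecks
```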

Wrecks By Nation/Service

Figure 3 shows the number of wrecks by the naval service. Notice how the USN took a real beating.

Figure 3: Ironbottom Sound Wrecks By Navy.

Wrecks By Ship Type

Figure 4 shows the wrecks by type of ship. Among the warships, destroyers and cruisers dominate the losses. I should mention the loss of the oiler Kanawha. The oilers were critical to the US Navy maintaining its tempo of operations. Early in WW2, the US Navy was seriously short of oilers, particularly fleet oilers capable of Underway Replenishment (UNREP). For example, the loss of the fleet oiler USS Neosho at the Battle of the Coral Sea was considered almost as serious as the loss of the carrier USS Lexington.

Figure 4: List of Ironbottom Sound Wrecks By Ship Type.

US Navy Wrecks By Type

Figure 5 shows the US Navy wrecks by type. Of the seven cruiser wrecks at Ironbottom Sound, six are from the US Navy and the seventh is Australian. The US cruisers really took a beating, particularly from torpedoes like the IJN Long Lance.

Figure 5: US Navy Wrecks By Type.

US Navy Cruiser Wrecks

Figure 6 shows the US Navy cruiser wrecks. One of the wrecks is the bow of the USS Minneapolis, which was separated from the rest of the ship by a Long Lance torpedo hit. The USS Minneapolis was repaired and served out the rest of the war.

Figure 6: US Navy Cruiser Losses.

Conclusion

While the Guadalcanal Campaign was brutal, some lessons probably could not be learned any other way than through combat with the enemy. Here are some of the key things that the Allies had to learn:

  • Night tactics
    The IJN ruled the night in the early battles. The Allies had better radar but did not know how to properly use it.
  • Radar tactics
    Early radar was tricky to use for fire control, but properly used it was decisive in night battles. The Allies had a pioneer in their midst, Admiral Willis Lee, and needed to learn from him. He demonstrated his prowess during the Second Naval Battle of Guadalcanal.
  • Identify wartime leaders
    Wartime leaders are different from peacetime leaders and must be identified as soon as possible. This problem existed with both the naval and ground forces.
  • Develop tactics relevant to the time
    Night fighting, long-range torpedoes, and the need for multi-national naval forces required the Allies to develop new tactics.

I would argue the USN, in particular, did not learn quickly. However, there are people who might disagree (for example, Trent Hone in his excellent book Learning War).

Appendix

Excerpt from Battles That Changed American History: 100 of the Greatest Victories and Defeats by Spencer Tucker (link).

Figure 7: Tucker Excerpt.

Posted in Excel, History Through Spreadsheets, Naval History

US Navy Ship Numbers Versus Time

Quote of the Day

The world always seems brighter when you've just made something that wasn't there before.

Neil Gaiman. Yup – this is what I love about engineering. I get to make new things all the time.


Introduction

Figure 1: Landing Craft, Vehicle, Personnel (LCVP). The most common type of USN vessel during WW2.

I have been researching some of the work of Alfred Thayer Mahan, a US Navy officer and strategist who was an influential voice in 19th-century military circles. He was a strong advocate for the role of sea power in securing economic opportunity (link). This research has motivated me to determine how US Navy ship numbers have varied with time.

Mahan wrote when the US had a very weak navy, and he advocated for increasing the navy's number of capital ships (link) and foreign bases. Interestingly, Mahan was a rather poor ship captain and was involved in a number of collisions (link). I always find it interesting to encounter strategists who are not particularly good at low-level functions, yet we usually promote people based on their low-level performance. Similarly, Nimitz was a good strategist but, like Mahan, I would not want him conning my ship (link).

My reading and lecture watching on Mahan have motivated me to look at how the US Navy grew and shrank over the years. Fortunately, the Naval History and Heritage Command (NHHC) has an excellent page on the size of the US Navy over time. Unfortunately, the data is scattered throughout the page and must be scraped so that I can consolidate and graph it. This post is about using Power Query to scrape the data from the page and generate a graph in Excel of the number of active ships in the US Navy over time.

For those who like to follow along, my Excel workbook is here.

Background

Mahan Background

My analysis was motivated by a statement in this YouTube lecture on Mahan where John Kuehn mentioned that the US Navy in the late 1800s had fewer than 40 ships. I was surprised at that number and wanted to investigate further.

Figure 2: Good Mahan Video Briefing.

Analysis

Approach

The NHHC website is a bit odd in that it has broken up the US Navy ship number data into multiple tables, each containing a varying number of years' worth of ship type and count data. I could use R or Python to scrape the page, but I am in a hurry, and Power Query grabs web tables very cleanly and quickly. Here is the process:

  • Grab all the tables on the page.
  • Filter for the tables with ship number data.
  • Use a PQ function to grab the total active ship number data for each year in each table.
  • Consolidate all the tables.
  • Generate the graph.
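
A rough Python equivalent of these steps is sketched below; my actual implementation is the Power Query workbook, and the URL placeholder, table layout, and "Total Active" row label are assumptions that would need to be checked against the NHHC page.

```python
# Rough Python equivalent of the Power Query steps above: pull every table from
# the NHHC page, keep the ones containing a "Total Active" row, and collect the
# year / total-active pairs. The URL placeholder, table layout, and row label
# are assumptions that must be checked against the live page.
import pandas as pd

URL = "<NHHC US ship force levels page linked in the post>"

records = []
for table in pd.read_html(URL):
    table = table.astype(str)
    mask = table.iloc[:, 0].str.contains("Total Active", case=False, na=False)
    if not mask.any():
        continue                          # not a ship-number table; skip it
    totals = table.loc[mask].iloc[0, 1:]  # remaining columns are years
    for year, count in zip(table.columns[1:], totals):
        records.append({"year": year, "total_active": count})

ships = pd.DataFrame(records)
print(ships.head())
```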

Graph

Figure 3 shows my plot of the scraped ship number data.

Figure 3: US Navy Total Active Ships Versus Time.

Figure 3: US Navy Total Active Ships Versus Time.

Observations

Here is what the graph tells me:

  • The US Navy ship count at the end of WW2 is amazing.
  • The US Navy ship count two years after the end of WW2 was equally amazing.
  • The most common WW2 ship type was "amphibious" and the most common amphibious ship was a Landing Craft, Vehicle, Personnel (LCVP), which makes sense. The LCVP is shown in Figure 1.
  • The US Navy had a small increase in ship numbers during WW1, but US participation was so short that many of those ships were not finished by war's end.
  • The current US Navy ship count is actually relatively low.

Conclusion

I am going to use this as an exercise for Excel students because I was able to solve a relatively tough problem very quickly (I had data in about 10 minutes). The problem provides students with experience in:

  • Power Query web scraping
  • Cleaning up web tables
  • Writing a Power Query M-function
  • Generating a graph from a table
Posted in Excel, Military History