## Using Excel to Convert a Number List to Dashed String of Ranges

Quote of the Day

Gratitude is riches. Complaint is poverty.

— Doris Day. This quote is another expression of the Benedictine philosophy.

Figure 1: Yellow highlight shows the function output for the number list shown.

While working on a test report for an aircraft manufacturer this week, I needed to convert a large number of number lists to strings of dashed ranges. For illustration, suppose you are given the list {1, 2, 3, 5, 6, 7, 8, 10}. Converting this list to a dashed set of ranges means generating the string "1-3, 5-8, 10." Figure 1 shows another example with an optional prefix added to each number.

The need for this operation is more common than you might think. I usually see the dashed string format used on Bills of Materials (BOMs) for listing out the reference designators for specific part numbers, like resistors. A PCB BOM might have multiple 10Ω resistors with reference designators listed as "R1-R3, R5-R8, R10." Normally, the schematic capture system would generate the reference designators in this format automatically, but I have worked in situations where the parts lists were in Excel and the lists needed to be converted to dashed-range strings.

Today's situation involved creating a dashed range list of the various graphs in a test report that had calibration and data charts interspersed. The customer wanted their data formatted this way and I complied – the life of a contractor.

My workbook is attached here. The VBA code is shown below.
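The workbook uses VBA, which is not reproduced here. As a sketch of the same range-collapsing algorithm, here is a Python version (the function name and prefix argument are my own choices, not from the workbook):

```python
def to_dashed_ranges(numbers, prefix=""):
    """Collapse a list of integers into a dashed-range string,
    e.g. [1, 2, 3, 5, 6, 7, 8, 10] -> "1-3, 5-8, 10"."""
    nums = sorted(set(numbers))
    pieces = []
    i = 0
    while i < len(nums):
        j = i
        # extend j while the values remain consecutive
        while j + 1 < len(nums) and nums[j + 1] == nums[j] + 1:
            j += 1
        if j > i:
            pieces.append(f"{prefix}{nums[i]}-{prefix}{nums[j]}")
        else:
            pieces.append(f"{prefix}{nums[i]}")
        i = j + 1
    return ", ".join(pieces)
```

With the prefix argument set to "R", the same routine produces BOM-style reference designator strings such as "R1-R3, R5-R8, R10".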

## US Iron and Aluminum Mining During WW2

Quote of the Day

Nothing. I keep other people awake at night.

— General Jim Mattis. His answer to the question, "What keeps you up at night?"

Figure 1: Hematite Crystal. I used to find smaller versions of these crystals when I was a boy. (Wikipedia)

My vacation/retirement cabin is in the iron mining region of Minnesota. The rock throughout the area shows the reddish hue of iron. I recently heard some old timers talking about how the intensity of the mining operations during WW2 took the last of the high-grade iron ore (hematite) and left only low-grade ore (taconite). This comment made me curious about mining during WW2.

Because metals were so important to the war effort during WW2, I thought it would be interesting to look at how the war affected iron and aluminum mining. Since I have often heard people say that WW2 pulled the US out of the Great Depression, this data may help illustrate what really happened.

All of the data is from the US Census Bureau's The Historical Statistics of the United States 1789 - 1945 (link – 55 MBytes). I am not going into the nuances of gathering the data because it required only running Optical Character Recognition (OCR) on the document. All I did was make a couple of simple graphs. However, the graphs are interesting.

For this exercise, I defined the war years as 1940 through 1945. Technically, the US did not enter the war until late 1941, but the US had started supplying materials almost immediately after hostilities started on 1-Sep-1939. Production greatly increased with the passing of the Two Ocean Navy Act of 1940.

Figure 2 shows how WW2 affected iron ore production. Note how iron ore production had dropped to less than 30 million tons in 1938 (see Recession of 1937-1938), but rose steadily to its peak in 1942. Production then declined each year during the rest of the war (1943-1945). Iron ore production did increase, but not by an order of magnitude. Aluminum was a different matter.

Figure 2: US Iron Ore Production from 1914 to 1945.

Figure 3 shows how WW2 affected aluminum ore (bauxite) production. Ore is key to the production of primary (non-recycled) aluminum. Note how dramatic the rise in aluminum production is compared to that of iron ore. Since aluminum is important to the production of aircraft, this chart shows the massive increase in the production of aircraft that occurred during the war.

Figure 3: US Bauxite Production from 1914 to 1945.

For those who are curious, my Excel workbook is here.

## Estimating Exponential Time Constants

Quote of the Day

Engineering is achieving function while avoiding failure.

Figure 1: Annotated Exponential Curve.

I have been presented with a large amount of experimental data from which I need to determine many exponential time constants. There are so many time constants to calculate that I need to automate the process.

I have data with dozens of exponential curves like the one shown in Figure 1. My plan is to estimate three points from every exponential curve: (t0, V0), (t1, V1), and (t2, V2). I will then determine the exponential time constant (τ) and the final voltage (VF) by fitting these three points to Equation 1.

 Eq. 1 $\displaystyle v(t)=V_F+\left(V_I-V_F\right)\cdot e^{-\frac{t}{\tau}}$

where

• VF is the final voltage reached by the curve.
• VI is the initial voltage of the curve.
• τ is the exponential time constant that I need to compute.
• v(t) is the exponential voltage as a function of time.
• t is time.

Because I have two unknowns (τ, VF), I need to solve two equations. In my case, I cannot see the point where the exponential curve begins – I will show below that it does not matter. In Figure 1, I select a reference point (t0, V0) and measure the two other points relative to it, i.e., (t1 − t0, V1) and (t2 − t0, V2).

To find (τ, VF), I chose to use a nonlinear solver in Mathcad to solve the problem (Figure 2).

Figure 2: Mathcad Solution with Test Case.
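Mathcad's nonlinear solver does the work in Figure 2, but the fit also has a closed form when the two offsets are equally spaced (t2 − t0 = 2·(t1 − t0)). Here is a Python sketch under that added assumption (the function name and the equal-spacing requirement are mine, not part of the Mathcad solution): with r = e^(−Δt/τ), the ratio (V2 − V1)/(V1 − V0) equals r, which gives τ and VF directly.

```python
import math

def fit_exponential(v0, v1, v2, dt):
    """Given three samples v0, v1, v2 taken dt apart on the curve
    v(t) = VF + (VI - VF)*exp(-t/tau), recover (tau, VF).
    Uses the identity r = exp(-dt/tau) = (v2 - v1)/(v1 - v0)."""
    r = (v2 - v1) / (v1 - v0)
    if not 0 < r < 1:
        raise ValueError("samples are not consistent with a decaying exponential")
    tau = -dt / math.log(r)
    # v0 = VF + A and v1 = VF + A*r  =>  A = (v1 - v0)/(r - 1)
    vf = v0 - (v1 - v0) / (r - 1)
    return tau, vf
```

Note that only the sample spacing dt appears, not the absolute times – consistent with the reference-point argument below.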

I should mention that it does not matter which point is chosen for the reference. You can prove this as shown below. In this derivation, I show that when you pick a reference V0, the value of the point at t1 is driven only by the time difference t1 − t0.

Figure 3: Derivation Showing that the Value of V1 is related only to the time difference from V0.
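In outline, the derivation of Figure 3 works because the reference sample V0 absorbs the unknown start time. Evaluating Equation 1 at t0 gives

$\displaystyle V_0=V_F+\left(V_I-V_F\right)\cdot e^{-\frac{t_0}{\tau}}\;\Rightarrow\;\left(V_I-V_F\right)\cdot e^{-\frac{t_0}{\tau}}=V_0-V_F$

and substituting this into Equation 1 at t1 gives

$\displaystyle V_1=V_F+\left(V_I-V_F\right)\cdot e^{-\frac{t_1}{\tau}}=V_F+\left(V_0-V_F\right)\cdot e^{-\frac{t_1-t_0}{\tau}}$

so V1 depends only on the difference t1 − t0, not on the absolute times.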

Thanks to this approach, I was able to determine all the exponential time constants quickly and accurately.

## Determining RMS Acceleration for a Vibration Acceleration Spectral Density

Quote of the Day

The most important 6 inches on the battlefield is between your ears.

## Introduction

Figure 1: Vibration Acceleration Spectral Density Example From NAVMAT P-9492.

I was asked last week to write a vibration test plan for a mobile electronic product. I am used to writing vibration test plans that follow canned procedures in standards like MIL-STD-810F or SAE J1455, but this case is different because the customer has specified a non-standard random vibration acceleration profile, which is also called a Power Spectral Density (PSD). I need to determine the RMS g level for this profile. This post shows how I go about this calculation. I am not going to show the customer's vibration PSD because it is proprietary. Instead, I will use a well-known US Navy vibration PSD as a computation example (Figure 1).

In the case of vibration testing, the term PSD is a misnomer. Vibration testing is normally based on an Acceleration Spectral Density (ASD) function expressed in the form of a graph or table. However, the term PSD is commonly used because the analysis methods are identical to those used with PSDs in digital signal processing, where the term PSD really does refer to a power spectrum.

## Background

### Objective

For my random vibration test problem, I was given a normalized ASD shape (no absolute levels) and the customer's desired RMS g (acceleration) level. I must integrate the ASD to determine the RMS g level. Given the integral, I can set the absolute levels to obtain the required RMS g level. This effort is complicated slightly by the fact that the ASD is usually specified in terms of piecewise linear segments on a log-log graph. What is the function that I must integrate? This post will show you how to integrate an ASD to determine the RMS g level.

### ASD Example

Figure 1 shows a US Navy example that has been used for decades to test electronic gear. It was developed by Willis Willoughby, who created much of the US Navy's quality program. These piecewise linear approximations are important because vibration test equipment (Figure 2) is usually programmed using piecewise linear approximations to ASDs. The approximation process is actually quite interesting, and NASA presents a good example of the process here.

Figure 2 shows a typical vibration table.

Figure 2: Example of a NASA Vibration Test Fixture.

## Analysis

### Derivation

A line segment on a log-log graph does not represent a line segment on a linear scale. My integration needs to be performed with the function on a linear scale. So I need to convert that log-log line segment into its linear scale form. Figure 3 shows how I did that conversion.

The yellow highlighted areas of Figure 3 shows how to compute the slope and intercept of the line segment on a log-log plot. The green highlighted area shows how to convert a point on a log-log plot to a linear scale.

Figure 3: Log-Log to Linear Graph Line Derivation.

Now that I have my vibration function, the integration operation is straightforward. I will be using Mathcad's built-in integration function, but many tools support similar capabilities (e.g., R has its trapz function, which I use regularly).
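The conversion and integration can also be sketched in code. Each log-log line segment becomes a power law A(f) = A1·(f/f1)^m on linear axes, which integrates in closed form. Below is a Python version (my own function name; the breakpoints are the commonly quoted NAVMAT P-9492 values, so treat them as an assumption rather than a quote from Figure 1):

```python
import math

def asd_rms(breakpoints):
    """Integrate a piecewise-linear (on log-log axes) ASD and return
    the RMS g level. breakpoints: ascending (frequency_Hz, g^2/Hz) pairs."""
    total = 0.0
    for (f1, a1), (f2, a2) in zip(breakpoints, breakpoints[1:]):
        m = math.log(a2 / a1) / math.log(f2 / f1)  # slope on log-log axes
        if abs(m + 1) < 1e-9:
            # A(f) ~ 1/f integrates to a logarithm
            total += a1 * f1 * math.log(f2 / f1)
        else:
            # power-law segment: closed-form integral of a1*(f/f1)^m
            total += a1 * f1 / (m + 1) * ((f2 / f1) ** (m + 1) - 1)
    return math.sqrt(total)

# Commonly quoted NAVMAT P-9492 profile (an assumption, not taken from Figure 1):
navmat = [(20, 0.01), (80, 0.04), (350, 0.04), (2000, 0.007)]
```

For these breakpoints the routine returns about 6.06 gRMS, matching the published NAVMAT level.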

### Example

To compute the RMS g level, I must integrate the ASD curve for the frequencies of interest, in this case, 10 Hz to 2000 Hz. Figure 4 shows my work for the US Navy example of Figure 1. I computed 6 gRMS, which I highlight in yellow in Figure 4. The US Navy lists this profile as 6.06 gRMS, which is in close agreement.

Figure 4: My Calculation for the RMS g Level.

I should mention that there are other ways to compute RMS g level. Take a look at this website and this website.

## Let's Grow Some Oats

Quote of the Day

(1) You have to fight when you already feel defeated; (2) You have to delay gratification; (3) You have to make mistakes, look like an idiot, and try again — without even flinching; (4) You have to keep your emotions in check; (5) You have to make the calls you're afraid to make; (6) You have to trust your gut; (7) You have to lead when no one else follows; (8) You have to focus on the details even when it makes your mind numb; (9) You have to be kind to people who are rude to you; (10) You have to be accountable for your actions, no matter what.

— Travis Bradberry on what constitutes mental strength.

Figure 1: Oat Grains (Wikipedia).

I am going to grow and process some oats this year. This is a project that I have been interested in doing for a while because one of my sons is now in the oat business and he has shown some interest in working through the entire oat processing cycle. As a boy, I used to mill oats on the family farm, but I remember very little of that time.

I am fortunate because one of my brothers is a farmer who grows oats as a cover crop in the corners of his center pivot irrigation systems, so I have access to some land to cultivate, harvest, and process a small amount of oats.

My plan is to grow enough oats to make a 20 lb bag of oat flour. A little math shows me that a 10 m x 10 m oat garden should be enough (Figure 2).

Figure 2: Oat Planting Area Required for 20 lbs of Flour.
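The actual calculation is in Figure 2 and the workbook. As a rough sanity check, here is the same kind of arithmetic in Python with assumed round numbers (the yield, test weight, and flour fraction below are my assumptions for illustration, not values from the workbook):

```python
# Sanity check: can a 10 m x 10 m plot yield 20 lb of oat flour?
# All yield figures below are rough assumptions, not workbook values.
AREA_M2 = 10 * 10
M2_PER_ACRE = 4046.86
YIELD_BU_PER_ACRE = 65        # assumed typical US oat yield
LB_PER_BUSHEL = 32            # standard oat test weight
FLOUR_FRACTION = 0.60         # assumed grain-to-flour recovery

acres = AREA_M2 / M2_PER_ACRE
flour_lb = acres * YIELD_BU_PER_ACRE * LB_PER_BUSHEL * FLOUR_FRACTION
print(f"Estimated flour: {flour_lb:.0f} lb")
```

Under these assumptions the plot yields roughly 30 lb of flour, a comfortable margin over the 20 lb target.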

In addition to my area calculation, I wanted to gather a bit of information on oat production in the US. The USDA National Agricultural Statistics Service provides an excellent resource for this type of information. Using Power Query, I grabbed some data from the USDA records for 2016, 2017, and 2018. Table 1 shows a quick summary of the US oat production data from the USDA records. For those who are interested, my Excel workbook is here.

Table 1: US Oat Production Summary Statistics.

Note that most of the oats planted in the US are not harvested. This is because they are used as a cover crop – a crop planted to manage the land rather than for production. By land management, I mean things like erosion control and replacing soil nutrients. In fact, I knew a farmer who referred to oats as "green manure."

## Age of Supreme Court Justices at Confirmation with Power Query

Quote of the Day

In Order To Do Good, You May Have To Engage in Evil.

— Robert McNamara, rule #9 in his lessons of life. A classic example of government double-speak.

Figure 1: Justice William O. Douglas, the Youngest Justice Confirmed in the 20th Century (40 years old). (Image)

I was listening to a political pundit mention that both US political parties want to confirm young Supreme Court justices to ensure that their judicial philosophies endure. I was curious as to whether that was true over time. I went to the Wikipedia and saw that they had a list of all the justices since the founding of the US and web pages for each justice. Sounds like a perfect opportunity for a bit of web scraping!

Here is my process: (1) download the list of Supreme Court justices using Power Query, (2) grab the dates of Senate confirmation from this list, (3) grab birth dates from their individual Wikipedia pages, (4) compute their age at confirmation using this function, (5) plot the ages of the justices at their confirmation using ggplot2 along with a smoothed line (loess with geom_smooth).

Figure 2 shows my plot of confirmation ages over time. There are several points we can make about this plot:

• The confirmation age has been trending lower since 1900.
• The youngest confirmation ages occurred early in the history of the republic.
• The youngest justice in history at 32 years and 2 months old was Joseph Story, who was confirmed in 1811. He was played in the movie Amistad by Harry Blackmun, a real Supreme Court justice.

Figure 2: Graph of Supreme Court Justice Ages at Their Time of Confirmation.

For those who want to look under the covers, my source material is in this zip file.

## Power Query DATEDIF Function

Quote of the Day

The first rule of compounding – never interrupt it unnecessarily.

Figure 1: DATEDIF Workaround From Ashish Mathur.

I have been using Excel's DATEDIF function for years to determine the age of items in years, months, and days. Until I ran into a bug last week, I did not know that the function was unsupported by Microsoft and had issues with calculating the number of days. Because much of my personal work involves dates, I need to have an accurate age calculation function for use in Excel and Power Query. In this post, I will discuss a DATEDIF workaround that I found online (Figure 1) and a Power Query age calculation function that I wrote based on a concept from Imke Feldmann. My workbook is available here for those who are interested. The workbook shows how I tested the routine by comparing it with the DATEDIF workaround results. I tested the boundary conditions and then random dates. The results agreed with the DATEDIF workaround of Figure 1 and an online date calculator.

As far as the DATEDIF workaround, I will leave you to examine Figure 1 for an example of the problem and the workaround proposed by Ashish Mathur. Please see his blog post on the issue for more details.

The source code for my Power Query function to compute the ages is shown below. The function is fed a column of start and end dates and produces a column of records that contain years, months, and days. You then expand this column as needed. The workbook contains an example of applying the function.
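To make the year/month/day decomposition concrete, here is the same logic restated in Python (this is my own restatement for illustration, not Imke Feldmann's M code; the function names are mine). The idea is to count whole months first, then advance the start date by that many months and count the leftover days, so no negative day counts appear:

```python
import calendar
from datetime import date

def add_months(d, n):
    """Shift a date by n whole months, clamping the day to the month end."""
    y, m = divmod(d.year * 12 + d.month - 1 + n, 12)
    day = min(d.day, calendar.monthrange(y, m + 1)[1])
    return date(y, m + 1, day)

def ymd_diff(start, end):
    """Age from start to end as (years, months, days), not counting the last day."""
    months = (end.year - start.year) * 12 + (end.month - start.month)
    if end.day < start.day:
        months -= 1                      # the last partial month does not count
    anchor = add_months(start, months)   # start advanced by the whole months
    return months // 12, months % 12, (end - anchor).days
```

For example, ymd_diff(date(1924, 10, 1), date(2019, 3, 25)) gives 94 years, 5 months, 24 days.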

## Computing the Longest Lived US President with Power Query

Quote of the Day

...in any bureaucratic organization there will be two kinds of people: those who work to further the actual goals of the organization, and those who work for the organization itself. Examples in education would be teachers who work and sacrifice to teach children, vs. union representatives who work to protect any teacher including the most incompetent. The Iron Law states that in all cases, the second type of person will always gain control of the organization, and will always write the rules under which the organization functions.

— Pournelle's Iron Law of Bureaucracy.

Figure 1: Jimmy Carter, 39th President of the United States.

I was listening to the news the other night when I heard a reporter mention that Jimmy Carter had just become the longest-lived US president, surpassing George H.W. Bush, the previous record holder. I thought verifying this fact would be a good Power Query exercise.

My approach was to obtain the birth and death information (if present) by scraping the Wikipedia pages for US presidents. For the dead presidents, I then computed their age at death. For the living presidents, I computed their age as of 25-March-2019.

Doing historical work in Excel is always an adventure because it does not handle dates prior to 1900. Power Query does not have this issue. On the downside, Power Query does not support a function for computing date differences in terms of years, months, and days (Excel has DATEDIF). I found an M-code routine that partially solved the issue, which I modified to fully solve the issue. My Excel worksheet is available here.

My routine for computing the ages of the presidents in years, months, and days is shown in the following code block. This is a modified version of some work by Imke Feldmann, a goddess of Power Query. The routine adds a column with a record in each president's row that can be expanded to show years, months, and days.

Table 1 shows my table of the longest-lived dead presidents. Note that my algorithm does not count their last day – just like DATEDIF. I compared my result with the output from this website and they agree.

Table 1: Longest Lived Dead Presidents.

Table 2 shows my table for the ages of the living US presidents. As you can see, Jimmy Carter is now the longest living US president.

Table 2: Ages of the Living US Presidents.

## Good Use for Excel Textjoin Command

Quote of the Day

When Winston is right he is unique. When he is wrong, Oh My God!

Figure 1: Old School Work Time Clock. (Source)

I am always amazed at how Excel gets used for everything in small companies. It is the glue that holds many small businesses together.

One common Excel task is tracking work hours. As a contractor, I encounter all sorts of approaches to recording work hours. One small company wants all of my hours captured in an Excel workbook that contains one worksheet per week. Every two weeks, an administrator goes in and captures the hours into another worksheet.

In addition to the company's desires, I want a couple of reports from this workbook.

• A list of my work hours by week and task.
• A list of my workdays, which I use to compute my mileage.

How to solve this problem in the simplest way possible? I have a few constraints and one personal request:

• I cannot use macros.
• I cannot modify the worksheet structure:
  • 3 lines for tasks assigned to each workday
  • Monday is the start of the work week
  • Each tab labeled by week number (Week1, Week2, ...)
• Because many of the customers and accounts repeat from one week to the next, I want to be able to add worksheets by copying the previous week's worksheet and incrementing the start of week date.

I took the following approach:

• I will hide my report worksheet.
• On my report worksheet, I will use an expanding spearing formula (aka 3D formula) and TEXTJOIN to create a table with one column of cells with all the dates, tasks, and hours combined into delimited strings. To ensure that my table expands properly, I added two hidden sheets called Begin and End that bracket my weekly worksheets. I can include them in my TEXTJOIN address. They ensure that as I add sheets, my TEXTJOIN formula will not need to be modified.
• I will use Power Query to process the table.
• I then generate two pivot tables with the information that I need.
• I added a pick list for all the Mondays in a year so that I cannot get a start date wrong.

This approach seems to be working. When I need my reports, all I do is unhide my report page, refresh my pivot tables, and I have the information that I need. When I start a new week, I just copy the previous week and change the start date.

My time card workbook is shown here. I have left everything unhidden so you can see what I did.

## Computing Bit Error Rate Test Time

Quote of the Day

This... is [the isle of] Berk. It snows nine months of the year... and hails the other three.

— Introduction from How to Train Your Dragon. I feel this way about Minnesota every February.

Figure 1: Typical BER Test Configuration.

I test high-speed serial channels every day. The most common test parameter that I need to measure is the Bit Error Rate (BER). Figure 1 shows the most common test configuration used for measuring BER. Because bit errors occur randomly, there is a certain amount of error involved in measuring the parameter. So when you state a BER measurement, you also give a confidence interval to express your level of uncertainty.

I recently was asked how to compute the test time required to verify the Bit Error Rate (BER) of a serial transport at various confidence levels. Test time is important because it drives test cost. I decided to make a spreadsheet that uses a data table and VBA to compute the test time as a function of the number of bit errors during the testing and the desired confidence level. While I have discussed this topic in another blog post, I did not provide a spreadsheet for computing the values. The spreadsheet included here resolves this shortcoming. You just need to put in your bit rate and push the button.