## View of Jupiter From Satellite Metis

Quote of the Day

Successful ... politicians are insecure and intimidated men. They advance politically only as they placate, appease, bribe, seduce, bamboozle or otherwise manage to manipulate the demanding and threatening elements in their constituencies.

Walter Lippmann (1955). Hmm … no change in politicians over the last 61 years.

## Introduction

Figure 1: Artist's Conception of the View From a Base on Metis. (Source)

The arrival of the Juno spacecraft at Jupiter has motivated me to take a closer look at the Jovian system. I was surprised to see that we have cataloged 67 moons, sixteen of which have been discovered since 2003 and are not yet named. One moon that was new to me is called Metis (Figure 2), which is Jupiter's innermost moon. It is very tiny and resides within Jupiter's main ring.

Figure 2: Metis Photograph from Galileo Spacecraft. (Source)

The Wikipedia article on this moon had an interesting statement that I thought I would try to verify.

Because Metis orbits very close to Jupiter, Jupiter appears as a gigantic sphere about 67.9° in diameter from Metis, the largest angular diameter as viewed from any of Jupiter's moons. For the same reason, only 31% of Jupiter's surface is visible from Metis at any one time, the most limited view of Jupiter from any of its moons.

I was able to confirm the 67.9° angular diameter, but I believe the 31% value is in error and should be 22%. I have posted a comment to the Wikipedia requesting that those folks double-check that number.

## Analysis

### Geometry

Figure 3 shows the basic viewing geometry. For information on how to compute the viewing area on Jupiter, see the Wikipedia article on spherical caps.

Figure 3: Visual Geometry of Metis to Jupiter. All dimensions to scale except that of Metis, which would be invisible at this scale.

### Calculations

Figure 4 shows my calculations that verify the Wikipedia's statement on viewing angle. I could not confirm their statement on viewing area, but I believe that I understand where our calculations differ – I obtain their value if I fail to halve the viewing angle for the spherical cap area calculation.

Figure 4: Viewing Angle and Jupiter Viewing Area Calculations.
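For those who want to check the numbers without Mathcad, here is a quick Python sketch of the same two calculations. It uses Jupiter's published equatorial radius (71,492 km) and Metis's published orbital radius (~128,000 km); the visible-surface formula comes from the tangent-line spherical cap geometry of Figure 3.

```python
import math

R_JUPITER = 71_492.0   # Jupiter's equatorial radius [km]
D_METIS = 128_000.0    # Metis's orbital radius, center-to-center [km]

# Angular diameter of Jupiter as seen from Metis (full cone angle).
angular_diameter = 2 * math.degrees(math.asin(R_JUPITER / D_METIS))

# Visible fraction of Jupiter's surface: the visible region is a
# spherical cap bounded by the tangent points.  The cap half-angle t,
# measured from Jupiter's center, satisfies cos(t) = R/d, and the
# cap-area-to-sphere-area ratio is (1 - cos(t))/2 = (1 - R/d)/2.
visible_fraction = (1 - R_JUPITER / D_METIS) / 2

print(f"Angular diameter: {angular_diameter:.1f} deg")  # ~67.9 deg
print(f"Visible fraction: {visible_fraction:.1%}")      # ~22%
```

The sketch reproduces the 67.9° angular diameter and my 22% visible-surface value rather than the Wikipedia's 31%.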

## Conclusion

I like to try to imagine looking up at night and seeing a sky full of a planet. It would be an incredible sight. Unfortunately, Metis is a tiny, cold world that is exposed to an enormous amount of radiation. I cannot imagine humans ever visiting there.

Figure 5 shows a beautiful rendering of the view of Jupiter from Metis.

Figure 5: View of Jupiter from Metis Using Space Engine. (Source)


## Age of Presidents at Inauguration

Quote of the Day

Imagination is more important than knowledge. For knowledge is limited to all we now know and understand, while imagination embraces the entire world, and all there ever will be to know and understand.

— Albert Einstein

Figure 1: At 42, Teddy Roosevelt was the youngest person to serve as president. (Source)

While crawling around the Wikipedia looking for presidential information, I found a list of the ages at inauguration of the US presidents, ordered from oldest to youngest. I threw Hillary Clinton and Donald Trump into the list (Table 1) to see where they would place – they are old by historical standards. In fact, Donald Trump would be the oldest ever.

I computed the inaugural ages of all the presidents using Excel and found that the Wikipedia was accurate, as I normally find it to be. The age calculation was complicated by the fact that the date system used in Excel does not work for dates before 1900. Fortunately, I found the excellent extended date add-in from Walkenbach, which allows you to work with the string representation of dates. If you wish, you can write your own macros.
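Outside of Excel, pre-1900 dates are not a problem; Python's `datetime` handles them directly. Here is a minimal sketch of the "years, days" age calculation, checked against Theodore Roosevelt's entry in Table 1 (born October 27, 1858; took office September 14, 1901).

```python
from datetime import date

def age_at(birth: date, event: date) -> tuple[int, int]:
    """Return (whole years, leftover days) between birth and event."""
    years = event.year - birth.year
    anniversary = birth.replace(year=birth.year + years)
    if anniversary > event:      # birthday not yet reached in event year
        years -= 1
        anniversary = birth.replace(year=birth.year + years)
    return years, (event - anniversary).days

# Theodore Roosevelt: born 27 Oct 1858, inaugurated 14 Sep 1901.
print(age_at(date(1858, 10, 27), date(1901, 9, 14)))  # (42, 322)
```

The result matches the "42 years, 322 days" value in the table.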

Table 1: List of the Inaugural Ages of the Presidents with Hillary and Donald Added.
| President/Candidate | Age at inauguration |
| --- | --- |
| Donald Trump | 70 years, 220 days |
| Ronald Reagan | 69 years, 349 days |
| Hillary Clinton | 69 years, 86 days |
| William Henry Harrison | 68 years, 23 days |
| James Buchanan | 65 years, 315 days |
| George H. W. Bush | 64 years, 222 days |
| Zachary Taylor | 64 years, 100 days |
| Dwight D. Eisenhower | 62 years, 98 days |
| Andrew Jackson | 61 years, 354 days |
| John Adams | 61 years, 125 days |
| Gerald Ford | 61 years, 26 days |
| Harry S. Truman | 60 years, 339 days |
| Grover Cleveland (2nd Inauguration) | 59 years, 351 days |
| James Monroe | 58 years, 310 days |
| James Madison | 57 years, 353 days |
| Thomas Jefferson | 57 years, 325 days |
| John Quincy Adams | 57 years, 236 days |
| George Washington | 57 years, 67 days |
| Andrew Johnson | 56 years, 107 days |
| Woodrow Wilson | 56 years, 66 days |
| Richard Nixon | 56 years, 11 days |
| Benjamin Harrison | 55 years, 196 days |
| Warren G. Harding | 55 years, 122 days |
| Lyndon B. Johnson | 55 years, 87 days |
| Herbert Hoover | 54 years, 206 days |
| George W. Bush | 54 years, 198 days |
| Rutherford B. Hayes | 54 years, 151 days |
| Martin Van Buren | 54 years, 89 days |
| William McKinley | 54 years, 34 days |
| Jimmy Carter | 52 years, 111 days |
| Abraham Lincoln | 52 years, 20 days |
| Chester A. Arthur | 51 years, 349 days |
| William Howard Taft | 51 years, 170 days |
| Franklin D. Roosevelt | 51 years, 33 days |
| Calvin Coolidge | 51 years, 29 days |
| John Tyler | 51 years, 6 days |
| Millard Fillmore | 50 years, 183 days |
| James K. Polk | 49 years, 122 days |
| James A. Garfield | 49 years, 105 days |
| Franklin Pierce | 48 years, 101 days |
| Grover Cleveland (1st Inauguration) | 47 years, 351 days |
| Barack Obama | 47 years, 169 days |
| Ulysses S. Grant | 46 years, 311 days |
| Bill Clinton | 46 years, 154 days |
| John F. Kennedy | 43 years, 236 days |
| Theodore Roosevelt | 42 years, 322 days |


Posted in Personal | 2 Comments

## Asteroid Size Estimation

Quote of the Day

The job of a successful leader is to build relationships that are based on mutual respect and the recognition that others know things that we may need to know to get the job done.

— Edgar Schein. This is as good a description of leadership as I have found.

## Introduction

Figure 1: View of 2012 TC4 From Earth. Note how small and faint the asteroid is in the photo. (Source)

I often see announcements of Near-Earth Objects (NEOs) in the scientific press. For asteroids, these announcements are usually accompanied by a size estimate of the asteroid. In this post, I will discuss a commonly used formula for the effective spherical diameter of an asteroid based on its normalized brightness (i.e. absolute magnitude).

Most asteroids are such small objects that astronomers cannot obtain a large enough image to make a direct measurement. Instead, the astronomers estimate the size of the asteroid based on its range and brightness. Unfortunately, the brightness of an asteroid depends on its surface composition, which is usually unknown when the asteroid is little more than a point of light when first seen from Earth. Astronomers are then forced to give a wide range of possible diameters to account for their surface composition uncertainty.

## Background

### Definitions

Apparent Magnitude (V)
The visual brightness of an asteroid when observed and measured visually or with a CCD camera. (Source)
Absolute Magnitude (H)
The visual magnitude of an asteroid if it were 1 AU from the Earth and 1 AU from the Sun and fully illuminated, i.e. at zero phase angle – a geometrically impossible situation. (Source)
Geometric Albedo (pV)
The ratio of the brightness of a planetary body, as viewed from the Sun, to that of a white, diffusely reflecting sphere of the same size and at the same distance. Zero for a perfect absorber and 1 for a perfect reflector. (Source)
I smiled when I read this description of geometric albedo – it is VERY similar to the description of target strength as used by the sonar community. I am always amazed at the similarity between the various engineering and scientific disciplines.
Bond Albedo (A)
The Bond albedo is the fraction of power in the total electromagnetic radiation incident on an astronomical body that is scattered back out into space. The Bond albedo is related to the geometric albedo by the expression $A=p\cdot q$, where q is termed the phase integral. (Source)
Equivalent Spherical Diameter
The equivalent spherical diameter of an irregularly shaped object is the diameter of a sphere of equivalent volume. (Source)

### Key Formula

Equation 1 shows the formula that I see being used in a number of papers.

 Eq. 1 $\displaystyle D(p_V, H)=\frac{1329}{\sqrt{p_V}}\cdot 10^{-0.2\cdot H}\ \left[\text{km}\right]$

where

• D is the diameter of the asteroid [km].
• pV is geometric albedo [dimensionless].
• H is absolute magnitude of the asteroid [dimensionless].

The detailed derivation of Equation 1 is a bit involved, but quite interesting. See this document for details (section 4.2).
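Equation 1 is simple enough to sketch directly in Python. As a sanity check, I apply it to Ceres using approximate published values (H ~ 3.34, pV ~ 0.09); the result lands near Ceres's measured mean diameter of roughly 940 km.

```python
import math

def asteroid_diameter_km(p_v: float, h: float) -> float:
    """Equation 1: effective spherical diameter [km] from geometric
    albedo p_v and absolute magnitude h."""
    return 1329.0 / math.sqrt(p_v) * 10 ** (-0.2 * h)

# Ceres, with approximate published values H ~ 3.34, p_V ~ 0.09.
print(f"{asteroid_diameter_km(0.09, 3.34):.0f} km")  # ~950 km
```

Note how sensitive the answer is to albedo: halving pV inflates the diameter estimate by a factor of √2, which is why size announcements for newly found asteroids carry such wide ranges.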

### Post Objective

This post will consist of two calculations using Equation 1: verifying the published effective diameters of two asteroids (2012 TC4 and Ceres), and duplicating the Minor Planet Center's table of effective diameter versus absolute magnitude.

## Analysis

### Setup

Figure 2 shows my setup for the calculations.

Figure 2: Calculation Setup.

### Two Asteroid Examples

Figure 3 shows my calculations for the effective diameter of 2012 TC4 and Ceres. My  calculations are in good agreement with the published information.

Figure 3: Calculations for Two Asteroids.

### Minor Planet Center Table

Figure 4 shows my calculations for duplicating the effective diameter versus absolute magnitude table. Within the rounding used by the Minor Planet Center, I duplicated their results.

Figure 4: My Duplication of the Minor Planet Center Table.
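The same table can be generated with a short loop. I assume here the commonly quoted albedo bracket of 0.25 (bright surface) down to 0.05 (dark surface), which is what the Minor Planet Center's conversion table appears to use.

```python
import math

def diameter_km(p_v: float, h: float) -> float:
    # Equation 1: effective spherical diameter [km].
    return 1329.0 / math.sqrt(p_v) * 10 ** (-0.2 * h)

# Diameter range for each absolute magnitude, assuming albedos
# from 0.25 (bright, small diameter) to 0.05 (dark, large diameter).
for h in range(10, 30, 2):
    d_bright, d_dark = diameter_km(0.25, h), diameter_km(0.05, h)
    print(f"H = {h:2d}: {d_bright:8.3f} - {d_dark:8.3f} km")
```

For example, H = 22 gives roughly 0.11–0.24 km, matching the published table within rounding.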

## Conclusion

I was able to use Equation 1 to duplicate a number of results for asteroid sizes. This exercise showed me that it is difficult to get an accurate size estimate for an asteroid because albedos can vary so widely.


Posted in Astronomy | 1 Comment

## Calculating a Parameter's Worst-Case Range

Quote of the Day

You have enemies? Good. That means you've stood up for something, sometime in your life.

— Winston Churchill. This is true in both politics and corporate life.

## Introduction

Figure 1: Dying Gasp Circuit.

Many electronic systems are required to generate an alarm when they detect their power failing – the alarm is referred to as a "dying gasp". These systems are required to generate a dying gasp alarm when their input voltage drops below a specified level.

The circuit in Figure 1 provides a typical implementation example. The alarm triggers when the input voltage, called VSupply, drops to a level where the voltage across R2 falls below VReference, a level that is accurately and inexpensively generated using bandgap voltage reference circuits.

Because of circuit tolerances associated with R1, R2, and VReference, there is a range of VSupply values that could trigger a dying gasp alarm. In this post, I will show three different ways to determine the range of critical VSupply values. The third method is the most interesting because it involves using feedback in a circuit simulator to generate the required supply voltage.

For those who are interested in more dying gasp design details, I addressed how to design a charge storage system to enable the hardware to generate a dying gasp alarm in this post. For those who are interested, my Mathcad/LTSpice source and its PDF are here.

## Background

### Definitions

Tolerance
The total amount by which a given dimension may vary, or the difference between the limits. (Source: ANSI Y14.5M-1982)
Worst-Case Analysis
Worst-case circuit analysis is an analysis technique which, by accounting for component variability, determines the circuit performance under a worst-case scenario. (Source)
Extreme Value Analysis (EVA)
EVA involves evaluating a circuit's performance using every possible combination of extreme component values. It is not a statistical method – every possible combination of extreme component values is analyzed.
Root Sum of Squares (RSS)
RSS assumes that the design tolerance for a system parameter is composed of the sum of Gaussian-distributed variables. Each variable is assumed to have tolerances that can be modeled using a centered normal distribution with a total tolerance range of ±3·σ about the mean. The overall tolerance of a system parameter is estimated by computing the square root of the sum of the squared variable tolerances.
Monte Carlo Analysis
Monte Carlo analysis is a statistical analysis that calculates the response of a circuit when device model parameters are randomly varied between specified tolerance limits according to a specified statistical distribution. (Source)

### Three Approaches

I will use three different ways of determining the range of power supply values that will cause the system to generate a dying gasp.

• Arithmetic Sum

This is the traditional approach to worst-case analysis. It produces the absolute maximum and minimum values for a parameter. This approach has two shortcomings: (1) for complex systems, the extreme-value combinations are difficult to determine, and (2) for many systems, it produces a result that is so pessimistic that there is no way to make an economically feasible system.

• Monte Carlo method in Mathcad

For Monte Carlo analysis, we normally assume that the tolerance of a variable can be modeled using a uniform probability density. We then perform multiple analyses using randomly generated variable values. This produces a set of output values that approximately reflects the probability density of the real product. The main issue with using Mathcad is that it requires an algebraic solution for the system's response to the input variables. For complex systems, we frequently do not have an explicit formula.

• Monte Carlo method in LTSpice

Same approach as for Mathcad, but implemented using a circuit simulator. This approach can handle very complex systems that would defeat Mathcad.

I should note that there are other approaches that could be used (e.g. RSS, EVA), but I will focus on the three methods I listed.
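The first two approaches can be sketched in a few lines of Python. The component values and tolerances below are hypothetical – the post does not list them – but the structure is the same: the critical supply voltage is VReference·(R1 + R2)/R2, the arithmetic method picks the tolerance corners by inspection, and the Monte Carlo method draws uniformly distributed component values.

```python
import random

# Hypothetical component values -- placeholders, not from the post.
R1, R2, VREF = 49_900.0, 10_000.0, 1.25   # ohms, ohms, volts
TOL_R, TOL_V = 0.01, 0.01                 # 1% resistors, 1% reference

def critical_vsupply(r1, r2, vref):
    # Divider trips the comparator when vref = vsupply * r2/(r1 + r2).
    return vref * (r1 + r2) / r2

def draw(nominal, tol):
    # Uniform density across the tolerance band.
    return nominal * random.uniform(1 - tol, 1 + tol)

samples = [critical_vsupply(draw(R1, TOL_R), draw(R2, TOL_R),
                            draw(VREF, TOL_V)) for _ in range(5000)]

# Arithmetic (absolute) worst case: corners chosen by inspection.
v_min = (1 - TOL_V) * VREF * (1 + (1 - TOL_R) * R1 / ((1 + TOL_R) * R2))
v_max = (1 + TOL_V) * VREF * (1 + (1 + TOL_R) * R1 / ((1 - TOL_R) * R2))

print(f"Monte Carlo: {min(samples):.3f} .. {max(samples):.3f} V")
print(f"Arithmetic:  {v_min:.3f} .. {v_max:.3f} V")
```

As expected, the Monte Carlo range falls just inside the arithmetic extremes, and it tightens toward them as the number of iterates grows.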

### Using Feedback to Solve For a Circuit Value

In my third method (LTSpice-based), I use feedback to have the circuit determine the required power supply setting that causes the resistor divider to output the reference voltage. Figure 2 shows a block diagram of how that approach is implemented.

## Analysis

### Arithmetic

Figure 3 shows the arithmetic approach. In this case, I applied Kirchhoff's voltage law to the circuit of Figure 1 and determined the supply voltage as a function of the resistor values and the reference voltage. The resulting formula was simple enough that the combination of component values required to produce extreme values could be determined by inspection. This situation rarely occurs in practice, but it occurred today.

Figure 3: Minimum/Maximum Solution Using Arithmetic Approach.

### Monte Carlo Method in Mathcad

Figure 4 shows how I used Mathcad to perform the Monte Carlo analysis. Observe that the Monte Carlo analysis did not produce values as extreme as the arithmetic approach. If I used more random iterates, I would get closer to the true minimum and maximum values.

Figure 4: Estimate of Minimum and Maximum Values Using the Monte Carlo Method.

### Monte Carlo Method in LTSpice

Figure 5 shows how I took the circuit of Figure 1 and used feedback to generate the VSupply value for a given set of R1, R2, and reference voltage values. I ran the simulation for 5000 sets of randomly selected component values.

Figure 5: Using Feedback to Solve For The Critical Supply Voltage.

Figure 6 shows the results of my simulation. As you can see, 5000 simulations were performed.

Figure 6: Plot of the Circuit Solutions.

Figure 7 shows a histogram of the results in Figure 6. These results are consistent with the results of the previous two methods. As the central limit theorem would lead us to believe, the distribution of critical VSupply voltages has a Gaussian shape.

Figure 7: Histogram of the Critical Dying Gasp Supply Voltages.

## Conclusion

I performed a worst-case analysis of a simple comparator circuit using three methods and got three consistent results – as I should have. The LTSpice approach was the most interesting to me because it can be performed using the LTSpice simulator on circuits for which I have no transfer functions.


## The Tyranny of the Spreadsheet Cell

Quote of the Day

Never give up on a dream just because of the time it will take to accomplish it. The time will pass anyway.

— Earl Nightingale.

Figure 1: The Concept of Cell is Both a Strength and a Weakness of Spreadsheets. (Source)

I had a conversation the other day with an engineer to whom I was expressing my frustrations with using Excel to process large data sets of complex numbers.  She also has processed large data sets with Excel and commented that she found the processing painful. I told her about how I was using Excel to work with my data sets, but my techniques all seemed contrived and overly complex. While you can use Excel to work with these data sets, it is a bit like trying to use a Swiss Army knife as a screwdriver. Yes, it can turn a screw but there are much better ways!

As we talked about our Excel frustrations, I decided that all my issues with the Excel have to do with its reliance on the concept of a cell.

I have grown weary of the tyranny of the spreadsheet cell. In a spreadsheet, you have to deal with data at the cell level. Even when I think of the data as an aggregation (e.g. a matrix, list, or vector), Excel forces me to deal with each individual data item when the whole is better viewed as an aggregation of data. When I apply a function in Excel, I must explicitly apply the function to each individual data item. While Excel does support some forms of aggregation, as with matrices and named ranges, it almost seems like they were added as an afterthought.

My issues are not limited to the lack of data aggregations – Excel is clumsy to use with complex numbers. I recently had to deal with impedance calculations involving a massive number of complex number calculations in Excel – it worked, but it was extremely painful. The same calculations in Mathcad took a couple of simple statements – the brevity was because (1) Mathcad handles complex numbers as easily as real numbers, and (2) Mathcad handles data aggregations as easily as it does individual numbers.

However, Excel does have its strengths – it provides an excellent demonstration of the power of an integrated data analysis environment. The ability to gather data, process the data with automation (i.e. VBA), and display the data makes for a powerful tool. The power of the tool is limited, however, by the cell-based data model. I would argue that the ability to deal with data aggregations makes the combination of RStudio and ggplot2 a far superior data processing environment to that of Excel.

Understand that I work with Excel every day. I even occasionally compete in Excel competitions to keep my skills up. But I do not try to use it to solve every problem. Excel is great for small data sets of real numbers with a tabular structure – it also does well with lists of strings. For almost anything else, Mathcad, Mathematica, or R then become my tools of choice.


Posted in software | 8 Comments

## Asteroid Impact Damage Estimate

Quote of the Day

People seldom improve when they have no other model but themselves to copy.

— Oliver Goldsmith, writer

## Introduction

Figure 1: Photo of Asteroid 1997 XF11 (Source).

I have seen a number of articles in the popular scientific press about asteroid 1997 XF11 and the close approach it made to Earth back in June. The June approach was not that close – ~27 million kilometers.  The closest approach is expected in 2028 and will be 980,000 km or  2.4 times the average Earth-Moon distance.

It seems like stories of asteroids approaching close to the Earth appear every few months. This makes sense because astronomers have done an excellent job identifying a large percentage (~93%) of the large Earth-orbit-crossing asteroids – back in the old days, we had no idea what was out there.

Some of the news stories focus on the destruction that an asteroid impact from an object like this would cause. For example:

• If it struck it could destroy a whole continent or potentially wipe out life on the planet. (UK article)
• If it were to hit us, it would kill between a quarter and a half of the world's population. (Another UK article)
• The asteroid is about a quarter mile (400 meters) wide, large enough to cause considerable local or regional damage were it to hit the planet. (Space.com article)

In this post, I will use a web-based impact calculator to look at the potential impact of an asteroid like this on the Earth.

## Background

### Background on Asteroid Impact Modeling

This paper was used to provide the various models used in the web calculator. It is quite readable and worth some time.

### Orbit Details

The 1997 XF11 orbit details can be found on this Wikipedia page. The orbit details of the Earth can be found on this Wikipedia page.

### 1997 XF11 Size

It is difficult to determine the size of 1997 XF11 because it is so small that we cannot image it. Thus, we can only estimate its size based on its brightness and assumptions about the reflectance of its surface. The following quote describes the difficulty in measuring the asteroid's size.

Better colour information obtained by imaging the asteroid through different filters may enable the size of the rock to be determined more accurately. The problem is that we cannot actually `see' details on the asteroid at this distance, and so the only way to estimate its size is by its brightness. To do this, we assume that it reflects about as much light as similar objects in the solar system. If 1997 XF11 is actually a much whiter, brighter colour, then it will be smaller for the same brightness: conversely, if it's made of especially dark rock the suggested one mile diameter could be a serious underestimate.

For those interested in the details of estimating an asteroid's size from its brightness, see this post.

### 1997 XF11 Orbit

Figure 2 shows the orbital data from NASA.

Figure 2: Orbit of 1997 XF11 (NASA).

## Analysis

### Asteroid Velocity

The asteroid's speed is a critical parameter in determining its impact significance. Figure 3 shows how I use the orbital speed formula to determine the asteroid's speed.

Figure 3: Asteroid Velocity at Point of Impact.
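The Figure 3 calculation can be sketched with the vis-viva form of the orbital speed formula, using 1997 XF11's published semi-major axis of ~1.442 AU. Note that this is the asteroid's heliocentric speed at Earth's distance from the Sun, which is the value I fed into the calculator; it ignores the further vector addition of Earth's own motion.

```python
import math

GM_SUN = 1.327_124e20   # Sun's gravitational parameter [m^3/s^2]
AU = 1.495_979e11       # astronomical unit [m]
A_XF11 = 1.442 * AU     # 1997 XF11 semi-major axis [m]

# Vis-viva equation: v^2 = GM * (2/r - 1/a), evaluated at r = 1 AU.
v = math.sqrt(GM_SUN * (2 / AU - 1 / A_XF11))
print(f"{v / 1000:.1f} km/s")  # ~34 km/s
```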

### Input Dialog

Figure 4 shows the input dialog for the web calculator. I am assuming nominal values:

• 2 km diameter (middle of the stated range)
• nominal density of 3000 kg/m3
• 45° impact angle (the most common impact angle – Shoemaker)
• an ocean impact at point with a depth equal to the Pacific average
• an orbital velocity of 34 km/s (Figure 3)
• Effect at 1000 km distance (a continental distance)

Figure 4: Input Dialog with My Inputs.
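The first quantity an impact calculator derives from these inputs is the kinetic energy of the impactor. As a rough cross-check of the inputs above (2 km diameter, 3000 kg/m³, 34 km/s), here is that first step; treating the asteroid as a sphere is of course a simplification.

```python
import math

DIAMETER = 2_000.0    # m (middle of the stated size range)
DENSITY = 3_000.0     # kg/m^3 (nominal rock density)
VELOCITY = 34_000.0   # m/s (from Figure 3)
MT_TNT = 4.184e15     # joules per megaton of TNT

# Spherical impactor: mass = density * (4/3) * pi * r^3.
mass = DENSITY * (4 / 3) * math.pi * (DIAMETER / 2) ** 3
energy = 0.5 * mass * VELOCITY ** 2

print(f"{energy:.2e} J = {energy / MT_TNT:.1e} Mt TNT")
```

An impact energy measured in millions of megatons of TNT is consistent with the devastation reported in Figure 6.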

### Output Dialog

Figure 5 shows the output dialog for the calculator. You must click on each category to see all the results.

Figure 5: Output Dialog of the Impact Calculator.

### Result Summary

I concatenate all the individual results into a single graphic in Figure 6.

Figure 6: Output Result Summary.

The asteroid impact would be devastating.

• Seismic effects comparable to an 8.1 magnitude quake
• An air blast that breaks windows at 1000 km distance
• A 100 meter high tsunami

## Conclusion

The calculator results are interesting. I can believe that the impact from an object like this would be devastating. According to the calculator, an impact like this occurs on average every 7 million years. Since the Earth is 4.5 billion years old, we must have many craters from asteroids like this – I assume most of the craters are underwater and not visible, or on land and worn down by geological processes. I did see that the Chesapeake Bay was formed by the impact of an asteroid about this size.


## Planetary Atmosphere Leakage

Quote of the Day

The progress of evolution from President Washington to President Grant was alone evidence to upset Darwin.

Henry Adams (1838-1918). The tenor of US political rhetoric has changed little over the last 195 years.

## Introduction

Figure 1: Photograph from NASA's MAVEN satellite showing atomic oxygen leaking from the atmosphere of Mars. (Source)

I have always been interested in the fact that some planets have atmospheres and others do not. At the time of formation, planets have a primary atmosphere that consists largely of light elements (hydrogen and helium) – Earth now has a secondary atmosphere formed by outgassing from tectonic activity and comet impact residue. For small bodies, these low-molecular-weight elements escape into space. I had never looked at how these gases escaped until I recently found a Wikipedia article about how gases escape from planetary atmospheres (e.g. Figure 1), and the math and physics involved were too enticing to pass up.

There are actually a number of mechanisms by which gases can leave a planet's atmosphere.

This post will focus on Jeans escape, a mechanism that depends on the fraction of gas molecules at sufficiently high temperatures having enough velocity to escape a planet's gravitational field. While Jeans escape is not a major source of atmospheric loss for the Earth, it is an important factor for smaller worlds like the Moon and Mars.

My Mathcad source and its PDF are here.

## Background

### Objective

I saw Figure 2 on this Quora post and I decided to learn about the meaning of this plot by regenerating it using Mathcad.

Figure 2: Interesting Graph that I Will Duplicate. (Source)

### Definitions

Figure 3: NASA Photo Showing Hydrogen Leakage in Red Around the Earth. (Source)

Jeans Escape
Particles (molecules or atoms) with a speed greater than a planet's escape velocity can escape from the planet, assuming the particle is not slowed down by impacting another particle. (Reference)
Exosphere
A thin, atmosphere-like volume surrounding a planet or natural satellite where molecules are gravitationally bound to that body, but where the density is too low for them to behave as a gas by colliding with each other. Figure 3 shows a good example of the Earth's exosphere as measured by a NASA satellite. (Source)
Exobase
The altitude at which upward-traveling molecules experience one collision on average. Molecules at the exobase altitude or higher are unlikely to encounter other molecules – those exobase molecules at the high-speed end of their velocity distributions will have an opportunity to escape from the planet. (Reference)
Escape Velocity
The minimum speed needed for an object to "break free" from the gravitational attraction of a massive body. (Source)
Maxwell–Boltzmann distribution
The statistical distribution that describes the velocity profile of molecules or atoms in a gas. (Source)

### Average Molecular Speed

The average speed of a molecule is given by Equation 1 (Source).

 Eq. 1 $\displaystyle \bar{v}_{gas}=\sqrt{\frac{8\cdot R\cdot T}{\pi \cdot MW}}$

where

• $\bar{v}_{gas}$ is the mean speed of a gas molecule.
• R is the universal gas constant.
• MW is the molecular weight of the gas.
• T is the exobase temperature of the gas (absolute). I should point out that I found conflicting information on exobase temperatures, which I document here.

### Escape Velocity

We can calculate the escape velocity from a planet using Equation 2 to determine when gas leakage will be significant on long time scales (Source).

 Eq. 2 $\displaystyle v_{e}=\sqrt{\frac{2\cdot G\cdot M}{R}}$

where

• M is the mass of the planet.
• R is the radius of the planet.
• G is the universal gravitational constant.
• ve is the escape velocity from the planet.

## Analysis

### An Escape Velocity Rule of Thumb

Many analyses of planetary gas leakage make use of the following rule of thumb (Source).

Calculations show that if the escape velocity of a planet exceeds the average speed of a given type of molecule by a factor of 6 or more, then these molecules will not have escaped in significant amounts during the lifetime of the solar system.
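The rule of thumb is easy to check numerically with Equations 1 and 2. The sketch below applies it to the Earth, assuming an exobase temperature of 1000 K (a rough value – as noted in Appendix A, the published exobase temperatures conflict).

```python
import math

R_GAS = 8.314        # universal gas constant [J/(mol*K)]
G = 6.674e-11        # universal gravitational constant [m^3/(kg*s^2)]
M_EARTH = 5.972e24   # Earth's mass [kg]
R_EARTH = 6.371e6    # Earth's radius [m]
T_EXO = 1000.0       # assumed Earth exobase temperature [K]

def mean_speed(molar_mass_kg: float, temp_k: float) -> float:
    # Equation 1: Maxwell-Boltzmann mean speed.
    return math.sqrt(8 * R_GAS * temp_k / (math.pi * molar_mass_kg))

v_escape = math.sqrt(2 * G * M_EARTH / R_EARTH)  # Equation 2
threshold = v_escape / 6                         # rule-of-thumb limit

for gas, mw in [("H2", 0.002), ("He", 0.004), ("N2", 0.028)]:
    v = mean_speed(mw, T_EXO)
    verdict = "retained" if v < threshold else "escapes"
    print(f"{gas}: {v:6.0f} m/s vs limit {threshold:.0f} m/s -> {verdict}")
```

Under this assumption, hydrogen and helium exceed the ve/6 limit while nitrogen falls below it, consistent with the Earth keeping its nitrogen but losing its light gases.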

### Calculations

#### Setup

Figures 4(a) and 4(b) show how I set up my calculation.

Figure 4(a): Calculation Setup.
Figure 4(b): Load in Planet Data.

#### Modeling

Figure 5 shows how I plotted my data in Mathcad. I am not happy with this plot because it does not look as clear as I would like. As an alternative, I plotted the data (Figure 6) using a presentation-grade graphics package.

 Figure 5: Plot of the Escape Velocity and the Planetary Escape Velocities ÷ 6.

### Graphic View

Figure 6 is a plot I made using the same data as in Mathcad but with the scientific plotting package called Originlab.

Figure 6: Gas Molecular Velocities Versus Planetary Escape Velocities ÷ 6.

Figure 6 tells us that

• The terrestrial planets (Mercury, Venus, Earth, and Mars) do not have sufficient gravity to retain large amounts of hydrogen and helium.
• The gas and ice giants (Jupiter, Saturn, Uranus, and Neptune) have sufficient gravity to retain hydrogen and helium.
• The Moon is large enough that it could retain some CO2, but in fact its atmosphere is almost nonexistent. This is because other mechanisms (e.g. solar radiation pressure) drove away what little gas was there.

## Conclusion

I have always wondered why hydrogen and helium have leaked away from the Earth's atmosphere, but oxygen, nitrogen, and carbon dioxide have remained. This plot shows me that the Earth is massive enough to retain these gases, but not massive enough to retain hydrogen and helium.

Of course, other factors also play a role. For example, the Earth has a magnetic field that prevents our atmosphere from being eroded away by the solar wind. Mars is smaller than the Earth and has no magnetic field. This means that gases will be driven off by the solar wind.

To show how complex this issue is, Venus is closer to the Sun than the Earth and has a much smaller geomagnetic field than the Earth. However, it has retained an enormous amount of carbon dioxide in its atmosphere and has developed an induced magnetic field in its ionosphere that helps minimize the impact of atmospheric erosion by the solar wind.

## Appendix A: Planetary Exobase Temperatures.

The following tables contain the exobase temperatures for the planets of our solar system. Note that I found conflicting data for the exobase temperatures of Mercury, Venus, and Jupiter. I chose to use this data for consistency.

Figure 6: Basic Atmospheric Parameters for the Giant Planets. (Source)

Figure 7: Basic Atmospheric Parameters for Venus, Earth, Mars, Titan. (Source)

Figure 8: Basic Atmospheric Parameters for Mercury, the Moon, Triton, and Pluto. (Source)

## Appendix B: Conflicting Versions of Figure 6.

Figure 9 shows another common form of my Figure 6. Notice that the x-axis is reversed and Jupiter, Saturn, Neptune, and Uranus have different exobase temperatures than in my Figure 6.

Figure 9: Similar Example with Contradictory Information (Source).

The Wikipedia also has a chart (Figure 10) based on data similar to that of Figure 9. I like the style of this graphic.

Figure 10: Wikipedia Version of My Figure 6.

Posted in Astronomy | 2 Comments

## Fiber Optic Cable and Lightning

Quote of the Day

Many spend their time berating practitioners for not applying their method. We all need to disseminate our ideas, but most of our time should be spent applying and improving our methods, not selling them. The best way to sell a mouse trap is to display some trapped mice.

— David Parnas. This has been my approach to selling people on Computer Algebra Systems (CASs). I believe the best way to sell CASs to engineers is to show them solved, real-world engineering problems – which are my trapped mice.

Figure 1: Example of Lightning Damage From a Surge Coming Through the Ethernet Ports.

Lightning is a tough problem. All of my personal electronic systems are well grounded and have the best surge protection I can buy. Yet I still suffer occasional losses due to lightning – for example, this weekend I replaced a surge-blown power adapter at my cabin in northern Minnesota. Intuitively, you would think that fiber optic systems should be better protected against lightning strikes than copper-based systems because glass fiber does not conduct electricity. This is not necessarily true.

Unfortunately, the story is a bit more complicated than just copper versus glass. There are literally dozens of fiber optic cable types; however, for the sake of this discussion, I will assume that there are two types of fiber optic cable: those that contain no metal and those that do.

Engineers generally refer to a fiber optic cable that contains no metal as a dielectric cable (Figure 2). My personal belief is that homes connected with dielectric cable experience less surge damage than homes connected with metal-bearing cable – I am in the process of testing this hypothesis. Notice that dielectric cable contains strength members made of Kevlar that allow it to be pulled into position.

Figure 2: Standard Dielectric Cable. (Source)

One issue with dielectric cable is that it does not contain a tracer wire, which lets people who are digging locate underground cables using a wire tracer. Tracer wires are commonly used with standard utility services, such as gas, water, and electricity. Thus, a buried dielectric cable is more likely to experience an accidental cut than a cable with an embedded tracer wire. You can run a tracer wire outside of the dielectric cable, but then you need to make sure it is grounded properly.

While we are seeing service providers use more dielectric cable (example: all-dielectric, self-supporting [ADSS] cable), the vast majority of deployments use cable that contains metal, and for good reasons. In Minnesota, over 90% of deployments involve aerial fiber cable – cable strung in the air along poles. Figure 3 shows the construction of a typical aerial fiber cable. This cable contains a heavy metal strength member that gives the cable sufficient tension resistance to survive hanging between poles. Pole deployments require very strong cables because the cables must bear not only their own weight but also the stresses added by accumulated ice and wind.

Figure 3: Typical Aerial Fiber Optic Cable. (Source)

Unfortunately, metal in the cable provides a path for lightning to travel. This metal is always grounded for safety, but even a grounded cable will develop some surge voltage on it when lightning strikes.

When a fiber optic cable is run to a home, it frequently has metal strength members along its sides (Figure 4). These strength members make it easy to pull the cable through conduit or trenches. The strength members can also be used by wire tracers to locate the cable.

Figure 4: Commonly Used Fiber Optic Cable with Two Strength Members. (Source)

Lightning can also travel along the metallic path provided by these strength members. As with cable deployed on poles, the strength members are always grounded for safety. However, even a grounded cable will develop some surge voltage on it when lightning strikes.

We continue to work on reducing the likelihood of lightning damaging fiber optic systems. ADSS cable is a big step forward and will help, but the need for a tracer wire near the home still complicates the issue. I think putting the fiber optic hardware indoors and feeding it with dielectric cable within the home and with grounded, metal-bearing cable outside the home is probably the long-term answer.

Posted in Fiber Optics | 2 Comments

## 10,000 Boomers Turning 65 Every Day

Quote of the Day

I hope I shall possess firmness and virtue enough to maintain what I consider the most enviable of all titles, the character of an honest man.

— George Washington

## Introduction

Figure 1: US Birth Rate (births/1000 people) with the Baby Boom Years (1946 to 1964) in Red. (Source)

I have started doing some succession planning for my engineering team. I am having to deal with the retirement of key staff members, and I need to ensure continuity of productivity. The majority of the engineers on my team are "baby boomers" – people born between 1946 and 1964 (inclusive). I am a boomer myself.

The front end of the baby boomers began to turn 65 in 2011, and boomers will continue to turn 65 until 2029. I started to wonder: how many boomers are turning 65 every day? The Social Security Administration estimates that 10,000 Americans are turning 65 every day (source). As I thought about it, I realized that I should be able to estimate the number of people turning 65 every day by examining graphs of the US population and birth rate (Figure 1). It is a nice Fermi problem and the subject of this post.

## Background

### Approach

I will base my estimate primarily on the number of babies born during each year of the boomer years, subtracting the average number who typically die before turning 65, and adding the number of foreign-born people who would be missed in the baby totals.

### Modeling Survival

Actuaries have done an excellent job assembling life tables. Life tables can tell you many things, but for this post I am focused on the percentage of live births that survive to age 65. Appendix A shows a common life table, which indicates that 83% of live births survive to age 65.

### Modeling Immigration

Not all people in the US turning 65 today are native-born. Since I do not know the age profile of the immigrants, I am going to have to make a guess as to the percentage of people turning 65 every day who are foreign-born. Figure 2 shows the percentage of foreign-born residents as a function of time. I am going to assume that most of the immigrants turning 65 today came during the 1940s through 1960s, when foreign-born people averaged ~7% of the population. I will increase my estimate of the number of native-born people turning 65 every day by a factor of $\displaystyle \frac{1}{{1-7\%}}$ to account for the foreign-born people turning 65.

Figure 2: Percentage of Foreign-Born US Residents. (Source)

## Analysis

### Rough Estimate

The quickest way to estimate the number of boomers turning 65 every day is to:

• Compute the average number of boomers born each year by dividing the total number of boomers by the number of years during which boomers were born.
• Divide the average yearly birth count by 365 to get the average daily birth count.
• Multiply the daily birth count by the survival percentage to obtain the number of native-born people turning 65 each day.
• Divide by 93% (100% − 7%) to account for the foreign-born people turning 65 each day.

Figure 3 shows my mathematical work.

Figure 3: Rough Estimate of the Daily Number of Americans Turning 65.

This estimate is very close to 10,000 people turning 65 every day.
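The rough estimate can also be reproduced in a few lines of Python. This is only a sketch of the bullet-list procedure above, using the 77.3 million official boomer total, the 83.3% survival rate from Appendix A, and the assumed 7% foreign-born share:

```python
# Rough estimate of Americans turning 65 each day, using figures from the post
total_boomers = 77.3e6         # official count of boomer births
boom_years = 1964 - 1946 + 1   # 19 birth years, 1946-1964 inclusive
survival_to_65 = 0.833         # fraction of births surviving to 65 (Appendix A)
foreign_born_share = 0.07      # assumed foreign-born fraction of the population

daily_births = total_boomers / boom_years / 365
turning_65_daily = daily_births * survival_to_65 / (1 - foreign_born_share)
print(round(turning_65_daily))  # close to 10,000
```

The result lands just under 10,000 per day, matching the Social Security Administration's figure.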

### More Detailed Estimate

Let's try a slightly different approach that will provide us with an estimate of the number of people turning 65 per day for each year.

#### US Birth Rate Versus Time

Figure 4 shows my digitized version of Figure 1, which is the US annual birth rate (i.e. births/1000 population).

Figure 4: Digitized Version of Figure 1.

#### US Population Versus Time

Figure 5 shows the US population versus time (Source).

Figure 5: US Population Versus Time.

I can use the birth rate (Figure 4) and population data (Figure 5) to estimate the number of babies born per year.

#### Compute Daily Birth Rate By Year

Figure 6 shows the number of births per year, which I computed by multiplying the population by the birth rate (births per 1000 people). For fun, I also estimated the total number of boomer babies at 76.9 million, which is very close to the official value of 77.3 million (Source).

Figure 6: Yearly Boomer Births.
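The per-year birth computation behind Figure 6 can be sketched as follows. The 1957 population and birth rate below are approximate illustrative values, not my digitized data:

```python
# births in a year = population * (birth rate per 1000 people) / 1000
us_population_1957 = 172e6  # approximate US population in 1957 (illustrative)
birth_rate_1957 = 25.3      # approximate births per 1000 people (illustrative)

births_1957 = us_population_1957 * birth_rate_1957 / 1000
print(f"{births_1957 / 1e6:.2f} million births")
```

This gives roughly 4.35 million births for 1957, consistent with the peak boom years.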

#### Graph of Daily Retirement Rates

Figure 7 shows my estimate for the number of people turning 65 every day. The calculation simply time-shifts the number of births by 65 years, removes all those who would not have survived to 65, and adds in my estimate of the number of foreign-born people turning 65 as a fraction of the native-born. I estimate that the peak rate of people turning 65 will be about 10,700 per day and will occur in 2023.

Figure 7: Number of Americans Turning 65 Every Day, By Year.
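A minimal sketch of the time-shift calculation follows; the per-year birth counts in the dictionary are hypothetical placeholders, not my digitized data:

```python
SURVIVAL_TO_65 = 0.833     # fraction of births surviving to 65 (Appendix A)
FOREIGN_BORN_SHARE = 0.07  # assumed foreign-born fraction

# Hypothetical per-year birth counts (placeholders for the digitized data)
births_by_year = {1957: 4.35e6, 1958: 4.30e6}

def turning_65_per_day(year):
    # People turning 65 in a given year were born 65 years earlier
    births = births_by_year[year - 65]
    return births * SURVIVAL_TO_65 / (1 - FOREIGN_BORN_SHARE) / 365

print(round(turning_65_per_day(2022)))
```

With a peak-year birth count near 4.35 million, this yields about 10,700 per day, in line with my Figure 7 estimate.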

## Conclusion

I confirmed that ~10,000 people are turning 65 every day. That is a lot of retired folks – this means the following years will see many opportunities for young people. While things may not always look great employment-wise for our young people today, I have great hope that they will have many opportunities in the next few years.

## Appendix A: Survival Percentage to 65 Years

Figure 8 shows the number of people out of 100,000 who survive to various ages. This data shows that 83.3% of people born will reach age 65.

Figure 8: Survival Rates from Birth to Various Ages. (Source)


## Airliner vs Car Fuel Usage

Quote of the Day

The majority of men are bundles of beginnings.

— Ralph Waldo Emerson. I understand this saying well as I have three brothers, and I am the father of two sons.

## Introduction

Figure 1: Boeing 787 – Modern, Fuel Efficient Airliner. (Source)

My youngest son and his wife are going to have a baby girl in November – my first grandchild. Since they live in Montana, I will soon be doing some long-distance traveling. As I am famously cheap, I usually drive the 1000+ mile distance to visit them. I have started to think about flying there because the drive to their home is 16 hours of extreme boredom.

As I looked at the cost of airline tickets versus driving, I became curious as to how much fuel would be used to fly me to a Montana airport like Bozeman or Butte. I was surprised to learn that airliners can be quite fuel efficient compared to cars. This post contains my analysis.

## Background

### Data Sources

I am going to base the airliner portion of this work on long-haul and turbo-prop aircraft data available on the Wikipedia.

The car fuel economy data is available from the US Department of Energy. Fuel economy data is available for hundreds of cars – I limited my view to data from Honda and Subaru, my two favorite brands.

This post assumes that all the airliner seats are occupied. Appendix A shows the load factors of various airlines. I will also ignore any energy differences that exist between gasoline and jet fuel – I am only looking at volume of fuel.

### Unit Conversions

Equation 1 shows how to convert between kg per km and L per 100 km using the density of aviation fuel (0.81 gm/cm3). I should note that the Wikipedia page on airliner fuel usage also provides a "mileage" in terms of fuel volume, but it looks like each manufacturer used a different density value. I decided to use an average fuel density that I applied to all aircraft.

 Eq. 1 $\displaystyle \frac{1\cdot\text{L}}{100\cdot\text{km}}=\frac{1\cdot\text{L}\cdot 1000\cdot\frac{\text{cm}^3}{\text{L}}\cdot 0.81\cdot\frac{\text{gm}}{\text{cm}^3}\cdot 0.001\cdot\frac{\text{kg}}{\text{gm}}}{100\cdot\text{km}}=0.0081\cdot\frac{\text{kg}}{\text{km}}$
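Equation 1 is easy to apply in code. The per-seat fuel burn used below is an illustrative value, not a number from my table:

```python
FUEL_DENSITY = 0.81  # kg per liter (0.81 gm/cm^3 for aviation fuel)

def kg_per_km_to_liters_per_100km(kg_per_km):
    # 1 L per 100 km corresponds to 0.0081 kg/km at this density (Eq. 1)
    return kg_per_km * 100 / FUEL_DENSITY

# Illustrative per-seat fuel burn of 0.0187 kg/km
print(round(kg_per_km_to_liters_per_100km(0.0187), 2))  # -> 2.31
```

Using one density for all aircraft keeps the comparison consistent across manufacturers.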

## Analysis

### Airliner Fuel Economy Data

Figure 2 shows how I converted the airliner data from kg/km to L per 100 km.

Figure 2: Mathcad Unit Conversion Routine.

Figure 3 shows my table of long-haul and turbo-prop airliner fuel usage, which ranges from 2.31 to 6.11 L per 100 km per seat.

Figure 3: Long-Haul Airliner Fuel Economy Ranking.

### Automobile Fuel Economy

The fuel economy data was given in an Excel workbook, so I did my unit conversion work using a pivot table and a calculated field. Figure 4 shows a screenshot of my pivot table of Honda and Subaru highway-driving fuel economy data, which ranges from 5.88 to 10.23 liters per 100 km.

Figure 4: 2016 Honda and Subaru Fuel Economy.

## Conclusion

I found that fully loaded long-haul and turbo-prop airliners have a fuel economy between 2.31 and 6.11 liters per 100 km per seat. My favorite brands of automobiles had fuel economies between 5.88 and 10.23 liters per 100 km. So a fully loaded airliner uses substantially less fuel per passenger-km than a car carrying a single passenger.

## Appendix A: Airliner Occupancy Levels

Figure 5 shows the load factor (i.e. percentage of seats occupied) by airline for various years – it looks like most airlines operate at ~85%. I know that most of my flights are fully occupied, i.e. load factor = 100%.

Figure 5: Aircraft Load Factors. (Source)

Posted in General Science | 7 Comments