Asteroid Size Estimation

 

Quote of the Day

The job of a successful leader is to build relationships that are based on mutual respect and the recognition that others know things that we may need to know to get the job done.

— Edgar Schein. This is as good a description of leadership that I have found.


Introduction

Figure 1: View of 2012 TC4 From Earth.

Figure 1: View of 2012 TC4 From Earth. Note how small and faint the asteroid is in the photo. (Source)

I often see announcements of Near-Earth Objects (NEOs) in the scientific press. For asteroids, these announcements are usually accompanied by a size estimate of the asteroid. In this post, I will discuss a commonly used formula for the effective spherical diameter of an asteroid based on its normalized brightness (i.e. absolute magnitude).

Most asteroids are such small objects that astronomers cannot obtain a large enough image to make a direct measurement. Instead, the astronomers estimate the size of the asteroid by on its range and brightness. Unfortunately, the brightness of an asteroid depends on it surface composition, which is usually unknown when the asteroid is little more than a point of light when first seen from Earth. Astronomers are then forced to give a wide range of possible diameters to account for their surface composition uncertainty.

Background

Definitions

Apparent Magnitude (V)
The visual brightness of an asteroid when observed and measured visually or with a CCD camera. (Source)
Absolute Magnitude (H)
The visual magnitude of an asteroid if it were 1 AU from the Earth and 1 AU from the Sun and fully illuminated, i.e. at zero phase angle – a geometrically impossible situation. (Source)
Geometric Albedo (pV)
The ratio of the brightness of a planetary body, as viewed from the Sun, to that of a white, diffusely reflecting sphere of the same size and at the same distance. Zero for a perfect absorber and 1 for a perfect reflector. (Source)
I smiled when I read this description of geometric albedo – it is VERY similar to the description of target strength as used by the sonar community. I am always amazed at the similarity between the various engineering and scientific disciplines.
Bond Albedo (A)
The Bond albedo is the fraction of power in the total electromagnetic radiation incident on an astronomical body that is scattered back out into space. The Bond albedo is related to the geometric albedo by the expression A=p\cdot q. , where q is termed the phase integral. (Source)
Equivalent Spherical Diameter
The equivalent spherical diameter of an irregularly shaped object is the diameter of a sphere of equivalent volume. (Source)

Key Formula

Equation 1 shows the formula that I see being used in a number of papers.

Eq. 1 \displaystyle D({{p}_{V}},H)=\frac{{1329}}{{\sqrt{{{{p}_{V}}}}}}\cdot {{10}^{{-0.2\cdot H}}}\left[ {\text{km}} \right]

where

  • D is the diameter of the asteroid [km].
  • pV is geometric albedo [dimensionless].
  • H is absolute magnitude of the asteroid [dimensionless].

The detailed derivation of Equation 1 is a bit involved, but quite interesting. See this document for details (section 4.2).

Post Objective

This post will consist of two calculations using Equation 1:

Analysis

Setup

Figure 2 shows my setup for the calculations.

Figure 2: Calculation Setup.

Figure 2: Calculation Setup.

U of I page on asteroids Table of Astronomical Symbols List of Minor Planet Magnitudes and Sizes

Two Asteroid Examples

Figure 3 shows my calculations for the effective diameter of 2012 TC4 and Ceres. My  calculations are in good agreement with the published information.

Figure 3: Calculations for Two Asteroids.

Figure 3: Calculations for Two Asteroids.

Minor Planets Center Table

Figure 4 shows my calculations for duplicating the effective diameter versus absolute magnitude table. Within the rounding used by the Minor Planets Center, I duplicated their results.

Figure 4: My Duplication of the Minor Planets Center Table.

Figure 4: My Duplication of the Minor Planets Center Table.

Conclusion

I was able to use Equation 1 to duplicate a number of results for asteroid sizes. This exercise showed me that it is difficult to get an accurate size estimate for asteroid because their albedos can vary so widely.

Save

Save

Save

Save

Save

Save

Save

Save

Posted in Astronomy | 1 Comment

Calculating a Parameter's Worst-Case Range

 

Quote of the Day

You have enemies? Good. That means you've stood up for something, sometime in your life.

— Winston Churchill. This is true in both politics and corporate life.


Introduction

Figure 1: Dying Gasp Circuit.

Figure 1: Dying Gasp Circuit.

Many electronic systems are required to generate an alarm when they detect their power failing – the alarm is referred to as a "dying gasp". These systems are required to generate a dying gasp alarm when their input voltage drops below a specified level.

The circuit in Figure 1 provides a typical implementation example. When the input voltage, called VSupply, is at a level where the voltage across R2 drops below VReference, a level that is accurately and inexpensively generated using bandgap voltage reference circuits.

Because of circuit tolerances associated with R1, R2, and VReference, there is a range VSupply values could trigger a dying gasp alarm. In this post, I will show three different ways to determine the range of critical VSupply values. The third method is the most interesting because it involves using feedback in a circuit simulator to solve generate the required supply voltage.

For those who are interested in more dying gasp design details, I addressed how to design a charge storage system to enable the hardware to generate a dying gasp alarm in this post. For those who are interested, my Mathcad/LTSpice source and its PDF are here.

Background

Definitions

Tolerance
The total amount by which a given dimension may vary, or the difference between the limits. (Source: ANSI Y14.5M-1982)
Worst-Case Analysis
Worst-case circuit analysis is an analysis technique which, by accounting for component variability, determines the circuit performance under a worst-case scenario. (Source)
Extreme Value Analysis (EVA)
EVA involves evaluating a circuit's performance using every possible combination of extreme component value. It is not a statistical method – every possible combination of extreme component value is analyzed.
Root Sum Square Analysis (RSS)
RSS assumes that the design tolerance for a system parameter is composed of the sum of Gaussian distributed variables. Each variable is assumed to have tolerances that can be modeled using a centered normal distribution with the total tolerance range of ±3·σ about the mean. The overall tolerance of a system parameter is estimated by computing the square root of the sum of the squared  variable tolerances.
Monte Carlo Analysis
Monte Carlo analysis is a statistical analysis that calculates the response of a circuit when device model parameters are randomly varied between specified tolerance limits according to a specified statistical distribution. (Source)

Three Approaches

I will use three different ways of determining the range of power supply values that will cause the system to generate a dying gasp.

  • Arithmetic Sum

    This is the traditional approach to worst-case analysis. It produces the absolute maximum and minimum values for a parameter. This approach has two shortcomings: (1) for complex systems, it is difficult to determine, (2) for many system, it produces a result that is so pessimistic that there is no way to make an economically feasible system.

  • Monte Carlo method in Mathcad

    For Monte Carlo analysis, we normally assume that the tolerance of a variable can be modeled using a uniform probability density. We then perform multiple analysis using values randomly generated variable values. This produces a set of output values will approximately reflect the probability density of the real product. The main issue with using Mathcad is that it requires that you have an algebraic solution for the system's response to the input variables. For complex systems, we frequently do not an explicit formula.

  • Monte Carlo method in LTSpice

    Same approach as for Mathcad, but implemented using a circuit simulator. This approach can handle very complex systems that would defeat Mathcad.

I should note that there are other approaches that could be used (e.g. RSS, EVA), but I will focus on the three methods I listed.

Using Feedback to Solve For a Circuit Value

In my third method (LTSpice-based), I use feedback to have the circuit determine the required power supply setting that causes the resistor divider to output the reference voltage. Figure 2 shows a block diagram of how that approach is implemented.

Analysis

Arithmetic

Figure 3 shows the the arithmetic approach. In this case, I applied Kirchhoff's voltage law to the circuit of Figure 1 and determined the supply voltage as a function of the resistor values and the reference voltage. The resulting formula was simple enough that the combination of component values required to produce extreme values could be determined by inspection. This situation rarely occurs in practice, but it occurred today.

Figure 3: Minimum/Maximum Solution Using Arithmetic Approach.

Figure 3: Minimum/Maximum Solution Using Arithmetic Approach.

Monte Carlo in Mathcad

Figure 4 shows how I used Mathcad to perform the Monte Carlo analysis. Observe that the Monte Carlo analysis did produce as extreme values as the Arithmetic approach. If I used more random iterates, I would get closer to the true minimum and maximum values.

Figure 4: Estimate of Minimum and Maximum Values Using the Monte Carlo Method.

Figure 4: Estimate of Minimum and Maximum Values Using the Monte Carlo Method.

Monte Carlo Method in LTSpice

Figure 5 shows I took the circuit of Figure 1 and used feedback to generate the vSupply value for a given set of R1, R2, and reference voltage values. I ran the simulation for 5000 sets of randomly selected component values.

Figure 4: Using Feedback to Solve For The Critical Supply Voltage.

Figure 5: Using Feedback to Solve For The Critical Supply Voltage.

Figure 6 shows the results of my simulation. As you can see, 5000 simulations were performed.

Figure M: Plot of the Circuit Solutions.

Figure 6: Plot of the Circuit Solutions.

Figure 7 shows a histogram of the results in Figure 5. The results of Figure 6 are consistent with the results of the previous two methods. As the central limit theorem would lead us to believe, the distribution of critical vSupply voltages has a Gaussian shape.

Figure 6: Histogram of the Critical Dying Gasp Supply Voltages.

Figure 7: Histogram of the Critical Dying Gasp Supply Voltages.

Conclusion

I performed a worst-case analysis of a simple comparator circuit using three methods and got three consistent results – as I should have. The LTSpice approach was the most interesting to me because it can be performed using the LTSpice simulator on circuits for which I have no transfer functions.

Save

Save

Save

Save

Save

Save

Posted in Electronics | Leave a comment

The Tyranny of the Spreadsheet Cell

 

Quote of the Day

Never give up on a dream just because of the time it will take to accomplish it. The time will pass anyway.

— Earl Nightingale.


Figure 1: The Concept of Cell is Both a Strength and a Weakness of Spreadsheets. (Source)

Figure 1: The Concept of Cell is Both a Strength and a Weakness of Spreadsheets. (Source)

I had a conversation the other day with an engineer to whom I was expressing my frustrations with using Excel to process large data sets of complex numbers.  She also has processed large data sets with Excel and commented that she found the processing painful. I told her about how I was using Excel to work with my data sets, but my techniques all seemed contrived and overly complex. While you can use Excel to work with these data sets, it is a bit like trying to use a Swiss Army knife as a screwdriver. Yes, it can turn a screw but there are much better ways!

As we talked about our Excel frustrations, I decided that all my issues with the Excel have to do with its reliance on the concept of a cell.

I have grown weary of the tyranny of the spreadsheet cell. In a spreadsheet, you have to deal with data on a cell level. Even when I think of the data as an aggregation (ex. matrix, list, vector), Excel forces me to deal with each individual data item when the entire data items is better viewed as an aggregation of data. When I apply a function in Excel, I must explicitly apply the function to each individual data item. While Excel does support some forms of aggregation, like with matrices and named ranges, it almost seems like it was added as an afterthought.

My issues are not limited to the lack of data aggregations – Excel is clumsy to use with complex numbers. I recently had to deal with impedance calculations involving a massive number of complex number calculations in Excel – it worked, but it was extremely painful. The same calculations in Mathcad took a couple of simple statements – the brevity was because (1) Mathcad handles complex numbers as easily as real numbers, and (2) Mathcad handles data aggregations as easily as it does individual numbers.

However, Excel does have its strengths – it provides an excellent demonstration of the power of an integrated data analysis environment. The ability to gather data, process the data with automation (i.e. VBA), and to display the data makes for a powerful tool. The power of tool is limited, however, by the cell-based data model. I would argue that ability to deal with data aggregations makes the combination of Rstudio and ggplot2 a far superior data processing environment to that of Excel.

Understand that I work with Excel every day. I even occasionally compete in Excel competitions to keep my skills up. But I do not try to use it to solve every problem. Excel is great for small data sets of real numbers with a tabular structure – it also does well with lists of strings. For almost anything else, Mathcad, Mathematica, or R then become my tools of choice.

Save

Save

Save

Save

Save

Posted in software | 8 Comments

Asteroid Impact Damage Estimate

 

Quote of the Day

People seldom improve when they have no other model but themselves to copy.

— Oliver Goldsmith, writer


Introduction

Figure 1: Photo of Asteroid 1997XF11.

Figure 1: Photo of Asteroid 1997 XF11 (Source).

I have seen a number of articles in the popular scientific press about asteroid 1997 XF11 and the close approach it made to Earth back in June. The June approach was not that close – ~27 million kilometers.  The closest approach is expected in 2028 and will be 980,000 km or  2.4 times the average Earth-Moon distance.

It seems like stories of asteroids approaching close to the Earth appear every so many months. This makes sense because astronomers have done an excellent identifying a large percentage (~93%) of the large Earth orbit crossing asteroids – back in the old days, we had no idea what was out there.

Some of the news stories focus on the destruction that an asteroid impact from an object like this would cause. For example:

  • If it struck it could destroy a whole continent or potentially wipe out life on the planet. (UK article)
  • If it were to hit us, would kill between a quarter and a half of the world's population. (Another UK article)
  • The asteroid is about a quarter mile (400 meters) wide, large enough to cause considerable local or regional damage were it to hit the planet. (Space.com article)

In this post, I will use a web-based impact calculator to look at the potential impact of an asteroid like this on the Earth.

Background

Background on Asteroid Impact Modeling

This paper was used to provide the various models used in the web calculator. It is quite readable and worth some time.

Orbit Details

The 1997 XF11 orbit details can be found on this Wikipedia page. The orbit details of the Earth can be found on this Wikipedia page.

1997 XF11 Size

It is difficult to determine the size of 1997 XF11 because it is so small that we cannot image it. Thus, we can only estimate its size based on its brightness and assumptions about the reflectance of its surface. The following quote describes the difficulty in measuring the asteroid's size.

Better colour information obtained by imaging the asteroid through different filters may enable the size of the rock to be determined more accurately. The problem is that we cannot actually `see' details on the asteroid at this distance, and so the only way to estimate its size is by its brightness. To do this, we assume that it reflects about as much light as similar objects in the solar system. If 1997 XF11 is actually a much whiter, brighter colour, then it will be smaller for the same brightness: conversely, if it's made of especially dark rock the suggested one mile diameter could be a serious underestimate.

For those interested in the details of estimating an asteroids size from its brightness, see this post.

1997 XF11 Orbit

Figure 2 shows the orbital data from NASA.

Figure 2: Orbit of 1997XF11 (NASA).

Figure 2: Orbit of 1997XF11 (NASA).

Analysis

Asteroid Velocity

The asteroid's speed is a critical parameter in determining its impact significance. Figure 3 shows how I use the orbital speed formula to determine the asteroid's speed.

Figure 3: Asteroid Velocity at Point of Impact.

Figure 3: Asteroid Velocity at Point of Impact.

Orbital Speed

Input Dialog

Figure 4 shows the input dialog for the web calculator. I am assuming nominal values:

  • 2 km diameter (middle of the stated range)
  • nominal density of 3000 kg/m3
  • 45° impact angle (the most common impact angle – Shoemaker)
  • an ocean impact at point with a depth equal to the Pacific average
  • an orbital velocity of 34 km/s (Figure 3)
  • Effect at 1000 km distance (a continental distance)
Figure 4: Input Dialog with My Inputs.

Figure 4: Input Dialog with My Inputs.

Output Dialog

Figure 5 shows the output dialog for the calculator. You must click on each category to see all the results.

Figure 5: Output Dialog of the Impact Calculator.

Figure 5: Output Dialog of the Impact Calculator.

Result Summary

I concatenate all the individual results into a single graphic in Figure 6.

Figure 6: Output Result Summary.

Figure 6: Output Result Summary.

The asteroid impact would be devastating.

  • Seismic effects comparable to an 8.1 magnitude quake
  • An air blast that breaks windows at 1000 km distance
  • A 100 meter high tsunami

Conclusion

The calculator results are interesting. I can believe that the impact from an object like this would be devastating. According to the calculator, an impact like this occurs on average every 7 million years. Since the Earth is 4.5 billion years old, we must have many craters from asteroids like this – I assume most of the craters are underwater and not visible or on land and worn down by geologically processes. I did see that the Chesapeake Bay was formed from the impact of an asteroid about this size.

Save

Save

Save

Save

Save

Save

Posted in Astronomy | Leave a comment

Planetary Atmosphere Leakage

 

Quote of the Day

The progress of evolution from President Washington to President Grant was alone evidence to upset Darwin.

Henry Adams (1838-1918). The tenor of US political rhetoric has changed little over the last 195 years.


Introduction

Figure 1: Atomic oxygen scattering ultraviolet sunlight in the upper atmosphere of Mars, imaged by MAVEN’s Imaging Ultraviolet Spectrograph. Atomic oxygen is produced by the breakdown of carbon dioxide and water. Most oxygen is trapped near the planet, (indicated with a red circle) but some extends high above the planet and shows that that Mars is losing the gas to space.

Figure 1: Photograph from NASA's MAVEN satellite showing atomic oxygen leaking from the atmosphere of Mars. (Source)

I have always been interested in the fact that some planets have atmospheres and others do not. At the time of formation, planets have a primary atmosphere that consists largely of light elements (hydrogen and helium) – Earth now has a secondary atmosphere formed outgassing from tectonic activity and comet impact residue. For small bodies, these low-molecular weight elements escape into space. I had never looked at how these gases escaped until I recently found a Wikipedia article about how gases escape from planetary atmospheres (e.g. Figure 1), and the math and physics involved were too enticing to pass up.

There are actually a number of mechanisms by which gases can leave a planet's atmosphere, including:

This post will focus on Jeans escape, a mechanism that depends on fraction of gas molecules at sufficiently high temperatures having enough velocity to escape a planet's gravitational field. While Jeans escape is not a major source of atmospheric loss for Earth, it is an important factor for smaller worlds like the moon and Mars.

My Mathcad source and its PDF are here.

Background

Objective

I saw Figure 2 on this Quora post and I decided to learn about the meaning of this plot by regenerating it using Mathcad.

Figure 2: Interesting Graph that I Will Duplicate. (Source)

Figure 2: Interesting Graph that I Will Duplicate. (Source)

Definitions

Figure 2: Photo from NASA's DISCOVR Satellite Showing Hydrogen Leakage in Red Around the Earth. (Source)

Figure 3: NASA Photo Showing Hydrogen Leakage in Red Around the Earth. (Source)

Jeans Escape
Particles (molecules or atom) with a speed greater than a planet's escape velocity can escape from the planet assuming the particle is not slowed down by impacting another particle. (Reference)
Exosphere
A thin, atmosphere-like volume surrounding a planet or natural satellite where molecules are gravitationally bound to that body, but where the density is too low for them to behave as a gas by colliding with each other. Figure 3 shows a good example of the Earth's exosphere as measured by a NASA satellite. (Source)
Exobase
The altitude at which upward-traveling molecules experience one collision on average. Molecules at the exobase altitude or higher are unlikely to encounter other molecules – those exobase molecules at the high-spend end of their velocity distributions will have an opportunity to escape from the planet. (Reference)
Escape Velocity
The minimum speed needed for an object to "break free" from the gravitational attraction of a massive body. (Source)
Maxwell–Boltzmann distribution
The statistical distribution that describes the velocity profile of molecules or atoms in a gas. (Source)

Average Molecular Speed

The average speed of a molecule is given by Equation 1 (Source).

Eq. 1 \displaystyle {{\bar{v}}_{{gas}}}=\sqrt{{\frac{{8\cdot R\cdot {{T}}}}{{\pi \cdot MW}}}}

where

  • \bar{v}_{gas} is the mean velocity of a gas molecule.
  • R is universal gas constant constant.
  • MW is the molecular weight of the gas.
  • T is the temperature of gas (absolute).

Escape Velocity

We can calculate the escape velocity from a planet using Equation 2 to determine when gas leakage will be significant on long time scales (Source).

Eq. 2 \displaystyle {{v}_{e}}=\sqrt{{\frac{{2\cdot G\cdot M}}{R}}}

where

  • M is the mass of the planet.
  • R is the radius of the planet.
  • G is universal gravitational constant.
  • ve is the escape velocity from the planet.

Analysis

An Escape Velocity Rule of Thumb

Many analyses of the planetary gas leakage make use of the following rule of thumb (Source).

Calculations show that if the escape velocity of a planet exceeds the average speed of a given type of molecule by a factor of 6 or more, then these molecules will not have escaped in significant amounts during the lifetime of the solar system.

Calculations

Setup

Figures 4(a) and 4(b) show how I setup my calculation.

Setup Analysis
Figure 4 (a): Calculation Setup. Figure 4(b): Load in Planet Data.

Modeling

Figure 5 shows how I plotted my data in Mathcad. I am not happy with this plot because it does not look as clear as I would like. As an alternative, I plotted the data (Figure 6) using a presentation-grade graphics package.

MathcadPlot
Figure 5: Plot of the Escape Velocity and the Planetary Escape Velocities ÷ 6.

Graphic View

Figure 6 is a plot I made using the same data as in Mathcad but with the scientific plotting package called Originlab.

Figure 6: Gas Molecular Velocities Versus Planetary Escape Velocities.

Figure 6: Gas Molecular Velocities Versus Planetary Escape Velocities ÷ 6.

Figure 6 tells us that

  • The terrestrial planets (Mercury, Venus, Earth, and Mars) do not have sufficient gravity to retain large amounts of hydrogen and helium.
  • The gas and ice giants (Jupiter, Saturn, Uranus, and Neptune) have sufficient gravity to retain hydrogen and helium.
  • The moon is large enough that it could retain some CO2, but in fact its atmosphere is almost nonexistent. This is because other mechanisms (e.g. solar radiation pressure) drove away what little gas was there.

Conclusion

I have always wondered why hydrogen and helium have leaked away from the Earth's atmosphere, but oxygen, nitrogen, and carbon dioxide have remained. This plot shows me that the Earth is massive enough to retain these gases, but not massive enough to retain hydrogen and helium.

Of course, other factors also play a role. For example, the Earth has a magnetic field that prevents our atmosphere from being eroded away by the solar wind. Mars is smaller than the Earth and has no magnetic field. This means that gases will be driven off by the solar wind.

To show how complex this issue is, Venus is closer to the Sun than the Earth and has a much smaller geomagnetic field than the Earth. However, it has retained an enormous amount of carbon dioxide in its atmosphere and has developed an induced magnetic field in its ionosphere that helps minimize the impact of atmospheric erosion by the solar wind.

 Appendix A: Planetary Exobase Temperatures.

The following tables contain the exobase temperatures for the planet's of our solar system. Note that I found conflicting data for the the exobase temperatures of Mercury, Venus, and Jupiter. I chose to use this data for consistency.

Figure 1: Basic Atmospheric Parameters for the Giant Planets. (Source)

Figure 6: Basic Atmospheric Parameters for the Giant Planets. (Source)

Figure M: Basic Atmospheric Parameters for Venus, Earth, Marse, Titan. (Source)

Figure 7: Basic Atmospheric Parameters for Venus, Earth, Mars, Titan. (Source)

Figure M: Basic Atmospheric Parameters for Mercury, the Moon, Triton, and Pluto. (Source)

Figure 8: Basic Atmospheric Parameters for Mercury, the Moon, Triton, and Pluto. (Source)

 Appendix B: Conflicting Versions of Figure 6.

Figure 9 shows another common form of my Figure 6. Notice that the x-axis is reversed and Jupiter, Saturn, Neptune, and Uranus have different exobase temperatures than in my Figure 6.

Figure M: Similar Example with Contradictory Information.

Figure 9: Similar Example with Contradictory Information (Source).

The Wikipedia also has a chart (Figure 10) based on data similar to that of Figure 9. I like the style of this graphic.

Figure 10: Wikipedia Version of My Figure 6.

Figure 10: Wikipedia Version of My Figure 6.

Save

Save

Save

Save

Save

Save

Save

Save

Save

Save

Save

Save

Save

Save

Save

Save

Save

Save

Save

Save

Save

Posted in Astronomy | 2 Comments

Fiber Optic Cable and Lightning

 

Quote of the Day

Many spend their time berating practitioners for not applying their method. We all need to disseminate our ideas, but most of our time should be spent applying and improving our methods, not selling them. The best way to sell a mouse trap is to display some trapped mice.

— David Parnas. This has been my approach to selling people on Computer Algebra Systems (CASs). I believe the best way to sell CASs to engineers is to show them solved, real-world engineering problems – which are my trapped mice.


Figure 1: Example of Lightning Damage From a Surge Coming Through the Ethernet Ports.

Figure 1: Example of Lightning Damage From a Surge Coming Through the Ethernet Ports.

Lightning is a tough problem. All of my personal electronic systems are well grounded and have the best surge protection I can buy. Yet I still suffer occasional losses due to lightning – for example, this weekend I replaced a surge-blown power adapter at my cabin in northern Minnesota. Intuitively, you would think that fiber optic systems should be better protected against lightning strikes than copper-based systems because glass fiber does not conduct electricity. This is not necessarily true.

Unfortunately, the story is a bit more complicated than just copper versus glass. There are literally dozens of fiber optic cable types, however, for the sake of this discussion, I will assume that there are two types of fiber optic cable: those that contain no metal and those that do.

Engineers generally refer to a fiber optic cable that contains no metal as a dielectric cable (Figure 2). My personal belief is that homes connected with dielectric cable experience less surge damage than homes connected with metal-bearing cable – I am in the process of testing this hypothesis. Notice that dielectric cable contains strength members made of Kevlar that allow it to be pulled into position.

Figure 2: Standard Dielectric Cable. (Source)

Figure 2: Standard Dielectric Cable. (Source)

One issue with dielectric cable is that it does not contain a tracer wire, which allows people digging to determine the location of underground cables by using a wire tracer. These tracer wires are commonly used with standard utility services, such as gas, water, and electricity. Thus, a buried dielectric cable is more likely to experience an accidental cut than a cable with an embedded tracer wire. You can run a tracer wire outside of the dielectric cable, but then you need to make sure it is grounded properly.

While we are seeing service providers use more dielectric cable (example: all-dielectric, self-supporting cable [ADSS]), the vast majority of deployments use cable that contain metal and for good reasons. In Minnesota, over 90% of the deployments involve the use of aerial fiber cable – cable strung in the air along poles. Figure 3 shows the construction of a typical aerial fiber cable. This cable contain a heavy metal strength member that provides the cable sufficient tension resistance to survive hanging between poles.  Pole deployments require very strong cables the cables must not only bear their own weight, but the stresses added by accumulated ice and wind.

Figure 3: Typical Aerial Fiber Optic Cable. (Source)

Figure 3: Typical Aerial Fiber Optic Cable. (Source)

Unfortunately, metal in the cable provides a path for lightning to travel. This metal is always grounded for safety, but even a grounded cable will develop some surge voltage on it when lightning strikes.

When a fiber optic cable is run to a home, it frequently has metal strength members along its sides (Figure 3). This strength members make it easy to pull the cable through conduit or trenches. The strength members can also be used by wire tracers to locate the cable.

Figure 4: Commonly Used Fiber Optic Cable with Two Strength Members. (Source)

Figure 4: Commonly Used Fiber Optic Cable with Two Strength Members. (Source)

Lightning can also travel along the metallic path provided by these strength members. As with cable deployed on poles, the strength members are always grounded for safety. However, even a grounded cable will develop some surge voltage on it when lightning strikes.

We continue to work to reduce the likelihood a lightning damaging fiber optic systems. ADSS cable is a big step forward and will help, but the need for a tracer wire near the home still complicates the issue. I think putting fiber optic hardware indoors and feeding it with dielectric cable within the home and with grounded, metal-bearing cable outside the home is probably the long-term answer.

Posted in Fiber Optics | 2 Comments

10,000 Boomers Turning 65 Every Day

 

Quote of the Day

I hope I shall possess firmness and virtue enough to maintain what I consider the most enviable of all titles, the character of an honest man.

— George Washington


Introduction

Figure 1: US Birth Rate (births/1000 people) with the Baby Boom Years (1946 to 1964) in Red. (Source)

I have started doing some succession planning for my engineering team. I am having to deal with the retirement of key staff members, and I need to ensure continuity of productivity. The majority of the engineers on my team are "baby boomers" – people born between 1946 and 1964 (inclusive). I am a boomer myself.

The front end of the baby boom began to turn 65 in 2011, and boomers will continue to turn 65 until 2029. I started to wonder how many boomers are turning 65 every day. The Social Security Administration estimates that 10,000 Americans are turning 65 every day (source). As I thought about it, I realized that I should be able to estimate the number of people turning 65 every day by examining graphs of the US population and birth rate (Figure 1). It is a nice Fermi problem and the subject of this post.

Background

Approach

I will base my estimate primarily on the number of babies born during each year of the baby boom, subtracting the number who typically die before turning 65, and adding the number of foreign-born people who would be missed in the birth totals.

Modeling Survival

Actuaries have done an excellent job assembling life tables. Life tables can tell you many things, but for this post I am focused on the number of live births that survive to age 65. Appendix A shows a common life table, which indicates that 83% of live births survive to age 65.

Modeling Immigration

Not all people in the US turning 65 today are native-born. Since I do not know the age profile of the immigrants, I am going to have to make a guess as to the percentage of people turning 65 every day who are foreign-born. Figure 2 shows the percentage of foreign-born residents as a function of time. I am going to assume that most of the immigrants turning 65 today came during the 1940s through 1960s, when foreign-born people averaged ~7% of the population. I will increase my estimate of the number of native-born people turning 65 every day by \displaystyle \frac{1}{1-7\%} to account for the number of foreign-born people turning 65.

Figure 2: Percentage of Foreign-Born US Residents. (Source)

Analysis

Rough Estimate

The quickest way to estimate the number of boomers turning 65 every day is to

  • Compute the average number of boomers born each year by dividing the total number of boomers by the number of years during which boomers were born.
  • Divide the average yearly birth count by 365 to get the average daily birth count.
  • Multiply the daily birth count by the survival percentage to obtain the number of native-born people turning 65 each day.
  • Divide by 93% (100% − 7%) to account for the additional people turning 65 each day who were foreign-born.
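
The steps above fit in a few lines of Python; the inputs are the numbers used in this post (77.3 million boomers, 19 birth years, 83% survival to 65, an assumed 7% foreign-born fraction):

```python
# Rough Fermi estimate of the number of Americans turning 65 each day.

TOTAL_BOOMERS = 77.3e6        # official total boomer births
BOOM_YEARS = 1964 - 1946 + 1  # 19 birth years, inclusive
SURVIVAL_TO_65 = 0.83         # fraction of live births reaching age 65
FOREIGN_BORN = 0.07           # assumed foreign-born fraction

births_per_day = TOTAL_BOOMERS / BOOM_YEARS / 365
turning_65_per_day = births_per_day * SURVIVAL_TO_65 / (1 - FOREIGN_BORN)

print(round(turning_65_per_day))  # roughly 10,000 per day
```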

Figure 3 shows my mathematical work.

Figure 3: Rough Estimate of the Daily Number of Americans Turning 65.

This estimate is very close to the 10,000 people turning 65 every day quoted by the Social Security Administration.

More Detailed Estimate

Let's try a slightly different approach that will provide us with an estimate of the number of people turning 65 per day for each year.

US Birth Rate Versus Time

Figure 4 shows my digitized version of Figure 1, which is the US annual birth rate (i.e. births/1000 population).

Figure 4: Digitized Version of Figure 1.

US Population Versus Time

Figure 5 shows the US population versus time (Source).

Figure 5: US Population Versus Time.

US Births Per Year

I can use the birth rate (Figure 4) and population data (Figure 5) to estimate the number of babies born per year.

Compute Daily Birth Rate By Year

Figure 6 shows the number of births per year, which I computed by multiplying the population by the birth rate. For fun, I also estimated the total number of boomer babies at 76.9 million, which is very close to the official value of 77.3 million (Source).

Figure 6: Yearly Boomer Births.

Graph of Daily Retirement Rates

Figure 7 shows my estimate of the number of people turning 65 every day. The calculation simply time-shifts the number of births by 65 years, removes all those who would not have survived to 65, and adds in my estimate of the number of foreign-born people turning 65 as a fraction of the native-born. I estimate that the peak rate of people turning 65 will be about 10,700 per day and will occur in 2023.
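
A minimal Python sketch of this time-shifting calculation follows. The yearly birth counts below are hypothetical placeholders, not the digitized data from Figure 6:

```python
# Shift yearly births forward 65 years, apply survival to age 65, and
# correct for the foreign-born fraction. The births_by_year values are
# hypothetical, NOT the digitized data from Figure 6.

SURVIVAL_TO_65 = 0.833  # fraction of live births reaching 65 (Appendix A)
FOREIGN_BORN = 0.07     # assumed foreign-born fraction

births_by_year = {1946: 3.4e6, 1957: 4.3e6, 1964: 4.0e6}  # hypothetical

turning_65_per_day = {
    year + 65: births * SURVIVAL_TO_65 / (1 - FOREIGN_BORN) / 365
    for year, births in births_by_year.items()
}

for year, rate in sorted(turning_65_per_day.items()):
    print(year, round(rate))
```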

Figure 7: Number of Americans Turning 65 Every Day by Year.

Conclusion

I confirmed that ~10,000 people are turning 65 every day. That is a lot of retired folks – the coming years will see many opportunities for young people. While things may not always look great employment-wise for young people today, I have great hope that they will have many opportunities in the next few years.

Appendix A: Survival Percentage to 65 Years

Figure 8 shows the number of people out of 100,000 live births who survive to various ages. This data shows that 83.3% of people born will reach age 65.

Figure 8: Survival Rates from Birth to Various Ages. (Source)

Posted in General Mathematics | Leave a comment

Airliner vs Car Fuel Usage

 

Quote of the Day

The majority of men are bundles of beginnings.

— Ralph Waldo Emerson. I understand this saying well as I have three brothers, and I am the father of two sons.


Introduction

Figure 1: Boeing 787 – Modern, Fuel Efficient Airliner. (Source)

My youngest son and his wife are going to have a baby girl in November – my first grandchild. Since they live in Montana, I will soon be doing some long-distance traveling. As I am famously cheap, I usually drive the 1000+ mile distance to visit them. I have started to think about flying there because the drive to their home is 16 hours of extreme boredom.

As I looked at the cost of airline tickets versus driving, I became curious as to how much fuel would be used to fly me to a Montana airport like Bozeman or Butte. I was surprised to learn that airliners can be quite fuel efficient compared to cars. This post contains my analysis.

Background

Data Sources

I am going to base the airliner portion of this work on the long-haul and turbo-prop aircraft data available on Wikipedia.

The car fuel economy data is available from the US Department of Energy. There is fuel economy data for hundreds of cars – I limited my view to data from Honda and Subaru, my two favorite brands.

This post assumes that all the airliner seats are occupied. Appendix A shows the load factors of various airlines. I will also ignore any energy differences that exist between gasoline and jet fuel – I am only looking at volume of fuel.

Unit Conversions

Equation 1 shows how to convert between kg per km and L per 100 km using the density of aviation fuel (0.81 gm/cm3). I should note that the Wikipedia page on airliner fuel usage also provides a "mileage" in terms of fuel volume, but it looks like each manufacturer used a different density value. I decided to use an average fuel density that I applied to all aircraft.

Eq. 1 \displaystyle \frac{\text{L}}{100\cdot \text{km}}=\text{L}\cdot 1000\cdot \frac{\text{cm}^{3}}{\text{L}}\cdot 0.81\cdot \frac{\text{gm}}{\text{cm}^{3}}\cdot \frac{0.001\cdot \text{kg}}{\text{gm}}\cdot \frac{1}{100\cdot \text{km}}=0.0081\cdot \frac{\text{kg}}{\text{km}}
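
Equation 1 is easy to check numerically. Here is a small Python sketch of the conversion; the 0.025 kg/km input is a made-up example, not a value from my table:

```python
# Convert fuel burn from kg per km per seat to L per 100 km per seat,
# using the assumed jet-fuel density of 0.81 kg/L. Per Eq. 1,
# 1 L/100 km corresponds to 0.0081 kg/km.

FUEL_DENSITY = 0.81  # kg/L

def kg_per_km_to_l_per_100km(kg_per_km):
    return kg_per_km / FUEL_DENSITY * 100

print(round(kg_per_km_to_l_per_100km(0.0081), 6))  # 1 L/100 km, as Eq. 1 states
print(round(kg_per_km_to_l_per_100km(0.025), 2))   # hypothetical 0.025 kg/km seat
```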

Analysis

Airliner Fuel Economy Data

Figure 2 shows how I converted the airliner data from kg/km to L per 100 km.

Figure 2: Mathcad Unit Conversion Routine.

Figure 3 shows my table of long-haul and turbo-prop airliner fuel usage, which ranges from 2.31 to 6.11 L per 100 km per seat.

Figure 3: Long-Haul Airliner Fuel Economy Ranking.

Automobile Fuel Economy

The fuel economy data was given in an Excel workbook, so I did my unit conversion work using a pivot table and a calculated field. Figure 4 shows a screenshot of my pivot table of Honda and Subaru highway-driving fuel economy data, which ranges from 5.88 to 10.23 liters per 100 km.

Figure 4: 2016 Honda and Subaru Fuel Economy.

Conclusion

I found that fully loaded long-haul and turbo-prop airliners have a fuel economy between 2.31 and 6.11 liters per 100 km per seat. My favorite brands of automobiles had fuel economies between 5.88 and 10.23 liters per 100 km. So a fully loaded airliner uses substantially less fuel per person than a car carrying a single occupant.
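
As a sanity check, the per-person comparison can be sketched in a few lines of Python. The two fuel-economy values are mid-range picks from the figures quoted above, and the occupancy counts are assumptions for illustration:

```python
# Per-person fuel use: a mid-range airliner seat vs. a car with one or
# more occupants. Fuel-economy values are mid-range picks from the
# ranges quoted above; the occupancy counts are illustrative.

airliner_per_seat = 4.0  # L per 100 km per seat (mid-range of 2.31-6.11)
car_fuel_economy = 8.0   # L per 100 km (mid-range of 5.88-10.23)

for occupants in (1, 2, 4):
    per_person = car_fuel_economy / occupants
    print(f"car with {occupants} occupant(s): {per_person:.1f} L/100 km per person")

# With one occupant, the car uses about twice the fuel per person of a
# fully loaded airliner; with four occupants, the car comes out ahead.
```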

Appendix A: Airliner Occupancy Levels

Figure 5 shows the load factor (i.e. percentage of seats occupied) of various airlines for various years – it looks like most airlines operate at ~85%. I know that most of my flights are fully occupied, i.e. load factor = 100%.

Figure 5: Aircraft Load Factors. (Source)

Posted in General Science | 7 Comments

Shimming and Trimming Stairs to Equalize the Risers

 

Quote of the Day

Wealth, like happiness, can never be attained when sought after directly. It comes as a by-product of providing a useful service.

— Henry Ford


Introduction

Figure 1: My Crude Stair Model.

Almost exactly six years ago, I wrote a post on a carpentry hack that resolved a problem I had with a stairway after a contractor installed an insulated floor in my basement. The insulated floor raised the floor level and made the bottom step of my stairs short – a code violation. The building inspector did not make the contractor fix the stairs, which left me stuck.

Since I had carpet installers coming the next day and it was late at night, I decided to shim the stairs to make all the risers equal in height (see Figure 1). The original post addressed how I computed the thickness of the shims used. I have recently received questions from readers who have encountered every possible floor height change:

  • Lower floor height raising or lowering
  • Upper floor height raising or lowering
  • Both the upper and lower floors changing
Figure 2: Stringer Illustration. (Source)

In this post, I will show how to generalize my previous stair solution to handle these three cases. I also present some illustrations of what is involved in adjusting the stairs. The general solution uses a Mathcad program to compute both the final riser height and the thickness of the shims or trim cuts required for each step. In general, you may need a combination of shims and trim cuts to resolve a riser height problem caused by changes in flooring height.

Because trim cuts on existing stringers (Figure 2) are difficult, I normally would just cut new stringers – I shimmed in this case because no trim cuts were needed, no stringer material was available, and there was no time. The solution in my case was quick, cheap, and has worked well for the last six years – your case may be different.

My Mathcad source and its PDF are here for those who are interested.

Background

Definitions

shim
A thin often tapered piece of material (as wood, metal, or stone) used to fill in space between things (as for support, leveling, or adjustment of fit). (Source)
trimming
The act of making small cuts in material to provide space for fitting things together. If I had to trim a stringer, I probably would use a small circular saw to cut the straight sections and an oscillating saw for the corners. (Source: me)

Lower Floor Level Change

Equation 1 provides a formula for computing the shim thickness (positive values) or trim depth (negative values) required at each step to equalize the risers after the lower floor level has changed.

Eq. 1 \displaystyle {{\tau }_{i}}={{\tau }_{0}}\cdot \frac{{N-i}}{N}

where

  • τi is the shim (positive) or trimming (negative) thickness required for the ith riser. I count the steps from the bottom to the top.
  • τ0 is the thickness added (positive) or subtracted (negative) from the lower floor level.
  • N is the number of steps.

Upper Floor Level Change

Equation 2 provides a formula for computing the shim thickness (positive values) or trim depth (negative values) required at each step to equalize the risers after the upper floor level has changed.

Eq. 2 \displaystyle {{\tau }_{i}}={{\tau }_{N}}\cdot \frac{{i}}{N}

where

  • τN is the thickness added (positive) or subtracted (negative) from the upper floor level.

Combined Upper and Lower Floor Level Change

To solve this case, I just apply Equation 1 followed by Equation 2.
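
The two formulas are simple enough to combine in a short script. This Python sketch mirrors the Mathcad program's shim/trim output (it does not compute the new riser height); the step count and floor change in the example are hypothetical:

```python
# Shim (+) or trim (-) thickness for each step i = 0..N, where step 0 is
# the lower floor and step N is the upper floor (Eq. 1 plus Eq. 2).

def shim_thicknesses(n_steps, lower_change=0.0, upper_change=0.0):
    """Linearly distribute the lower and upper floor level changes
    across the steps so that all risers end up equal."""
    n = n_steps
    return [lower_change * (n - i) / n + upper_change * i / n
            for i in range(n + 1)]

# Hypothetical example: lower floor raised 1.0 in, upper floor unchanged,
# 13 steps -- the shims taper linearly from 1.0 in at the bottom to 0.
print([round(t, 3) for t in shim_thicknesses(13, lower_change=1.0)])
```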

Analysis

Conventions

In each example, I show the Mathcad formula and its output (a nested array): (1) the first element of the array shows the new rise value; (2) the second value is the array of shim/trim values. I always treat the lower floor height change as the zeroth element of the shim/trim array and the upper floor height change as the Nth element of the shim/trim array.

Lower Floor Raised Example

Figure 3 illustrates the only case that I have dealt with directly. I had an insulated floor installed in my basement, and it raised my lower floor level – I color the positive level changes (i.e. shims) in blue.

Figure 3: Lower Floor Raised Example.

Lower Floor Lowered Example

Figure 4 illustrates a situation that a reader recently presented to me. He actually lowered his lower floor level – I color the negative level changes (i.e. trim cuts) in yellow. The example shows the risers before and after trimming.

Figure 4: Lower Floor Lowered Example.

Upper Floor Raised Example

Figure 5 illustrates the case where the upper floor has been raised.

Figure 5: Upper Floor Raised Example.

Upper Floor Raised/Lower Floor Lowered Example

Figure 6 illustrates the case where the upper floor was raised and the lower floor was lowered. This case requires both shims and trim cuts. In this case, I would strongly consider cutting new stringers because all this shimming and trimming looks like a lot of work.

Figure 6: Upper Floor Raised, Lower Floor Lowered.

Conclusion

I hope this example helps those who are trying to shim their stairs under general circumstances. If I have time, I will try to come up with a good Excel solution to this problem.


Posted in Construction | Leave a comment

Lateral Force Rating for Various Nails

 

Quote of the Day

It's like, at the end, there's this surprise quiz: Am I proud of me? I gave my life to become the person I am right now. Was it worth what I paid?

— Richard Bach


Introduction

Figure 1: Excerpt Lateral Strength Reference Data from the NDS for 1.5 inch Thick Stock. (Source)

I am an amateur carpenter, and I work hard to ensure that I always comply with the applicable building codes. The various codes include requirements for a properly nailed joint (examples). Since I like to understand where these requirements come from, I have been reading some sections of the National Design Specification for Wood Construction (NDS) that address fastening with nails. During my reading, I saw many interesting formulas associated with determining the NDS design ratings for nail withdrawal force (covered here) and lateral force resistance, which is the subject of this post.

My focus here is on duplicating some of the results in NDS Table 11N, which contains lateral resistance design values for different nails in different species of wood. I did this exercise so that I could confirm that I understood the mechanics associated with the formulas that I read in the NDS. I will not be presenting any tutorial information because the NDS goes into tremendous detail on the subject.

Disclaimer: I am NOT a structural engineer. I am just a guy who finds the subject interesting. If you have structural questions, contact a structural engineer.

Background

Definitions

Sinker Nail
A type of nail used in contemporary wood-frame construction; thinner than a common nail, coated with adhesive to enhance holding power, with a funnel-shaped head, and a grid stamped on the top of the head. (Source)
Common Nail
A nail with a mostly smooth, uncoated shank less than one third the diameter of its head, used for interior construction, especially framing. (Source)
Box Nail
Box nails are made for use in thin dry wood. To reduce a nail's tendency to split such wood, the point is slightly blunted, so that it crushes the wood fibers and punches its way through instead of enlarging a crack. Box nails are thinner than the corresponding penny size in common nails, and about ⅛ inch shorter than their nominal size. Often they are coated with a resin (such as nylon) that is melted by the heat generated in driving the nail and glues the nail in place. (Source)
Dowel
Dowel is a generic term used for a fastener that transfers a load between connected members by a combination of flexure and shear in the dowel, and shear and bearing (referred to as embedment) in the timber. (Source)

Modeling Information

Figure 2 shows the yield limit formulas from the NDS that are applied to dowels loaded in shear – a nail is a form of dowel. If you want more background on these formulas, please see the NDS.

Figure 2: Key Formulas in Determining the Lateral Resistance Rating of a Nail. (Source)

The NDS defines a reduction term (Figure 3) that must be applied to the calculations.

Figure 3: Definition of Reduction Term.

I coded these formulas into a Mathcad program (shown below). I assume that both connected members are 1.5 inches thick. For additional verification, I work the case of two 1.75 inch members in Appendix A.

Nail Information

I have summarized the characteristics of various types of nails in Figure 4. I grabbed this data from various sources (example).

Figure 4: Table of Nail Characteristics.

Analysis

Lateral Strength Function

Figure 5 shows my Mathcad implementation of what I read in the NDS, which goes into excruciating detail on this topic. Please refer to the NDS because I cannot do the subject justice here. The key point is that the variable Z contains the formulas for the four dowel failure modes, and the program returns the minimum of these four values in units of pounds-force.

Figure 5: My Realization of the NDS Dowel Lateral Strength Function.

Solution Setup

Figure 6 shows how I used Mathcad to generate the same results as are in Table 11N, which I confirmed by taking the difference between my results and Table 11N and getting a result of all zeros.

Figure 6: Checking My Function Output Against the Reference.

Conclusion

I was able to duplicate some of the results in Table 11N from the NDS. This gives me good confidence that I understand how the table was generated.

Appendix A: Analysis Repeated for 1.75 inch Thick Stock.

In Figure 7, I repeated my analysis of 1.5 inch thick stock for 1.75 inch thick stock and also duplicated the results in Table 11N.

Figure 7: Example Using 1.75 inch Thick Wood.

Posted in Construction | Leave a comment