World War 2 Submarine Hull Thickness Math

Quote of the Day

How are the children?

— Masai warrior greeting, intended to ensure that the warriors always keep their number one priority in mind.


I was reading a blog post on Gizmodo that did a bit of math to determine why a pipe breaks the way it does when the water inside freezes. As I looked at the post, I realized the math was the same as would be used to determine the hull thickness for a submarine rated to operate within a specified depth range. To verify my realization, I decided to do a bit of historical math and apply the formula to the hull thickness of a famous class of World War 2 submarines, the Balao-class fleet submarine (Figure 1).

Figure 1: Photograph of USS Balao.

The calculations are shown in Figure 2. The calculations agree with the pressure hull thickness actually used on this submarine. I have found a number of discussions on the Balao's operating depth (example). Richard O'Kane operated USS Tang down to 600 feet during sea trials. Apparently, the crews had great confidence in the construction of the Balao class.
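The flavor of the Figure 2 calculation can be sketched with a thin-wall hoop-stress estimate. The depth, hull radius, steel yield strength, and safety factor below are my rough assumptions for illustration, not the exact values used in Figure 2:

```python
# Thin-wall hoop-stress estimate of pressure-hull plate thickness.
# All numbers are assumptions: O'Kane's 600 ft trial depth, an assumed
# ~16 ft pressure-hull diameter, and a rough WW2 high-tensile steel yield.
rho_seawater = 64.0      # lb/ft^3, approximate seawater density
depth = 600.0            # ft
pressure = rho_seawater * depth / 144.0     # psi (144 in^2 per ft^2)

radius = 8.0 * 12.0      # in, assumed hull radius
yield_strength = 50e3    # psi, assumed steel yield strength

# Hoop stress in a thin-walled cylinder: sigma = P*r/t  =>  t = P*r/sigma
t_min = pressure * radius / yield_strength   # thickness at the yield point
t_design = t_min * 1.5                       # with an assumed 1.5 safety factor

print(f"pressure = {pressure:.0f} psi")
print(f"minimum thickness = {t_min:.2f} in")
print(f"with 1.5 safety factor = {t_design:.2f} in")
```

With these assumptions the design thickness lands in the neighborhood of the roughly 7/8-inch plate commonly cited for the class, which is the kind of agreement Figure 2 shows.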

Normally, I go through derivations of these equations. In this case, there are numerous discussions available on the web (e.g. here and here).

Figure 2: My Rough Analysis of the Required Steel Plate Thickness for a Balao-Class Submarine.

Figure 2: My Rough Analysis of the Required Steel Plate Thickness for a Balao-Class Submarine.

Posted in Military History, Naval History | 2 Comments

An Example of Misusing Thermal Resistances

Quote of the Day

Being defeated is often a temporary condition. Giving up is what makes it permanent.

— Marilyn vos Savant


In my previous post, I provided some definitions of thermal resistances and thermal characteristics. In this post, I want to show the problem that got me interested in this subject. An engineer misused the numbers and got answers that did not make sense. I require the folks in my group to estimate the junction temperatures of the parts they use so that they know if their design may have a temperature problem. In this case, the calculation required the use of the ψJT thermal characteristic. The basic problem is straightforward:

  • Estimate the junction temperature of a part that dissipates 1.25 W.
  • We have a large number of thermal parameters available. Which one do we use?

Figure 1 shows how the calculation went.

Figure 1: Three Whacks at Estimating Junction Temperature.

I thought this was a good example to illustrate the importance of understanding the thermal parameters and using them properly.

As an aside, I also looked at estimating the ψJT using the estimating formula from the previous post (Figure 2).

Figure 2: Psi-JT Estimate with Comparison to the Measured Value.

The result is not too bad considering the limited data I have.

Posted in Electronics | Comments Off on An Example of Misusing Thermal Resistances

Compact Thermal Models for Electrical Components

Quote of the Day

The two things that make leaders stupid are envy and sex.

— Lyndon Johnson


Introduction

Figure 1: Example of a Compact Thermal Model. (Source)

I am an electrical engineer and not a mechanical engineer. As such, I depend on electronic packaging professionals to provide me answers to my thermal questions at work. However, I am curious about how the packaging folks estimate the temperatures of my parts because it affects how my team designs products. This blog post documents my early self-education on how the thermal analysis of electronic components is performed.

After some explanatory setup, I will work through an interesting example that will estimate a thermal characteristic, known as ψJT, for a part based on its mechanical characteristics.

Background

Temperature Definitions

Note that when I refer to component temperatures, there are generally three temperatures that I worry about: (1) ambient temperature, (2) junction temperature, and (3) case temperature. In practice, I tend to use ambient temperature and junction temperature the most. Case temperature is usually important when you have heat sinks attached to the component case. I am not allowed to use fans in my outdoor designs because fans are relatively unreliable and require maintenance (e.g. MTBF ≈ 10K hours under rugged conditions plus the need for regular filter changes). This means that my parts have to convectively cool themselves. I do use case temperature for certain indoor optical products, which often need heat sinks and forced air flow (i.e. fans).

Junction temperature
Junction temperature is the highest temperature of the actual semiconductor in an electronic device (Source).
Case temperature
The temperature of the outside of a semiconductor device's case. It is normally measured on the top of the case.
Ambient temperature
The temperature of the air that surrounds an electronic part. It is often measured a defined distance away from a printed circuit board (e.g. 2 inches).

Why Do We Care About Component Junction Temperature?

There are four reasons why people care about the junction temperature of their semiconductors. Let's examine each of these reasons in detail.

Reason 1: Parts Are Specified for Standard Ambient Operating Temperature Ranges

All parts have operating temperature limits and the part manufacturers will only stand by their parts if they are used at these ambient temperatures. The temperature limits are usually expressed in terms of one of the three standard ambient temperature operating ranges:

  • Commercial grade: 0 °C to 70 °C
  • Industrial grade: −40 °C to 85 °C
  • Military grade: −55 °C to 125 °C

Reason 2: Many Vendors Impose Special Component Temperature Limits

Many vendors impose a specific temperature limit on their semiconductor dice (e.g. the dreaded Xilinx 100 °C junction temperature limit). I have seen various junction temperature limits used: 110 °C, 120 °C, and 135 °C.

Reason 3: The Reliability of Components Varies with Temperature

The reliability of most electronic products is dominated by the reliability of the semiconductors in those products. Temperature is a key parameter affecting semiconductor reliability because many semiconductor failure modes are accelerated by higher temperatures. Semiconductor failure modes are tied to chemical reactions and the rate of a chemical reaction increases exponentially with temperature according to the Arrhenius equation. I have written about this dependence in a number of previous posts (e.g. here and here).
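The Arrhenius dependence can be made concrete with a quick acceleration-factor calculation. The activation energy and the two junction temperatures below are illustrative assumptions, not values for a specific failure mode:

```python
import math

# Arrhenius acceleration of a temperature-driven failure mode.
k_B = 8.617e-5   # Boltzmann constant, eV/K
E_a = 0.7        # eV, an assumed activation energy typical of many mechanisms

def acceleration_factor(t_use_c, t_stress_c, e_a=E_a):
    """Ratio of reaction (failure) rates between two junction temperatures in Celsius."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((e_a / k_B) * (1.0 / t_use - 1.0 / t_stress))

# Under these assumptions, a 20 C junction temperature rise (55 C -> 75 C)
# roughly quadruples the failure-mode reaction rate.
print(acceleration_factor(55.0, 75.0))
```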

Reason 4: Company Standards and Practices

I frequently have had to conform to some government or corporate-imposed junction temperature limit. For example, Navy contracts set a junction temperature limit of 110 °C based on the reliability guidelines developed by Willis Willoughby. HP, Honeywell, and ATK all had similar temperature limits, but I cannot remember them all -- yes, I must be getting senile.

Types of Thermal Models

There are two common types of thermal models: detailed and compact. Detailed thermal models use mathematical representations of the objects under analysis that look very similar to the objects they represent. They usually use some form of finite element approach to generate their solutions. These software tools are expensive (e.g. Flotherm and Icepak). We use this type of model at my company -- very good and very expensive. Detailed thermal models are analogous to an electrical engineer's distributed parameter model.

Compact thermal models generally use some form of electrical component analog for thermal parameters. These electrical analogs do not look anything like the mechanical components they represent. Compact thermal models are not as accurate as a detailed thermal model, but they are computationally very efficient (i.e. fast) and the tools required are more readily available (e.g. spreadsheets, Mathcad, etc). Compact thermal models are analogous to an electrical engineer's lumped parameter model.

Thermal Resistances

The electrical engineering concepts of resistance, capacitance, and inductance are powerful in modeling because these components can be used as electrical analogs to anything that can be modeled with a linear differential equation. The list of things that can be modeled with linear differential equations is long and important. For this post, we are only discussing thermal resistances. Circuits composed only of resistors do not have any variation with time; however, we can model thermal time variation by adding capacitances. The following quote from the Wikipedia nicely states the thermal and electrical correspondences.

The heat flow can be modeled by analogy to an electrical circuit where heat flow is represented by current, temperatures are represented by voltages, heat sources are represented by constant current sources, absolute thermal resistances are represented by resistors and thermal capacitances by capacitors.
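This analogy can be sketched as a one-line compact model: a ladder of series thermal resistances from junction to ambient, with temperature drops adding like voltage drops. The resistance and power values below are hypothetical:

```python
# Compact model: series thermal "resistor" ladder from junction to ambient.
# Temperatures behave like node voltages and heat flow like current, so
# the temperature drops simply add. All values are hypothetical.
theta_jc = 2.0    # C/W, junction to case
theta_cs = 0.5    # C/W, case to heat sink (interface material)
theta_sa = 8.0    # C/W, heat sink to ambient

power = 3.0       # W dissipated in the die (the "current source")
t_ambient = 40.0  # C (the "ground" reference)

# The same heat flows through every element, so the Ohm's-law analog applies:
t_junction = t_ambient + power * (theta_jc + theta_cs + theta_sa)
print(t_junction)   # 40 + 3 * 10.5 = 71.5 C
```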

There are three thermal resistances that we encounter regularly: θJA, θJC, and θJB. Let's discuss them in more detail.

θJA
θJA is the thermal resistance from the junction to the ambient environment, measured in units of °C/W. Ambient temperature plays the same role for a thermal engineer as ground does for an electrical engineer -- it is a reference level that is assumed to be fixed. Because θJA depends on so many variables -- package, board construction, airflow, radiation -- it is measured on a JEDEC-specified test PCB. θJA may be the most misused of all the thermal parameters because people apply it to PCBs that are completely different than the JEDEC test PCB. The only use for θJA is for comparing the relative thermal merits of different packages, which is what JEDEC states in the following quote.

The intent of Theta-JA measurements is solely for a thermal performance comparison of one package to another in a standardized environment. This methodology is not meant to and will not predict the performance of a package in an application-specific environment.

θJC
The junction-to-case thermal resistance measures the ability of a device to dissipate heat from the surface of the die to the top or bottom surface of the package (e.g. often labeled with names like θJC(top) or θJC(bot)). It is most often used for packages used with external heat sinks and applies to situations where all or nearly all of the heat is dissipated through the surface in consideration. The test method for θJC is the Top Cold Plate Method. In this test, the bottom of the printed circuit board is insulated and a fixed-temperature cold plate is clamped against the top of the component, forcing nearly all of the heat from the die through the top of the package (Figure 2).

Figure 2: Theta-JC Test Fixture.

θJB
θJB is the junction-to-board thermal resistance. From a measurement standpoint, θJB is measured near pin 1 of the package (~1 mm from the package edge). θJB is a function of both the package and the board because the heat must travel through the bottom of the package and through the board to the test point (Figure 3). The measurement requires a special test fixture that forces all the heat through the board (Source).

Figure 3: Theta-JB Test Fixture.

Thermal Characteristics

By analogy with an electrical resistor, the temperature drop across a thermal resistor is directly proportional to the heat flow through the thermal resistor. This is not the case with the thermal characterization parameters. Because they are not resistor analogs, they cannot be used like a resistor for modeling purposes. However, they do allow an engineer to estimate the junction temperature of a component based on the total power usage of a device, but only for the specific circuit conditions under which the parameter was determined.

Let's review the thermal characteristics in a bit more detail.

ψJT
Junction-to-top thermal characteristic, a measure of the junction temperature of a component as a function of the component top temperature, TT, and the power dissipated by the component, PComponent. ψJT is not a thermal resistance because its measurement includes thermal paths outside of the junction-to-top path. ψJT can be used to estimate junction temperature based on the equation {{T}_{J}}={{T}_{T}}+{{P}_{Component}}\cdot {{\psi }_{JT}}.
ψJB
Junction-to-board thermal characteristic, a measure of the junction temperature of a component as a function of the board temperature below the component, TB, and the power dissipated by the component, PComponent. ψJB is not a thermal resistance because its measurement includes thermal paths outside of the junction-to-board path. ψJB can be used to estimate junction temperature based on the equation {{T}_{J}}={{T}_{B}}+{{P}_{Component}}\cdot {{\psi }_{JB}}.
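The ψJT estimate reduces to a one-line calculation. The case temperature, power, and ψJT value below are made-up illustration numbers, not data from a specific part:

```python
# Estimating junction temperature from a measured top-of-case temperature
# using psi_JT. All input values are hypothetical.
def junction_temp(t_top_c, p_component_w, psi_jt):
    """T_J = T_T + P_Component * psi_JT"""
    return t_top_c + p_component_w * psi_jt

# A part dissipating 1.25 W whose case top measures 80 C, with an
# assumed datasheet psi_JT of 2 C/W:
print(junction_temp(80.0, 1.25, 2.0))   # 82.5 C
```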

Analysis

The following analysis will discuss an approximation for ψJT from Electronics Cooling Magazine, which did not include a derivation of the approximation. I include a derivation here. I also added an application example using three SSOPs and a chart that shows that the approximation works well for TQFP packages.

An Approximate Expression for ψJT

I find ψJT the most useful of the thermal parameters because it allows me to estimate junction temperature using the case temperature, which I can easily measure. However, most electronic component vendors do not specify ψJT. As I will show below, ψJT for molded plastic parts is closely related to θJA, which is almost always specified. As I mentioned earlier, I do not find θJA useful for estimating a part's junction temperature, but it is useful for comparing the relative thermal performance of different electronic packages.

Approximation Derivation

Equation 1 shows an approximation for ψJT that I find interesting and useful.

Eq. 1 {{\psi }_{JT}}=h\cdot \frac{{{\tau }_{EMC}}}{{{\kappa }_{EMC}}}\cdot {{\theta }_{JA}}

where

  • ψJT is the thermal characterization parameter for the junction to top-of-case temperature.
  • h is the heat transfer coefficient for air under still conditions.
  • τEMC is the thickness of the Epoxy Molding Compound (EMC).
  • κEMC is the thermal conductivity of the EMC.
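Equation 1 is easy to evaluate. The input values below are rough, assumed numbers (still-air h, a ~1 mm mold cap, a typical mold-compound conductivity, and a generic θJA), not data from any specific datasheet:

```python
# Equation 1 as a function. All input values are assumptions for illustration.
def psi_jt_estimate(h, tau_emc, kappa_emc, theta_ja):
    """psi_JT ~ h * (tau_EMC / kappa_EMC) * theta_JA"""
    return h * (tau_emc / kappa_emc) * theta_ja

h = 10.0          # W/(m^2 K), natural convection in still air
tau_emc = 1e-3    # m, epoxy molding compound thickness over the die
kappa_emc = 0.8   # W/(m K), EMC thermal conductivity
theta_ja = 50.0   # C/W, a typical value for a small molded package

print(psi_jt_estimate(h, tau_emc, kappa_emc, theta_ja))  # ~0.6 C/W
```

Note how small ψJT comes out relative to θJA, which matches the TQFP data presented later in this post.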

Figure 4 shows my derivation of Equation 1, which I first saw in a couple of articles in Electronics Cooling Magazine.

Figure 4: Derivation of an Approximate Expression for Psi-JT.


Empirical Verification

I always like to look for empirical verification of any theoretical expression I deal with. There are a couple of ways I can verify the accuracy of this expression.

  • I can calculate estimates for ψJT using Equation 1 and compare my results to a measured value.

    This assumes that I can come up with accurate estimates for h, κEMC, and τEMC.

  • Since ψJT is linear with respect to θJA within the same package family, I can plot measured values of ψJT versus θJA and the result should be a straight line.

    This assumes that h, κEMC, and τEMC are the same for all packages in the family.

ψJT Estimate Based on Mechanical Dimensions

Since TI has the best packaging data I can find, I will use Equation 1 to make a prediction of the ψJT for a TI package based on its material and mechanical properties. I have arbitrarily chosen an SSOP-type package for my example (Figure 5 shows the mechanical drawing).

Figure 5: Mechanical Drawing for TI SSOP Package.

I obtained the ψJT and θJA from this document. I found values for h and κEMC in an Electronics Cooling Magazine article. Assuming this information is correct, my calculations are shown in Figure 6. The results are reasonable given that we are using an approximation.

Figure 6: Comparison of Measured and Predicted Psi-JT's for SSOP Packages.

Graphical Look At The Relationship Between ψJT and θJA

Table 1 shows data from this TI document on the thermal performance of their packages.

Table 1: List of TQFP θJA and ψJT Values

Pkg Type  Pin Count  Pkg Designator  θJA (°C/W)  ψJT (°C/W)
TQFP      128        PNP              48.39       0.248
TQFP      100        PZP              49.17       0.252
TQFP       64        PBP              52.21       0.267
TQFP       80        PFP              57.75       0.297
TQFP       64        PAP              75.83       0.347
TQFP       52        PGP              77.15       0.353
TQFP       48        PHP             108.71       0.511

Figure 7 shows a scatter chart of the data from Table 1. Note how the plot is quite linear with a slope of 0.004184, which means that ψJT is much smaller than θJA.
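The slope quoted above can be reproduced with a quick least-squares fit of the Table 1 data:

```python
# Least-squares slope of psi_JT versus theta_JA for the TQFP packages
# listed in Table 1 (values copied from the table).
theta_ja = [48.39, 49.17, 52.21, 57.75, 75.83, 77.15, 108.71]
psi_jt   = [0.248, 0.252, 0.267, 0.297, 0.347, 0.353, 0.511]

n = len(theta_ja)
mean_x = sum(theta_ja) / n
mean_y = sum(psi_jt) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(theta_ja, psi_jt)) \
        / sum((x - mean_x) ** 2 for x in theta_ja)

print(f"{slope:.6f}")   # ~0.0042, matching the slope quoted above
```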

Figure 7: Graph of Psi-JT Versus Theta-JA for a TQFP Family.

In Figure 8, I use Equation 1 to estimate what this slope should be by assuming all members of this packaging family have the same thickness of epoxy molding compound on top of the integrated circuit.

Figure 8: Estimating the Slope of the ψJT versus θJA Line.

As we can see in Figure 8, Equation 1 provided a reasonable approximation for the slope of the line in Figure 7.

Conclusion

My goal was to learn a bit about compact thermal models and how they are used. This post provides the background information that I will need for some later posts on the subject. I was also able to confirm that an approximation for ?JT based on mechanical packaging information is probably a useful tool when useful vendor thermal specifications are missing – a distressingly common occurrence.

Posted in Electronics | 3 Comments

Difficulty of Viewing Dwarf Planets

Quote of the Day

Nothing can be of value without being an object of utility.

— Karl Marx


Introduction

Figure 1: Closeup of Pluto by the New Horizons spacecraft.

I was listening to an astronomer on the radio answering questions on viewing stars and planets. A question was asked about why we can have beautifully detailed photos taken from Earth of distant astronomical objects (e.g. Crab Nebula) but we cannot seem to obtain detailed photos of objects in our solar system like Pluto (Figure 1). The astronomer answered that the distant objects are huge and that we view them from Earth as having a larger viewing angle than a minor planet in our solar system. I thought it might be interesting to look at the relative viewing angle of these two objects when viewed from Earth.

Background

Example Images

Figures 2 and 3 show pictures of Pluto and the Crab Nebula. Notice how the picture of the Crab Nebula appears to be much more detailed than that of Pluto.



Figure 2: Pluto with Charon.

Figure 3: Crab Nebula

Some Definitions

Dwarf Planet
A celestial body in direct orbit of the Sun that is massive enough for its shape to be controlled by gravitation, but has not cleared its orbital region of other objects (Source).
Minor Planet
A minor planet is an astronomical object in direct orbit around the Sun that is neither a planet nor originally classified as a comet. Minor planets can be dwarf planets, asteroids, trojans, centaurs, Kuiper belt objects, and other trans-Neptunian objects (Source).
Planet
An astronomical object orbiting a star or stellar remnant that is massive enough to be rounded by its own gravity, is not massive enough to cause thermonuclear fusion, and has cleared its neighboring region of planetesimals (Source).
Viewing Angle
Angle of view describes the angular extent of a given scene that is imaged by a camera.

Visibility Criteria

I will compare the visibility of Pluto and the Crab Nebula by comparing their viewing angles from Earth. Let's define the viewing angle as shown in Equation 1.

Eq. 1 \displaystyle \theta =\frac{d}{R}

where

  • R is the distance from Earth to the object being observed.
  • d is the distance across the object from the viewpoint of the Earth.

Analysis

Key Data

  • Pluto (Source)
    • Diameter: 2306 km
    • Distance: 39.4 AU

      Pluto's distance from both the Sun and the Earth varies all the time. For my rough analysis here, I will just use Pluto's mean distance from the Sun.

  • Crab Nebula (Source)
    • Diameter: 11 light-years
    • Distance: 6500 light-years
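Applying Equation 1 to the key data above gives the comparison directly; the only additions are standard unit conversions:

```python
import math

# Viewing angles (Eq. 1: theta = d/R) for Pluto and the Crab Nebula,
# using the key data listed above.
AU_KM = 1.496e8                      # kilometers per astronomical unit

# Pluto: diameter 2306 km at a mean distance of 39.4 AU
theta_pluto = 2306.0 / (39.4 * AU_KM)          # radians
# Crab Nebula: both values in light-years, so the units cancel
theta_crab = 11.0 / 6500.0                     # radians

arcsec = lambda rad: math.degrees(rad) * 3600.0
print(f"Pluto: {arcsec(theta_pluto):.3f} arcsec")
print(f"Crab:  {arcsec(theta_crab) / 60.0:.1f} arcmin")
print(f"ratio: {theta_crab / theta_pluto:.0f}")
```

The Crab Nebula subtends a viewing angle thousands of times larger than Pluto's, which is why it photographs so much more easily from Earth.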

Calculations

Figure 4 shows my calculations of the viewing angles for Pluto and the Crab Nebula.

Figure 4: Viewing Angle Calculations.

Conclusion

I had never thought about the difficulty of observing small objects in our solar system. I learned that those spectacular photographs we see from telescopes like the Hubble Space Telescope are of very large objects. Even with our largest instruments, the small, dark objects that occupy our solar system are very difficult to photograph in detail.

Posted in Astronomy | 1 Comment

My Borg Name is "1 of 13"

Quote of the Day

When I was 30, 35 years old, I knew, in a deep sense, every line of code I ever wrote. I’d write a program during the day, and at night I’d sit there and walk through it line by line and find bugs. I’d go back the next day and, sure enough, it would be wrong.

— Ken Thompson


Figure 1: Wikipedia Photo of "Seven of Nine". (Source)

I work for a small company that has 13 employees named "Mark", which is my first name. It is common for me to be in meetings with three "Marks" present. I have had as many as six "Marks" in a meeting. This makes using our first names difficult, and I usually go by my last name. This system works, but it does sound a bit gruff to people from outside the company who hear our meetings.

Most engineers are familiar with Star Trek, and one of the managers at our company has proposed that we use the Borg naming convention for engineers named "Mark". The Borg gave individuals names like "Seven of Nine" (Figure 1).

The proposal at my company is to assign “Marks” their Borg name based on their placement in the company email list, which is alphabetical. Because I am the first "Mark" in the list, my Borg name is "1 of 13". The "Mark" sitting next to me is "9 of 13". The "Mark" a few cubes down from me is "13 of 13".

Not all Borg characters used this naming convention -- Locutus, for example (Figure 2).

Figure 2: Wikipedia Photo of Locutus of Borg.

We do occasionally use these names in meetings, but just for fun. It is amazing that we have so many "Marks". However, we have quite a few engineers who are in the 50 to 60 year age group and "Mark" must have been a very popular name back in the 1950s.

Posted in Personal | Comments Off on My Borg Name is "1 of 13"

T-Shirt and Cotton Fiber Math

Quote of the Day

Do something instead of killing time. Because time is killing you.

— Paulo Coelho


Introduction

I was listening to a radio program called Planet Money today that was discussing how intricate the infrastructure is for making something as simple as a T-shirt. They were making a T-shirt that they would sell to raise money to help the garment workers in Bangladesh (see Figure 1). The money required for this effort was raised on Kickstarter.

Figure 1: Planet Money T-Shirt.

Here are some basic facts presented during the program:

  • The cotton used to make the T-shirt comes from very high-tech farms in the United States.

    The farms were operated by a small number of workers who really were technicians that supervised the operation of large machines.

  • The best cotton must never be touched by human hands.

    The highly automated US farms are ideal for producing high quality cotton.

  • The cotton fabric was made into T-shirts by workers in Bangladesh.

    These are very low paid workers -- $80 per month.

  • The T-shirts are shipped around the world on container ships.

    The development of container ships has enabled the efficient shipping of all sorts of products around the world -- including T-shirts.

I found the information about cotton to be the most interesting. Here is a video from the SmartPlanet web site on the cotton portion of the story.


I know NOTHING about clothing and fabric. Let's see if I can use some math and a little information from the web to learn something about fabric and T-shirts.

Background

There are four key numbers that I needed for my analysis:

  • It takes 6 miles of cotton thread to make a T-shirt (Source).

    I have seen this number quoted from a number of sources, including the radio broadcast we are discussing here.

  • The density of cotton is 1.55 gm/cm3 (Source).

    I calculated the average of the minimum and maximum values listed.

  • Making a T-shirt requires about 1 square yard of material (Source).

    The pattern I am looking at uses 1.25 yards. I am assuming that there is some waste and the T-shirt ends up containing about 1 square yard of fabric.

  • A typical T-shirt weighs 6 ounces.

    I just put my T-shirt on a food scale. The T-shirt was a short sleeve, V-neck style.

Let's see what we can learn from these numbers.

Analysis

Thickness of the Thread

A Little History

I deal with glass fiber all day, and it is very fine (~125 μm). I show a cross-section of optical fiber in Figure 2 (Source). Fortunately, I have all sorts of fancy instruments that allow me to measure tiny things.

Figure 2: Cross-Section of Optical Fiber.

This has not always been the case for people who work with fiber. With respect to textiles, people have been producing fabric for thousands of years, but they had no way to accurately measure the thickness of the tiny thread they were using. As people do, they developed a workable substitute metric. They simply would run off a specified length of thread and weigh it (or determine its mass). As long as the thread was constructed consistently, this weight (mass) would be proportional to the diameter of the fiber and could be used as a substitute for the diameter. The metric standard is mass in grams for 9000 meters of fiber, which is called the denier. The Imperial unit is called the yield and is expressed in terms of yards of thread per pound. Note how the dimensions (mass, length) of yield are the inverse of those for denier.

Modeling Fiber As A Cylinder

If we model a thread as a cylinder, we can estimate the thickness of the thread to be 0.12 mm, as shown in Figure 3.

Figure 3: Estimating the Thickness of the T-Shirt's Cotton Thread.

Thread thickness is not normally specified in terms of a linear dimension like centimeters, but instead in units of denier or yield. A simple unit conversion shows us that T-shirt fiber has a thickness of about 158 denier (Equation 1).

Eq. 1 \displaystyle \frac{\text{6 oz}}{6\text{ mile}}\cdot \frac{1\text{ mi}}{1609.3\text{ m}}\cdot \frac{28.3\text{ gm}}{\text{oz}}\cdot \frac{9000}{9000}=158.5\cdot \frac{\text{ gm}}{9000\ \text{m}}=158.5\ \text{denier}

On a related note, one can convert from a thread thickness specified in denier to centimeters. The Wikipedia give Equation 2 for performing the conversion.

Eq. 2 \displaystyle \varnothing =\sqrt{\frac{4.444\times {{10}^{-6}}\cdot \text{d}}{\pi \rho }}

where

  • \varnothing is the diameter of the thread [cm].
  • \rho is the density of the fiber material [gm/cm3].
  • d is the thickness of the fiber expressed in denier [gm/9000 m].

I derive this formula in the Appendix.
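The two calculations above can be checked as a round trip: compute the thread's denier from the weight and length figures, then recover its diameter with Equation 2. The only value I add is the standard grams-per-ounce conversion:

```python
import math

# Round trip: denier from the T-shirt's thread weight and length,
# then diameter via Equation 2.
mass_g = 6.0 * 28.35         # 6 oz of thread, converted to grams
length_m = 6.0 * 1609.3      # 6 miles of thread, converted to meters
rho = 1.55                   # g/cm^3, density of cotton

denier = mass_g / length_m * 9000.0          # grams per 9000 m
# Equation 2: diameter (cm) from denier and density
diameter_cm = math.sqrt(4.444e-6 * denier / (math.pi * rho))

print(f"{denier:.1f} denier")        # ~158.5, matching Equation 1
print(f"{diameter_cm * 10:.2f} mm")  # ~0.12 mm, matching Figure 3
```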

Thread Count of the T-Shirt Fabric

The thread count of fabric is defined as:

Thread count or threads per inch (TPI) is a measure of the coarseness or fineness of fabric. It is measured by counting the number of threads contained in one square inch of fabric, including both the length (warp) and width (weft) threads.

Figure 4 shows how I can estimate the number of threads per inch of T-shirt fabric.

Figure 4: Estimate of the Threads Per Inch for a T-Shirt.

The Wikipedia says that "standard" cotton fabric has a thread count of 150 threads per inch. So the number I came up with here is reasonable.
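The Figure 4 estimate can be sketched in a few lines: 6 miles of thread woven into roughly one square yard of fabric. Note that conventions differ on whether warp and weft counts are added or reported per direction, so treat this as an order-of-magnitude check rather than a definitive thread count:

```python
# Rough threads-per-inch check for the T-shirt fabric.
thread_length_in = 6.0 * 63360.0     # 6 miles of thread, in inches
fabric_side_in = 36.0                # 1 square yard of fabric

# Each thread crossing the fabric is ~36 in long:
n_threads = thread_length_in / fabric_side_in
# Split evenly between the warp and weft directions:
per_direction = (n_threads / 2.0) / fabric_side_in

print(f"{n_threads:.0f} threads, ~{per_direction:.0f} per inch each way")
```

The result is in the same neighborhood as the ~150 threads per inch quoted for standard cotton fabric.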

Conclusion

Just a quick post showing how the fabric for a T-shirt is measured.

Appendix: Wikipedia Thread Thickness Formula Derivation

Figure 5 shows the derivation of Wikipedia's formula for converting denier to centimeters.

Figure 5: Derivation of Conversion Formula Between Denier and Centimeters.

Posted in General Mathematics | 3 Comments

Determining the Bandwidth of A Pulse

Quote of the Day

The odds are good, but the goods are odd.

— Statement made by an Alaskan woman after she told me that there were 1.6 men for every woman in her town.


Introduction

Figure 1: Power Lines are Hazardous for Low-Flying Airplanes.

I was reading an article in Photonics Spectra magazine about the use of a laser radar system to assist pilots in detecting wires while flying low (Figure 1), and I saw two commonly used bandwidth estimation formulas that most engineers do not think much about. I have worked on laser radar systems in my past and the bandwidth of these systems drives their cost and performance. I thought it would be useful to review how engineers estimate the bandwidth required for the pulse detection circuits used in these systems.

I should point out that electrical circuits process signals that vary in time, but these circuits are usually designed based on the frequency content of the signals being processed. A co-worker once told me that

Electronic systems are hard to design using time-based approaches (e.g. differential equations), but are relatively easy to design using frequency-based approaches (e.g. Bode plots). Unfortunately, it is relatively hard to test electronic systems using frequency-based approaches, but these same systems are usually easy to test using time-based methods (e.g. oscilloscopes). Thus, we need to become proficient at switching between both time and frequency points of view.

I have found this bit of wisdom to often be true.

Background

Why do we care about bandwidth?

The bandwidth of a circuit tells us the range of signals that the circuit must process in order to meet the performance requirements of the system. Unfortunately, narrowband problems are easier and cheaper to solve than wideband problems.

Definitions

The words "narrow" and "wide" are relative terms. Thus, the definitions of narrowband and wideband are relative terms as well.

Narrowband
A narrowband circuit is a circuit that can process the signals through it as if they were a single frequency.
Wideband
A wideband circuit is a circuit that must process a range of frequencies that cannot be treated as if they were a single frequency.

My Objective

In practice, all real signals have infinite bandwidth. Because real circuits cannot process infinite-bandwidth signals, we need to approximate infinite-bandwidth signals with finite-bandwidth ones. In general, a wider bandwidth implies that a greater percentage of the signal energy or power is processed by the circuit. For the discussion that follows, I will work with energy because a pulse has finite energy and this radar detects pulses. The same type of analysis holds for random pulse sequences or continuous waveforms, which have a finite power level. You use power for signals that exist for long times (theoretically, infinitely long) and energy for signals that exist for finite times.

There are two common bandwidth approximations used in electrical engineering.

  • \displaystyle BW=\frac{1}{2\cdot \tau }\ \Rightarrow\ E_{Circuit}=77.4\%\cdot E_{Signal}
  • \displaystyle BW=\frac{1}{\tau }\ \Rightarrow\ E_{Circuit}=90.3\%\cdot E_{Signal}

where

  • ESignal is the total energy in a pulse.
  • ECircuit is the total pulse energy processed by the circuit.
  • τ is the pulse width.
  • BW is the circuit bandwidth.
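These energy fractions can be checked numerically. The sketch below is my own check, not from the article: it integrates the energy spectral density of a rectangular pulse (which is proportional to sinc²) over an ideal brick-wall bandwidth and divides by the total pulse energy.

```python
import numpy as np

def energy_fraction(bw, tau=1.0, n=200001):
    """Fraction of a rectangular pulse's energy passed by an ideal
    (brick-wall) filter spanning -bw to +bw Hz.  The energy spectral
    density of a pulse of width tau is tau^2 * sinc^2(f*tau), and the
    pulse's total energy is tau (Parseval's theorem)."""
    f = np.linspace(-bw, bw, n)
    esd = tau**2 * np.sinc(f * tau)**2       # np.sinc(x) = sin(pi*x)/(pi*x)
    # trapezoidal integration of the energy spectral density
    integral = np.sum((esd[:-1] + esd[1:]) / 2.0) * (f[1] - f[0])
    return integral / tau
```

Running this reproduces the two rule-of-thumb values: a bandwidth of 1/(2τ) captures about 77.4% of the pulse energy, and 1/τ captures about 90.3%.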

Analysis

I will approach the problem in two stages:

  • Derive the Fourier Transform of a Pulse Signal

    This is a direct application of the Fourier Transform to a normalized pulse waveform (unit width and height).

  • Derive a Formula for the Percentage of Signal Energy As a Function of Bandwidth

    Higher fidelity systems process a higher percentage of the total signal energy. I will derive an expression for the signal energy percentage as a function of circuit bandwidth and will plot it.

Derivation

Figure 2 shows my derivation of the Fourier transform of a unit pulse.

Figure 2: Definition of the Fourier Transform of a Pulse.

Energy Percentage Versus Bandwidth

Figure 3 shows my derivation of a formula for the percentage of signal energy as a function of bandwidth.

Figure 3: Energy Versus Bandwidth

Conclusion

While reading the article mentioned in the introduction, I started to wonder what fraction of the signal energy was being processed by the circuit. I could not remember the energy fractions, so I decided to re-derive them here.

Posted in Electronics | 2 Comments

Sahara Water Math

Quote of the Day

The greatest of faults, I should say, is to be conscious of none.

— Thomas Carlyle, historian and philosopher


Introduction

Figure 1: The Sahara Desert is the world's largest hot desert. (Source)

I was watching an episode called "Sahara" of the series "How the Earth Was Made" and they had a very good discussion of the history of the Sahara Desert and how it formed. During the presentation, they discussed how ground water can be found that is very old and very hot. I thought I would look into this a bit.

Background

Here is the video I was watching. I find this material interesting and I want to see if I can dig up some additional information.

http://www.youtube.com/watch?v=3B40SkM8cxE&w=640&h=360

Analysis

Water Temperature

At 38:40 in the video, they begin talking about the temperature of the water coming out of deep wells in the Sahara. They said that the temperature can be as high as 66 °C and is heated geothermally (see this blog post for more details). The geologist also mentions that the wells can be as deep as 0.75 mile (~1200 meters). Let's do some rough figuring.

  • Except close to the surface, the ground temperature increases by ~25 °C for every 1000 meters of depth (the number varies by location).
  • Let's assume that the Sahara ground temperature starts off around 30 °C (I am using the Sahara's mean temperature for the initial ground temperature).
  • Going down 1200 meters would mean that the ground temperature would increase over the surface temperature by \displaystyle 30\text{ °C}=1200\text{ m}\cdot \frac{25\text{ °C}}{1000\text{ m}}.
  • I would expect the water temperature at 1200 meters down to be about 60 °C = 30 °C (surface temp) + 30 °C (geothermal temperature rise).

This is similar to what they said in the video.
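The rough figuring above amounts to a one-line linear model. Here is a minimal sketch (the function and parameter names are mine; the 30 °C surface temperature and 25 °C/km gradient are the values assumed above):

```python
def well_temp_c(depth_m, surface_c=30.0, gradient_c_per_km=25.0):
    """Estimate ground (and hence well-water) temperature at a given
    depth, assuming a linear geothermal gradient below the surface."""
    return surface_c + gradient_c_per_km * depth_m / 1000.0

# e.g. well_temp_c(1200) gives the ~60 °C estimate for a 1200 m well
```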

As a boy, I used to work in fields that were irrigated. The water was cold. It came out of pumps that drew water from 60 meters underground. This also makes sense to me because the average temperature of our soil here is only about 10 °C. The Sahara does not have a cold winter and I would think their ground would never have an opportunity to get very cold.

Dating Water

The show talked about "ancient water" or "fossil water". How does one determine the age of water? Whenever I hear someone mention the age of a material, I usually start to think of some form of isotope dating. I quickly discovered that was the case here as well. I have discussed isotope dating methods before and the method is basically the same here (see the Appendix for details). The basic idea is simple.

  • Cosmic rays strike krypton atoms in the atmosphere, creating Krypton-81.
  • Some of this Krypton-81 dissolves in the rain that falls to the ground.
  • Once on the ground, the rain begins its descent into the water table.
  • The Krypton-81 has a half-life of 229,000 years. Scientists can sample the water from the Sahara aquifers and determine its age based on the amount of Krypton-81 it carries.

Note that while the approach is simple, measuring the tiny quantity of Krypton-81 atoms is not easy.
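The dating step itself is ordinary radioactive-decay math. A small sketch (the function name is mine; only the 229,000-year half-life comes from the discussion above):

```python
import math

KR81_HALF_LIFE_YR = 229_000  # Krypton-81 half-life in years

def water_age_years(fraction_remaining, half_life=KR81_HALF_LIFE_YR):
    """Age of groundwater from the fraction of atmospheric Kr-81 still
    present, via standard radioactive decay: N/N0 = (1/2)**(t/t_half)."""
    return -half_life * math.log2(fraction_remaining)

# water with half its original Kr-81 is one half-life (229,000 years) old
```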

Conclusion

This is just a quick note to describe a science program on television that I found interesting. The idea that the Sahara has a massive amount of ancient water underneath it is amazing. I have always wondered why the pyramids were erected out in the desert. It appears that the area around the pyramids was not always a desert -- even during the relatively recent time of man.

Appendix

Here is a link to a great description of using Krypton isotopes to date water. The original paper is available here. I will include a brief screenshot here (Figure 2) in case any of these references move.

Figure 2: Nice Discussion on How to Date Water.

Posted in General Science, Geology | Comments Off on Sahara Water Math

A Problem Solved in Excel and Mathcad

Quote of the Day

All of the animals except man know that the principal business of life is to enjoy it.

— Samuel Butler


Introduction

Figure 1: HMS Dreadnought, the ship that changed naval gunnery.

I use both Excel and Mathcad in my daily work. Most people would consider me very proficient in both, though it took me a long time to master Excel because there weren't many resources around when I started learning.

In any case, I frequently get asked, "Which tool is better?" Like all other interesting questions in engineering, the answer is "it depends". Each tool has its strengths.

As an example, I decided to work a simple problem in both Excel and Mathcad. A number of the advantages and disadvantages of both tools can be seen in this example. The key problem with Excel is its cell-oriented approach, which works for small problems but has major issues with large ones.

Background

My Example

I am reading the book Dreadnought Gunnery and the Battle of Jutland and it presents an interesting fire control example from the Battle of Jutland in the form of Table 1.

Table 1: Original Table of Fire Control Information from the "Run to the South" Engagement.

I want to verify that I understand what I have read by duplicating the results shown in Table 1. This problem is most easily approached as a vector analysis problem. There is also some unit conversion involved. My interest in this problem is driven by my desire to code a naval warfare simulation and I want to make sure that I understand the fire control issues involved.

Engagement Geometry

Figure 2 is an illustration of the critical variables in this problem. This is a very common type of fire control situation from World War 1. Here are the details:

  • There are two ships: SMS Lützow and HMS Lion.

    HMS Lion was the flagship of the Grand Fleet's battlecruisers. SMS Lützow was a battlecruiser with the German Imperial Navy.

  • Both ships are on headings given in terms of the points of the compass.

    Historically, compass readings were given in terms of 32 compass points. The points are evenly spread over the 360° circle -- each point represents an 11.25° increment.

  • The fire control example is from the standpoint of the HMS Lion.

    This means that the target bearing reading is given from the standpoint of HMS Lion. Note that target bearings are given with respect to the ship's heading and not the compass.

  • Two fire control examples are listed in Table 1. Figure 2 only illustrates one example. The second is similar.
Figure 2: Engagement Geometry.
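The geometry of Figure 2 reduces to simple vector arithmetic. Here is a rough sketch of the core calculation -- converting compass points to degrees and projecting the relative velocity onto the line of sight to get the range rate. The function names and test values are mine, not from the book.

```python
import math

def points_to_degrees(points):
    """32 compass points span 360 degrees, so each point is 11.25 degrees."""
    return points * 11.25

def to_vec(heading_deg, speed):
    """Convert a compass heading (degrees clockwise from north) and a
    speed into an (east, north) velocity vector."""
    h = math.radians(heading_deg)
    return (speed * math.sin(h), speed * math.cos(h))

def range_rate(own_heading, own_speed, tgt_heading, tgt_speed, true_bearing):
    """Range rate (negative = closing, positive = opening): the component
    of the target's velocity relative to own ship along the line of sight.
    true_bearing is from own ship to target, degrees clockwise from north."""
    vo = to_vec(own_heading, own_speed)
    vt = to_vec(tgt_heading, tgt_speed)
    b = math.radians(true_bearing)
    los = (math.sin(b), math.cos(b))       # unit line-of-sight vector
    vrel = (vt[0] - vo[0], vt[1] - vo[1])  # relative velocity
    return vrel[0] * los[0] + vrel[1] * los[1]
```

Two sanity checks: ships on the same course and speed have zero range rate, and a target running dead away along the line of sight opens the range at its full speed.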

Analysis

I go through some of the basic fire control equations in this blog post, so I will not review them here.

Excel Version of My Analysis

Here is my approach to duplicating Table 1 in Excel:

  • Table 1 is row-oriented. I decided that a column orientation would be a bit easier to work with in Excel.
  • Inputs to the problem are tan-colored. Over time, I intend to add additional cases to the table, and I want to highlight which cells need to be filled with information.
  • I show the Excel formulas I used in the comments column. One of the issues with Excel is that the formulas get complex and difficult to read. There are things you can do to minimize that, but you will often see formulas in Excel that are difficult to figure out.
  • You need to explicitly handle unit conversion in Excel. This is one of my biggest gripes with Excel.

My first attempt at the Excel solution did not reproduce the results of Table 1 -- I had made a unit error. However, I eventually did get it right.

Table 1: Screenshot of My Excel Version of the Jutland "Run to the South" Rate Table.

Here is how I see the advantages and disadvantages of Excel.

  • (Advantage) Repeating simple formulas over and over is very simple in Excel.

    This is why Excel is so popular with accountants. They do not tend to have complex formulas, just lots of them.

  • (Disadvantage) Complex formulas are a pain in Excel.

    I cannot tell you how many hours I have spent trying to figure out some complex array formula in Excel. That same formula in Mathcad would be simple.

  • (Disadvantage) You must handle unit conversions yourself.

    This is painful -- especially in the US where I need to convert between unit systems all the time.

  • (Advantage) Powerful tabular data display capabilities.

    Excel is really good at displaying and analyzing tabular data.

  • (Advantage) Everyone has access to Excel.

    Most folks can get access to Excel one way or another (e.g. use it online with Microsoft Live). I frequently solve problems in Excel that really would be more appropriately done in Mathcad simply because my customers do not all have Mathcad. In these cases, I use Mathcad to help me verify my Excel solution.
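As an illustration of the unit bookkeeping that Excel forces on you, here are the explicit conversion factors a fire-control calculation like this needs, written out as a sketch (the factors are standard definitions: 1 nmi = 1852 m, 1 yd = 0.9144 m; the function name is mine):

```python
# Explicit unit handling -- the kind of bookkeeping Excel makes you do
# by hand, and Mathcad does automatically.
M_PER_NMI = 1852.0            # meters per nautical mile (exact)
YD_PER_M = 1.0 / 0.9144       # yards per meter (exact)

def knots_to_yd_per_min(speed_kn):
    """Convert a ship speed in knots to yards per minute, the natural
    units for a range-rate table worked in yards."""
    return speed_kn * M_PER_NMI * YD_PER_M / 60.0
```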

Mathcad Version of My Analysis

Figure 3 shows my version of this analysis using Mathcad, which was correct the first time I went through it. The key to this success was that Mathcad handles the units automatically. To be completely honest, when I had the unit problem in Excel, I decided to write up the problem in Mathcad. Seeing the correct unit conversions in Mathcad allowed me to easily see the error in my Excel. Note that I only solved one engagement scenario in this example. I could easily take this work and put it into a Mathcad program that would allow me to run as many scenarios as I wish. Here are the advantages and disadvantages of Mathcad:

  • (Advantage) Math-like notation.

    If you are familiar with mathematical notation, you can pick up the Mathcad syntax pretty quickly.

  • (Advantage) Automatic unit handling.

    I use this capability all the time. It does take a bit of getting used to -- especially for temperatures and decibels. However, it is a powerful feature.

  • (Advantage/Disadvantage) Requires using a Mathcad program to repeat the analysis steps with different parameters.

    I actually like putting things into Mathcad programs. I usually solve one case and get it right, then I put the equations into a program. That is what I would do here. However, it is an extra step. Excel makes it easy to repeat your calculations in adjacent rows/columns.

  • (Disadvantage) Does not display tabular data as cleanly as Excel.

    Getting a nice tabular display really requires inserting an Excel component into the Mathcad worksheet. This is not difficult, but native Mathcad does not do it well.

Figure 3: Mathcad Version of My Analysis.

Conclusion

I have decided not to choose between Mathcad and Excel -- I use them both and frequently on the same problem. Each has their strengths and I want to use these strengths to solve my problems. In this case, I thought I would blog about a common situation for me.

  • I wanted to use Excel to make a clean looking table and to allow others to work with the data.
  • I had some trouble getting my Excel formulas correct.
  • I solved one case in Mathcad and used that solution as a guide in getting my Excel to work.
Posted in Ballistics, History of Science and Technology, Military History, Naval History | 2 Comments

Snake Venom Math

Quote of the Day

It is easier to exclude harmful passions than to rule them, and to deny them admittance than to control them after they have been admitted.

— Seneca


Introduction

Figure 1: View from My Hotel.

I was recently in Barbados doing some field work. Before going anywhere in the field, I like to check whether there is anything in the area I will be visiting that could hurt me. I have become more careful since a trip to Florida a few years ago, where I was warned that an installer had seen a coral snake in one of our enclosures the week before. There are no venomous snakes in Barbados -- I had nothing to worry about. While doing the research, however, I encountered an interesting table on the lethality of the most dangerous snake venoms. I thought this table would be interesting to discuss here.

Before I dive into the topic of snake venom, I do want to share a photo from Barbados (Figure 1). It was fun working there and the people could not have been friendlier. If I get to go again, I will bring my wife.

Background

I stumbled upon Table 1 on this Wikipedia page. To discuss this table, I first need to define the column headings.

  • "Species" is actually the common name of the snake.
  • LD50 SC is the subcutaneous dose that will kill 50% of the subjects tested.
  • "Dose" is the amount of venom delivered in a strike.
  • "Mice" is the number of mice that could be killed per dose, assuming the dose was 100% lethal and equally divided among the mice.
  • "Humans" is the number of humans that could be killed per dose, assuming that
    • the dose was equally divided among the humans,
    • humans have the same venom sensitivity as the mice, and
    • the dose was 100% lethal.

I find this table interesting for a number of reasons:

  • You can see that there is a wide variation in the lethality of the different types of snake venom.
  • The amount of venom injected varies widely between species.
  • A little bit of calculator work showed that the "Mice Killed Per Dose" and "Humans Killed Per Dose" columns were scaled versions of the LD50 value. The mouse was assumed to have a mass of 20 grams and the human a mass of 75 kg. We can determine the number of animals killed per dose (N) with the equation \displaystyle N=\frac{Dose/LD_{50}}{m}, where m is the mass of the animal in question.
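That scaling can be written as a one-line sketch, using the masses given above (20 g mouse, 75 kg human). As a check, the black mamba row (400 mg dose, LD50 of 0.050 mg/kg) reproduces the table's 400,000 mice and 107 humans; not every row works out this cleanly.

```python
def animals_per_dose(dose_mg, ld50_mg_per_kg, mass_kg):
    """Number of animals a single venom dose could kill, assuming the
    dose is evenly divided and 100% lethal at the LD50 concentration."""
    return dose_mg / (ld50_mg_per_kg * mass_kg)

MOUSE_KG = 0.020   # 20 g mouse, as assumed in the Wikipedia table
HUMAN_KG = 75.0    # 75 kg human
```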

There were a number of assumptions involved in making this table. I thought it might be interesting to investigate those assumptions.

Table 1: Wikipedia Table on Snake Lethality.

| Species              | LD50 SC (mg/kg) | Dose (mg) | Mice      | Humans |
|----------------------|-----------------|-----------|-----------|--------|
| Inland taipan        | 0.010           | 110.0     | 1,085,000 | 289    |
| Black mamba          | 0.050           | 400.0     | 400,000   | 107    |
| Forest cobra         | 0.225           | 1102.0    | 244,889   | 65     |
| Eastern brown snake  | 0.030           | 155.0     | 212,329   | 59     |
| Coastal taipan       | 0.106           | 400.0     | 208,019   | 56     |
| Mainland tiger snake | 0.190           | 336.0     | 138,000   | 31     |
| Caspian cobra        | 0.210           | 590.0     | 135,556   | 27     |
| Russell's viper      | 0.162           | 268.0     | 88,211    | 22     |
| King cobra           | 1.090           | 1000.0    | 45,830    | 11     |
| Cape cobra           | 0.400           | 250.0     | 31,250    | 9      |
| Gaboon viper         | 5.000           | 2400.0    | 24,000    | 6      |
| Saw-scaled viper     | 0.151           | 72.0      | 23,841    | 6      |
| Fer-de-lance         | 3.100           | 1530.0    | 24,380    | 6      |
| Jameson's mamba      | 0.420           | 120.0     | 12,709    | 4      |
| Many-banded krait    | 0.090           | 18.4      | 10,222    | 3      |

Analysis

Variations in the Amount of Venom Injected

I assume that the amount of snake venom injected in a strike can vary widely. In fact, some other snake lethality charts list the variations. One web page shows a chart of how the amount of venom injected varies with subsequent strikes (Figure 2). The amount of venom and its potency also vary with the age and size of the snake.

Figure 2: Venom Injection Variation.

Differences in Mouse and Human Lethality Levels

I have always wondered how well any test results translate from mice to humans. In the case of Table 1, assuming that the mouse and human lethality concentrations are the same is the simplest approach. However, I have no basis on which to believe this assumption is accurate. As a counter-example, consider the case of the Sydney funnel-web spider. It has a venom that is deadly to humans (and other primates), but does not affect other mammals. So I guess I do not really believe the column on human lethality is accurate -- the venom could be more or less lethal than indicated.

Calculation of the Number of Mice and Humans Killed

The lethality of the venom is referred to as LD50, the dose that is lethal to 50% of the subjects exposed. Yet the calculations seem to assume that 100% of those exposed to an equal division of the venom would die. That does not seem correct.

Conclusion

Every day I see examples of data presented that seem to create more questions than they answer, which is just what Table 1 did for me.

Posted in General Science | Comments Off on Snake Venom Math