Temperature Limits for Handling Electronics

Quote of the Day

The man who moves a mountain begins by carrying away small stones.

— Confucius. Whenever I think of moving mountains one rock at a time, I think of the Crazy Horse Memorial.


Introduction

Figure 1: Telecommunications Outdoor Electronics Temperature Stackup.

I regularly receive questions on the handling requirements for Printed Circuit Boards (PCBs). In a previous blog post, I stated that I recommend that service personnel always wear gloves when handling outdoor electronics because electronics in an outdoor enclosure are required to function with an internal ambient temperature of 85 °C. The PCBs themselves usually operate at about 10 °C above the internal ambient temperature. So a maintenance technician could have to handle a PCB that is 95 °C (203 °F) – just short of the temperature of boiling water. I have measured PCB temperatures at Fort Mojave, AZ, and I can confirm the 95 °C value is real. Figure 1 illustrates how the temperature "stack up" works for a typical outdoor installation.
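The stack-up arithmetic above is simple enough to sketch in a few lines of Python (my illustration, not from the original post; the 85 °C and 10 °C figures are the ones quoted above):

```python
def c_to_f(temp_c):
    """Convert Celsius to Fahrenheit."""
    return temp_c * 9 / 5 + 32

enclosure_ambient_c = 85  # required internal ambient for outdoor enclosures
pcb_rise_c = 10           # typical PCB rise above the internal ambient
pcb_temp_c = enclosure_ambient_c + pcb_rise_c

# 95 degC works out to 203 degF, just short of boiling water.
print(pcb_temp_c, c_to_f(pcb_temp_c))
```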

Today, I was asked "What is a reasonable maximum touch temperature for a consumer product's enclosure?" I thought I would do a quick survey of what the different industry standards recommend. I am looking for two things – a recognized industry standard and a measurable requirement.

Survey Results

OSHA

I did find a good forum discussion that mentions OSHA and gave a link to an interesting OSHA response on surface temperature requirements. I quote from this response below.

1910.132(a): Protective equipment, including personal protective equipment for eyes, face, head, and extremities, protective clothing, respiratory devices, and protective shields and barriers, shall be provided, used, and maintained in a sanitary and reliable condition wherever it is necessary by reason of hazards of processes or environment, chemical hazards, radiological hazards, or mechanical irritants encountered in a manner capable of causing injury or impairment in the function of any part of the body through absorption, inhalation or physical contact.

There is nothing measurable here. They do call out ASTM C 1055-92, which I consider below.

ASTM International

ASTM C1055 has a table that describes how surface temperature affects skin.

Figure 2: Surface Temperature and Skin Sensation.

I found a web page that paraphrases the ASTM approach as follows:

ASTM C1055 (Standard Guide for Heated System Surface Conditions that Produce Contact Burn Injuries) recommends that pipe surface temperatures remain at or below 140°F. The reason for this is that the average person can touch a 140°F surface for up to five seconds without sustaining irreversible burn damage.

ASTM C1055 determined that five seconds is the most probable contact time in an industrial setting. In high ambient temperature environments or where there is an elevated risk to the worker, many process engineers will use 120°F as the maximum safe-to-touch temperature to further reduce the risk to workers.

This standard is measurable and reasonable, but it is focused on industrial settings with trained personnel.

Telcordia

Telcordia GR-63 is the "gold standard" for telecommunications requirements. Figure 3 shows their approach, which I prefer because it distinguishes between materials and contact time.

Figure 3: Telcordia Surface Temperature Limits.

An excellent industrial standard, but I am a bit concerned about applying it to non-industrial applications.

NASA

I generally find NASA's work definitive. They have an excellent paper that goes into the tradeoffs, and this paper contains Figure 4, which models "hot to touch" limits. The x-axis represents the surface thermal characteristics, with aluminum shown by the point on the left-hand side. They define infinite time as 600 seconds.

Figure 4: NASA Hot To Touch Chart.

The paper also states that

Given that the data on pain converge around the same value, the 44°C (111.2°F) epidermal/dermal interface temperature derived by Hatton and Halfdanarson from Stoll et al.'s data should be used as the upper limit for contact with hot objects.

This is a measurable standard for surface temperature limits. It basically is the same temperature as hot bathwater.

Conclusion

I like the NASA standard for surface temperature limits, and I will recommend following their lead. It is also the most conservative of all the recommendations I found.

Posted in Electronics | 2 Comments

Apparent Visual Magnitude of Binary Stars

Quote of the Day

Once you know one programming language, you pretty much know them all.

— George Winsky. I often refer to this quote as 'Winsky's Law'. I used to work with George at Hewlett-Packard. He said this after I mentioned how learning Basic, Pascal, C, etc. was all easy after having FORTRAN in school.


Introduction

Figure 1: Sirius and its Companion (Source).
Binary stars are very common in the universe.

I was reading a Wikipedia article on the star Iota Apodis (Figure 1), which is a binary star, and noticed that three apparent visual magnitudes were listed for the two stars: 5.41 (5.90/6.46). The visual magnitudes listed represent the combined and individual brightness of the two components (in parentheses). I became curious as to how the magnitudes were summed.

In this post, I will look at how to determine the combined visual magnitudes of two stars. I will also discuss the limitations of this sort of calculation.

This calculation is almost identical to the calculation that electrical engineers do when they must sum powers that are expressed in dB. We first must "un-dB" the values, sum them, then "re-dB" the sum. As with dB, I do not consider visual magnitudes to be a dimensional unit – they really are a scaling.

For those who are interested, I include my Mathcad source and a PDF here.

Background

Definitions

Apparent Magnitude (m)
The apparent magnitude of a celestial object is a number that is a measure of its brightness as seen by an observer on Earth. The brighter an object appears, the lower its magnitude value (i.e. an inverse relation). The Sun, at an apparent magnitude of −27, is the brightest object in the sky. The apparent magnitude is adjusted to the value it would have in the absence of the atmosphere.

The magnitude scale is logarithmic: a difference of one in magnitude corresponds to a change in brightness by a factor of \sqrt[5]{100}, or about 2.512. (Source)

Absolute Magnitude (M)
Absolute magnitude is the measure of intrinsic brightness of a celestial object. It is the hypothetical apparent magnitude of an object at a standard distance of exactly 10 parsecs (32.6 light years) from the observer, assuming no astronomical extinction of starlight. This places the objects on a common basis and allows the true energy output of astronomical objects to be compared without the distortion introduced by distance.
As with all astronomical magnitudes, the absolute magnitude can be specified for different wavelength intervals; for stars the most commonly quoted absolute magnitude is the absolute visual magnitude, which uses only the visual (V) band of the spectrum (UBV system). Also commonly used is the absolute bolometric magnitude, which is the total luminosity expressed in magnitude units that takes into account energy radiated at all wavelengths, whether visible or not. (Source)

Apparent Magnitude and Luminosity

Equation 1 shows the key relationship between luminosity and visual magnitude (Source).

Eq. 1 \displaystyle {{m}_{{\text{star}}}}={{m}_{\odot }}-2.5{{\log }_{{10}}}\left[ {\frac{{{{L}_{{\text{star}}}}}}{{{{L}_{\odot }}}}{{{\left( {\frac{{{{d}_{\odot }}}}{{{{d}_{{\text{star}}}}}}} \right)}}^{2}}} \right]

where

  • mStar is the apparent magnitude of a star.
  • m⊙ is the apparent magnitude of the Sun (it could be that of any reference star).
  • LStar is the visual luminosity of a star.
  • L⊙ is the visual luminosity of the Sun (it could be that of any reference star).
  • dStar is the distance to the star.
  • d⊙ is the distance to the Sun.

Analysis

Derivation

Figure 2 shows my derivation for the magnitude of the sum of magnitudes. The process is straightforward:

  • convert each magnitude to a luminosity (i.e. optical power).
  • sum the luminosities
  • convert the luminosities back to a magnitude.
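The three steps above are easy to sketch in code. Here is a short Python version (my own illustration, not the Mathcad routine) applied to the Iota Apodis components from the introduction:

```python
import math

def combined_magnitude(magnitudes):
    """Combine apparent magnitudes: "un-dB" each magnitude into a relative
    luminosity, sum the luminosities, then "re-dB" the sum."""
    total_luminosity = sum(10 ** (-m / 2.5) for m in magnitudes)
    return -2.5 * math.log10(total_luminosity)

# Iota Apodis components 5.90 and 6.46 combine to about 5.39,
# close to the measured combined value of 5.41 quoted earlier.
print(round(combined_magnitude([5.90, 6.46]), 2))
```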

Figure 2: Derivation of the Apparent Magnitude of Two Stars.

Test Cases

Figure 3 shows how I tested this formula. The process I used was to:

  • randomly grab eight test cases from Wikipedia's list of binary stars.
  • apply the formula.
  • check the formula against Wikipedia's value for the combined magnitude.

I should note that the agreement is not perfect because Wikipedia shows measured values, not computed values. This means that there will be some random variation.

Figure 3: Check the Formula Against the Wikipedia Values.

The eight test-case stars were Mu Cygni, Iota Apodis, Pi Aquilae, Meissa, Phi Andromedae, Lambda Cassiopeiae, Sigma Cassiopeiae, and Psi Sagittarii.

Conclusion

The agreement seems reasonable. One issue that I see with the calculation is that it is not always clear whether total luminosity or visual luminosity is being used. For example, Spica emits most of its energy outside of the visual band. As shown in Figure 4, the total luminosity and the visual luminosity can be quite different.

Figure 4: Quick Look at Spica.

After I finished writing this post, I found a Wikipedia paragraph that calls out the magnitude summation formula derived here.

Posted in Astronomy | 2 Comments

Working While Standing

Quote of the Day

I have had a delightful month - building a cottage and dictating a book: 200 bricks and 2000 words per day.

— Winston Churchill to Stanley Baldwin, 1928. Churchill viewed bricklaying as relaxing. Some people lay bricks for fun, others play with mathematics software ☺


Figure 1: Commonly Used Adjustable Desk (Source).

For the last two years, I have been working from the standing position. While I would like to say I work standing because it is supposed to be healthier, I actually work while standing because I was having some issues with numbness in one of my feet while sitting, and working from a standing position eliminated this problem. While it took a bit of time to get used to, I now prefer working from the standing position. In fact, I am now setting up my garage-based shop area for standing work.

I do not work from a variable workstation – like a Varidesk (Figure 1). Two years ago, I grabbed a socket wrench and elevated my cube's work surface to a height comfortable for me to work at when standing. I have never adjusted it down.

I became curious as to the history of doing desk work while standing. It turns out that I am in good company. For example, Winston Churchill worked from a standing position (Figure 2).

Figure 2: Churchill Standing At His Desk Working (Source).

I also discovered that Ernest Hemingway worked from a standing position (Figure 3).

Figure 3(a): Hemingway Standing at His Desk (Source). I have read that he wore down seven pencils a day. Figure 3(b): Hemingway Standing While Typing (Source).

Because Jefferson lived prior to photography, we have no photographs of him standing at work. However, he designed his own version of a Varidesk that allowed him to work either standing or sitting (Figure 4).

Figure 4: Jefferson Used a Variable Height Desk.


While not about working standing up, I have included Figure 5 because I love the idea of Winston Churchill doing bricklaying for recreation. From what I have read, he was a very good bricklayer.

Figure 5: Prime Minister Winston Churchill – Bricklayer.

One of my favorite books is "Cheaper by the Dozen" – the book is MUCH better than any of the movies that used the title. In that book, Frank Gilbreth became one of the world's fastest bricklayers as part of his research on efficient work methods.

Posted in Personal | 3 Comments

A Quick Look at Wavelength Crosstalk

Quote of the Day

The stock market is a device for transferring money from the impatient to the patient.

— Warren Buffet. I have worked to pass this wisdom on to my sons. The patient investor will sleep much better than the impatient investor.


Introduction

Figure 1: Chart from NGPON2 Standard That Bothered Me.

I was reviewing an industry standard when I saw Figure 1, which clearly looked wrong – the asymptotes seem like they are in the wrong place. I decided to take a closer look at this figure and see if I could determine what the correct version of this chart would be.

The chart shows how adding multiple wavelengths (i.e. colors) onto a fiber will impact the performance of a system. The metric used to measure this impact is called the power penalty, which is a function of the number of wavelengths and their level of isolation from each other. In this post, I will use a Mathcad model to show how to generate a clearer version of this chart. It was an interesting diversion (~15 minutes) from my usual workload of budgeting and planning.

There were several reasons why I feel this work is worth documenting here:

  • It shows the kind of challenges being faced by those who are putting multiple wavelengths onto Passive Optical Networks (PONs).
  • It shows how one can easily compute the location of asymptotes using a computer algebra system.
  • I used Mathcad scriptable components to help annotate my version of Figure 1.

For those who are interested, my Mathcad source and its PDF are included here.

Background

PON Background

Figure 2 shows how a current generation Gigabit Passive Optical Network (GPON) works. Here are the key points to understand:

  • Every home in the network has an Optical Network Terminal (ONT) for converting the optical signal into voice, video, and data services.
  • The ONTs connect to a central office through an Optical Line Terminal (OLT).
  • The ONT and OLT are connected to each other over a set of components called the Optical Distribution Network (ODN) – the ODN requires no power because it is composed entirely of glass components.
  • The central office connects every home to the internet backbone.

Figure 2: Current Generation Gigabit Passive Optical Network (GPON).

While today's PONs use only two wavelengths (1490 nanometers [nm] and 1310 nm) for digital services, future PONs (e.g. NGPON2) will use many wavelengths. The use of multiple wavelengths will allow us to enormously increase the amount of data carried over existing fiber optic cables. Unfortunately, each wavelength added will impair the performance of the other wavelengths on the fiber. This post calculates the magnitude of this impairment.

Definitions

Wavelength Division Multiplexing (WDM)
In fiber-optic communications, WDM is a technology which multiplexes a number of optical carrier signals onto a single optical fiber by using different wavelengths (i.e., colors) of laser light. This technique enables bidirectional communications over one strand of fiber, as well as multiplication of capacity. (Source)
Channel
A range of wavelengths assigned to a particular beam of light, which is generated by a laser. The wavelength of a laser moves as a function of the laser's temperature. Feedback control systems are put in place to modulate the laser's temperature to ensure that its wavelength stays in its defined channel range.
Extinction Ratio (ER)
The ratio of the logic "1" optical power to the logic "0" optical power. ER is a measure of how hard you are driving the laser. Large ER values will cause the laser to have wider spectral width, which will increase dispersion and co-channel interference.
Wanted Channel (WC)
The channel for which we want to detect the performance degradation attributable to the presence of other channels.
Disturbing Channel (DC)
The channels that are degrading the performance of the WC.
Adjacent Channel (AC)
The DCs occupying the channels on either side of the WC. ACs usually cause more degradation in the WC because they are closer and the effective isolation levels are lower.
Non-Adjacent Channel (NAC)
These are DCs that are not adjacent to the WC. Because of their greater wavelength separation from the WC, their effective isolation levels tend to be higher.
Channel Crosstalk (CC)
Sometimes called inter-channel crosstalk, it is the ratio of total power ingress from the DCs into the WCs. In general, the receivers cannot distinguish between photons of different wavelengths and the DCs appear as Gaussian noise to the WC's receiver.
Power Penalty (PP)
For this discussion, power penalty is the reduction in signal power (and SNR) due to a specific impairment. Some people define power penalty as the amount of signal power increase needed to compensate for a specific impairment.
Isolation (I)
The amount of power reduction between the signal power in the DCs and the WC. This reduction is normally provided by optical filtering using thin films.

CC Model

In a WDM system, there are multiple wavelengths (i.e. colors) traveling on the fiber. Each wavelength contains a separate data transmission – this means that each wavelength needs a separate receiver (Figure 3). We use a device, called a demultiplexer, to separate the colors and direct each wavelength to its receiver. The demultiplexer contains a filter that reduces the power of the DCs by an amount IA or INA, which we call the isolation. If the filter were perfect, the isolation would be infinitely large.

Figure 3: Illustration of the Wavelength Demultiplexing Operation. Observe how we try to send a single color to each receiver, but others colors always leak in. These other colors interfere with the receiver's ability to correctly read the data in its assigned color.

Figure 4 shows how we will model the wavelength separation process using four parameters (all expressed in dB):

  • Each ONT will transmit at a slightly different power because of imperfections in their laser power control systems (ΔPONT).
  • Each ONT can be at a different range, which means the light received from each ONT can incur a different amount of distance-dependent attenuation (dMax).
  • Each AC will be attenuated by IA.
  • Each NAC will be attenuated by INA.

Figure 4: Power Relationships Between ONTs.

Given these assumptions, we can express the channel crosstalk as shown in Equation 1.

Eq. 1 \displaystyle {{C}_{C}}=\Delta {{P}_{{ONT}}}+{{d}_{{Max}}}+10\cdot {{\log }_{{10}}}\left[ {2\cdot {{{10}}^{{-\frac{{{{I}_{A}}}}{{10}}}}}+\left( {N-3} \right)\cdot {{{10}}^{{-\frac{{{{I}_{{NA}}}}}{{10}}}}}} \right]

where

  • CC is the total cross-channel interference power (dB).
  • N is the number of wavelengths in our WDM system.

Power Penalty

Equation 2 is the model used for the power penalty. The derivation of Equation 2 is not simple, and I will not derive it here.

Eq. 2 \displaystyle P{{P}_{C}}=-5\cdot {{\log }_{{10}}}\left[ {1-\frac{{{{{10}}^{{\frac{{2\cdot {{C}_{c}}}}{{10}}}}}}}{{N-1}}\cdot {{Q}^{2}}\cdot {{{\left( {\frac{{ER+1}}{{ER-1}}} \right)}}^{2}}} \right]

where

  • CC is total cross-channel interference power (dB).
  • ER is the extinction ratio of the WDM signals (linear). The extinction ratio is defined as
    ER\triangleq \frac{{{{P}_{1}}}}{{{{P}_{0}}}}, where P1 is the power of logic one and P0 is the power of logic zero.
  • BER is the desired bit error rate.
  • Q is the Q-function evaluated at the BER level, i.e. Q\left(BER\right)=\sqrt{2}\cdot erf{{c}^{{-1}}}\left( {2\cdot BER} \right).

One interesting aspect of Equation 2 is that it has an asymptote where the argument of its logarithm goes to zero. The asymptote tells us that there is a threshold level of crosstalk above which we will not be able to communicate. This makes sense – have you ever been in a noisy, crowded room and been unable to understand the people next to you?
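To make the asymptote behavior concrete, here is a Python sketch of Equation 2 (mine, not from the standard; the ER and BER values are assumptions chosen only for illustration):

```python
import math
from statistics import NormalDist

def q_factor(ber):
    """Q corresponding to a bit error rate: Q = sqrt(2) * erfc^-1(2 * BER),
    which is the upper-tail standard-normal quantile."""
    return -NormalDist().inv_cdf(ber)

def power_penalty(Cc, N, ER, ber):
    """Crosstalk power penalty (dB) per Equation 2; Cc in dB, ER linear."""
    Q = q_factor(ber)
    arg = 1 - (10 ** (2 * Cc / 10) / (N - 1)) * Q ** 2 * ((ER + 1) / (ER - 1)) ** 2
    return -5 * math.log10(arg)

# Assumed example: ER = 10 (linear), BER = 1e-10.
print(round(power_penalty(-10, 8, 10, 1e-10), 3))   # modest penalty
print(round(power_penalty(-6.6, 4, 10, 1e-10), 2))  # large penalty near the asymptote
```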

Analysis

Determine the Asymptotes

Determining the locations of the power penalty asymptotes is simple enough.

  • The asymptote will occur when Equation 2's logarithm has an argument of zero.
  • Take the argument of the logarithm of Equation 2 and determine the value of Cc that makes it zero.

Figure 5 shows how I determined the asymptote locations on the crosstalk axis. My results are different from those shown in the specification – the standard shows asymptotes at Cc = 0.4 dB (N=4) and Cc = 1.8 dB (N=8).

Figure 5: My Calculation of the Asymptote Positions.

In Figure 6, I show the values of isolation (adjacent and non-adjacent) that will produce a crosstalk value with an infinite power penalty.

Figure 6: Isolation Levels at the Asymptotes.

I did try to determine what math error the ITU might have made. While the Cc values they computed are theoretically possible, they could only be asymptotes for a negative ER, which is not physically possible (Figure 7).

Figure 7: My Quick Look at their Asymptotes.

My Version of Figure 1

Figure 8 shows what I believe to be the correct plot. It shows Equation 2 graphed over the same Cc range used in the standard, and I get the same curves shown there. However, my asymptotes look much more reasonable. This graph was generated in Mathcad, with the text boxes done using scriptable components. I added the lines using a graphic editor (PicPick).

Figure 8: My Recommendation for the Chart.

Conclusion

I frequently am asked to review specifications. In this case, I did all my analysis in Mathcad. When I was done, I had a complete report ready to go. All that I needed to add were a few graphic lines to show key relationships.

Posted in optics | 1 Comment

Optimum Resistor Ratio Approximation Using Standard Components

Quote of the Day

Being defeated is often a temporary condition. Giving up is what makes it permanent.

— Marilyn vos Savant


Introduction

Figure 1: Differential Amplifier With Its Output
Proportional to a Resistor Ratio (Source).

Many analog circuits are designed so that their critical performance characteristics are a function of the ratio of resistances (e.g. Figure 1). For example, I worked on analog pacemakers as a student intern. Those pacemakers were hybrid analog circuits with a number of parameters that were set using laser-trimming. In fact, a goal of many analog designers is for their creations to be ratiometric.

While you can laser trim values in production, this operation is expensive and may not be necessary if you can achieve a sufficiently accurate resistor ratio value using standard E-series resistors.

This post presents an algorithm, implemented in Mathcad and Excel, for determining the optimum resistor ratio approximation using standard resistor values. I also include a comparison of the output of my algorithm with that of a web-based application I use as a reference for determining resistor ratio approximations.

Background

All the background you need on standard resistor values is included in this post.

Analysis

Source

My Mathcad (source and PDF) and Excel routines are included here.

E-Series Resistors

Figure 2 shows my array that holds the E-series resistor values. Note that the table does not show all the resistor values – only the top few are shown.

Figure 2: E-Series Resistor Values.

Algorithm

Figure 3 shows the algorithm. There is nothing sophisticated here – it is pure brute force. The routine returns a vector representing two resistor values, which I call R1 and R2.
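The brute-force idea translates directly into a few lines of Python (my sketch of the approach, not the Mathcad listing; I use the E12 decade values over three decades for brevity, while the real routine uses the full E-series table):

```python
# Brute-force search for the standard-value pair whose ratio R2/R1 best
# approximates a target ratio.
E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]
VALUES = [v * 10 ** d for d in range(3) for v in E12]

def best_ratio(target):
    """Return (R1, R2) minimizing |R2/R1 - target| over the value table."""
    return min(((r1, r2) for r1 in VALUES for r2 in VALUES),
               key=lambda p: abs(p[1] / p[0] - target))

r1, r2 = best_ratio(3.14159)  # approximate pi as a resistor ratio
print(r1, r2, r2 / r1)
```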

Figure 3: Resistor Ratio Approximation Algorithm.

Test Cases

Test Case Evaluation

Figure 4 shows the test cases that I substituted into the reference web page and the resistor values it produced. I also show the utility functions that I used to take this data and compare it to the output from my algorithm.

Figure 4: Ratio Test Cases Evaluated on Web Site.

Figure 4: Ratio Test Cases Evaluated on Web Site.

Comparison

Figure 5 shows the comparison of my algorithm's output and the output from the reference web page. Most of the results are identical. I found five exceptions (colored in red on the right side of the table):

  • Two of my resistor choices differ from those produced by the web page but have identical resistor ratios (no issue).
  • Three of the results are distinctly different, and my results are more accurate than those produced by the web page – I believe that the web page has a bug.

Figure 5: My Algorithm Versus Web Page.

Conclusion

I wrote this post because I was recently working with an analog designer who asked if I had a routine for determining the resistors required for an optimum ratio approximation. I have used this routine for years and it has worked well.

Posted in Electronics | 2 Comments

Admiral Ackbar for President

Quote of the Day

Life can only be understood backwards; but it must be lived forwards.

— Soren Kierkegaard. The older I get, the more I understand this quote.


Introduction

Figure 1: "Admiral Ackbar for President" Poster.

Figure 1: "Admiral Ackbar for President" Poster.

It seems like the presidential selection process has gone on forever, and we still have months to go before it is over. To show my unhappiness about the whole process, I have hung an election poster for Admiral Ackbar outside of my cube. His one-word campaign slogan is a simple one – "Trap". I have always thought it was telling that the admiral leading the rebel alliance was the last one to know that he had led his troops into a trap.

There are numerous web sites dedicated to Admiral Ackbar (example). There are also numerous versions of Ackbar campaign posters. I received this one from my son and his girlfriend.

Here is an Ackbar demotivational poster that  I like.

Figure 2: Admiral Ackbar Demotivational Poster.

Figure 2: Admiral Ackbar Demotivational Poster (Source).

Posted in Humor | 4 Comments

Mathcad Program for Selecting Best Resistor Approximation

Quote of the Day

Experience is inevitable; learning is not.

— Paul Schoemaker, an expert in the process of decision-making. Everyone in corporate America understands this quote and has lived it.


Introduction

Figure 1: E12 Series of Resistor Values (Source).

I gave a seminar today on the use of Mathcad 15 in an engineering organization. The discussion was mainly on Mathcad basics, plus my exhortations on properly documenting your math work so that it can be understood and supported by others – and years from now, YOU. I have given this presentation before, and it went well. During these seminars, I like to include examples of my standard process for doing engineering mathematics using a computer algebra system.

My personal development process includes a number of simple Mathcad functions to assist me with performing common tasks. One simple Mathcad function that I wrote years ago helps me select resistor values from among the standard offerings from manufacturers. I keep this function in my electrical design template. I have used this function hundreds of times. Since other engineers also use it, I have to believe that it has been used thousands of times.

Background

I beat the standard resistor values to death in this post. In order to keep the algorithm simple, I ignored special cases like 0 Ω resistances and sub-1 Ω resistances.

Analysis

Source

I am including both Mathcad and Excel versions of this algorithm (Source). For the discussion below, I will focus on the Mathcad version – both versions work identically.

Basic Approach

Figure 2 shows my Mathcad function for finding the best standard resistor match to a design value using E-series resistors. Because the E-series values only approximately correspond to geometric series values (i.e. you cannot compute the values easily from a function), I put all the standard resistor values into a table. Using this table, I computed all absolute differences between my desired resistor value and the E-series values. I then selected the E-series value corresponding to the minimum absolute error.
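The same table-lookup approach is easy to express in Python (a sketch of mine, not the Mathcad function itself; I use the E12 values here for brevity):

```python
E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]
# Expand the decade values to cover 1 ohm through 8.2 Mohm.
VALUES = [v * 10 ** d for d in range(7) for v in E12]

def nearest_standard(R):
    """Return the standard value with the minimum absolute difference
    from the desired resistance R."""
    return min(VALUES, key=lambda v: abs(v - R))

print(nearest_standard(3160))  # nearest E12 value to 3.16 kohm
```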

Figure 2: Algorithm for Selecting the Nearest Standard Resistor Value.

Conclusion

Over the next few weeks, I plan on reviewing a number of the utility functions that I use for designing electronic circuits. While none of these functions are complicated, they do speed your design work.

Posted in Electronics | Comments Off on Mathcad Program for Selecting Best Resistor Approximation

Realizing a Non-Standard Resistance Value

Quote of the Day

The trouble with the world is that the stupid are cocksure and the intelligent are full of doubt.

— Bertrand Russell. This quote describes my experience in the engineering world. I recently discovered that this observation is called the Dunning-Kruger effect, a form of cognitive bias.


Introduction

Figure 1: Three Resistor Configurations Used to Realize a Non-Standard Resistance Value (Source).

I recently needed to realize a non-standard resistance value precisely in production. Historically, I have used potentiometers (mechanical or electronic) to trim these precision circuits. Potentiometers are undesirable in production because they tend to be expensive and unreliable, require access for adjustment, and can drift.

While I was researching options, I ran across an interesting approach that uses three resistors to realize any resistance from 10 Ω to 1 MΩ within 0.1 % from a set of 70 standard resistance values plus short and open values.

This approach was developed by W. Stephen Woodward, who is a master of analog design. His original article included software to select the resistors, but I was unable to find that code. I have implemented my own version of the code in both Excel VBA and Mathcad, which I discuss below.

Background

Definitions

Brute Force Algorithm
In computer science, brute-force search or exhaustive search, also known as generate and test, is a very general problem-solving technique that consists of systematically enumerating all possible candidates for the solution and checking whether each candidate satisfies the problem's statement (Source).
Trimming
The act of setting a circuit parameter to a specific value by adjusting a component value – usually a resistance. This can be done through a number of means, such as a trim potentiometer, laser trimming, or manual resistor selection from a large stock of values.
Standard Resistor Values
Some electronic components (resistors, capacitors, inductors, and Zener diodes) are available from manufacturers in standard values that were determined using the Renard system. The specific values used are referred to as the E-series. For a complete discussion of the E-series of standard component values, see this post.
Select-In-Test Resistor
During a calibration activity, a specific resistor value is determined to be required for proper circuit operation. This resistor is then selected from a set of discrete resistors. This approach has the disadvantage of requiring a large set of slightly different resistor values – this is not a practical approach in most cases.

Operational Use

Many analog circuits require calibration. Often, this calibration involves determining a specific resistor value to ensure that a device meets its requirements. While potentiometers are easy to design in, they can be unreliable in practice. Woodward's circuits (Figure 1) allow three discrete resistors with standard values to be used to set a specific resistance within 0.1%. This is impressive – it allows you to have a small set of standard parts satisfy a requirement for a precise, non-standard resistance value.

Analysis

For those of you that want to experiment with this routine, I have implemented it in Excel and Mathcad (Source). My discussion below will focus on the implementation in Mathcad, but the Excel implementation is similar. The Excel implementation uses VBA, so you will receive security warnings when you open it.

Utility Functions

Figure 2 shows some of the utility functions that I used to realize the algorithm.

Figure 2: Utility Functions Used By My Resistor Selection Algorithm.

Brute Force Resistor Selection Algorithm

Figure 3 shows the algorithm that I used to try every resistor combination. It may not be very efficient, but this algorithm does not need to execute often.

Figure 3: Brute Force Algorithm for Determining Resistor Values.
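For readers who want a feel for the search, here is a minimal Python sketch of a brute-force selection. The R1 + (R2 ∥ R3) topology and the E12-based candidate set are my assumptions for illustration; Woodward's actual circuits and my Mathcad and Excel implementations differ in detail.

```python
import itertools

# E12 mantissas spread over five decades, plus short (0) and open (infinity).
# This candidate set is an illustrative assumption, not Woodward's exact set.
E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]
VALUES = [m * 10 ** d for d in range(1, 6) for m in E12]  # 10 ohm to 820 kohm
CANDIDATES = VALUES + [0.0, float("inf")]                 # short and open

def parallel(a, b):
    """Parallel resistance; handles short (0) and open (inf) cases."""
    if a == 0.0 or b == 0.0:
        return 0.0
    if a == float("inf"):
        return b
    if b == float("inf"):
        return a
    return a * b / (a + b)

def best_triple(target):
    """Exhaustively test every R1 + (R2 || R3) combination."""
    best = None
    for r1, r2, r3 in itertools.product(CANDIDATES, repeat=3):
        total = r1 + parallel(r2, r3)
        if total == float("inf"):
            continue
        err = abs(total - target) / target
        if best is None or err < best[0]:
            best = (err, r1, r2, r3, total)
    return best  # (relative error, R1, R2, R3, realized resistance)

err, r1, r2, r3, realized = best_triple(12345.0)
```

With 74 candidate values, the exhaustive search tests 74³ ≈ 405,000 combinations, which takes well under a second – acceptable for a routine that runs only during calibration.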

Test Cases

Figure 4 shows the results of my limited testing.

Figure 4: Test Cases for Checking Out My Algorithm.

Conclusion

This was a good exercise for demonstrating how you can code the same algorithm in both Excel VBA and Mathcad – I thought about using Python, but more electrical engineers use Excel than Python. In any case, I often prototype my algorithms in Mathcad – I find it very easy to experiment in Mathcad.

From a mathematical standpoint, I am not sure how to prove Woodward's claim that you can find a three standard resistor combination to realize any resistor value from 10 Ω to 1 MΩ within 0.1%. I need to think about that for a bit.

Posted in Electronics | 12 Comments

Standard Resistor Values

Quote of the Day

Never interrupt someone doing something you said couldn't be done.

— Amelia Earhart


Introduction

Figure 1: Plot of the E12 Series of Resistors (Source).

I have been designing circuits with resistors since I was a kid working on science fair projects – I still remember building my first Radio Shack photocell project. While I have always thought of resistors as simple devices, I recently discovered that I have been laboring under a misconception about the standard resistor values.

Until last week, I believed that the E-series standard resistor values were selected to ensure that if I needed a resistor within x% of a specific value, I simply needed to choose a resistor from the x% tolerance set. For example, Figure 1 shows the E12 series (i.e., ±10%) values – notice that each tolerance range overlaps the adjacent ranges. This means that you can always find an E12 resistor value within 10% of your required value.

I was a bit surprised that I could not find an E48 value (±2% tolerance) within 2% of my design value. I was so surprised that I stopped what I was doing and learned how the E-series of standard resistor values were determined. It was an interesting side trip that I thought was worth discussing here.

When can you find an x% resistor within x% of a specific value? The answer is "it depends …"

  • For 20% (E6), 10% (E12), and 5% (E24) resistors, you can always find a standard resistor value within 20%, 10%, or 5%, respectively, of the value you want.
  • For the 2% (E48), 1% (E96), and 0.5% (E192) resistors, you will NOT always be able to find a standard resistor value within 2%, 1%, or 0.5%, respectively, of the value you want.

My objective here is to demonstrate the issue and provide a couple of ways of dealing with it. This is not a big deal because I can just specify a 1% or 0.5% resistor to get closer to the value that I need. I was just surprised that the E-series standard allowed these gaps. The tolerance on a resistor value simply means that the manufacturer guarantees that a resistor value is within the tolerance % of that specific value. For a given series, it does not mean that you can find a standard resistance value within the tolerance range of any specific value.

Background

Definitions

Tolerance
In engineering, tolerance is the permissible limit or limits of variation in some parameter of a system or component (Source). Tolerance is often, but not always, expressed as the percentage of variation permitted from a specified value. All system parameters are subject to random variation, and the designer must deal with it.
Relative Percent Error (AKA Approximation Error)
The relative percent error (symbol δ) is the percentage discrepancy between an exact value and some approximation to it (Source). We normally compute the relative percent error with the equation \delta = 100 \cdot \left| \frac{x_{approx} - x}{x} \right|, where x is the desired value and x_{approx} is the approximate value.
Preferred Number
Preferred numbers are standard guidelines for choosing exact product dimensions within a given set of constraints (Source).
Renard Numbers
Renard's system of preferred numbers, adopted in 1952 as international standard ISO 3, divides the interval from 1 to 10 into 5, 10, 20, or 40 steps. The factor between two consecutive numbers in a Renard series is approximately constant (before rounding), namely the 5th, 10th, 20th, or 40th root of 10 (approximately 1.58, 1.26, 1.12, and 1.06, respectively), which leads to a geometric sequence. This way, the maximum relative error is minimized if an arbitrary number is replaced by the nearest Renard number multiplied by the appropriate power of 10 (Source).
E-Series
In electronics, IEC 60063 defines a preferred number series for resistors, capacitors, inductors, and Zener diode voltages that subdivides the interval from 1 to 10 into 6, 12, 24, 48, 96, and 192 steps (similar in approach to the Renard numbers). These subdivisions ensure that when some arbitrary value is replaced with the nearest preferred number, the maximum relative error will be on the order of 20%, 10%, 5%, 2%, 1%, 0.5% (Source).

The last sentence of the E-Series definition is the important thing here – the maximum relative error only roughly corresponds to the tolerance. You are not guaranteed to have a preferred number within the tolerance range of every value.

I should also mention that the actual E-series values do not always follow the geometric relation x_i = \left( \sqrt[N]{10} \right)^i, where N is the series number and i = 0 … N-1. The E6, E12, and E24 series have had some of their values moved around a bit (Appendix A). The E48 and E96 series exactly match the geometric series. The E192 series has only one discrepancy – 9.20 instead of the geometric series value of 9.19 (Appendix B).
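The geometric series is easy to generate and compare against the published tables. Here is a short Python sketch; the rounding convention (two decimal places for the three-digit series, one for E24) is my assumption about how the tables were derived:

```python
def e_series_geometric(n, decimals=2):
    """Ideal geometric E-series mantissas: 10**(i/n) for i = 0 .. n-1, rounded."""
    return [round(10 ** (i / n), decimals) for i in range(n)]

# E48 and E96 match the geometric series; E192 deviates at a single entry,
# where the geometric value rounds to 9.19 but the published table lists 9.20.
e192 = e_series_geometric(192)
print(e192[185])  # 9.19

# The published E24 table (one decimal place) was adjusted in several places:
E24_TABLE = [1.0, 1.1, 1.2, 1.3, 1.5, 1.6, 1.8, 2.0, 2.2, 2.4,
             2.7, 3.0, 3.3, 3.6, 3.9, 4.3, 4.7, 5.1, 5.6, 6.2,
             6.8, 7.5, 8.2, 9.1]
moved = [(g, t) for g, t in zip(e_series_geometric(24, 1), E24_TABLE) if g != t]
```

Running this comparison shows the E24 table deviating from the pure geometric values at eight entries (2.7, 3.0, 3.3, 3.6, 3.9, 4.3, 4.7, and 8.2), which matches the red-circled values in Appendix A.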

Analysis

Absolute Error Versus Tolerance

Figure 2 shows the maximum relative error you will see for a given manufacturer's tolerance specification. Observe that the E48, E96, and E192 series have tolerances less than the maximum relative error.

Figure 2: Difference Between Relative % Errors and Manufacturing Tolerances.

Graphical View

E6 Example Showing Full Coverage From One to Ten

Figure 3 shows a set of bars that illustrate the range of values covered by each resistor value in the E6 series. Note that each resistor range overlaps the adjacent resistor ranges. This means that any value in the range 1 to 10 is within 20% of an E6 value.

Figure 3: Plot of E6 Series Value Ranges.

E48 Example Showing Gaps For Some Numbers Between One and Ten

Figure 4 shows a graph similar to Figure 3, but for the E48 series (±2%). It is difficult to see at this scale, but there is not a standard value within 2% of every value from one to ten.

Figure 4: Plot of E48 Value Ranges.

We can magnify the scale of Figure 4 and show an example of the gaps that exist. For a concrete example, consider the number 8.455. It is 2.5% away from 8.25 and 2.4% away from 8.66, the two nearest E48 values.

Figure 5: Illustration of Gaps in the E48.
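The gap is easy to verify numerically. Here is a quick Python check, assuming the E48 mantissas follow the geometric series 10^(i/48) rounded to two decimal places:

```python
# E48 mantissas from the geometric series, rounded to two decimal places.
E48 = [round(10 ** (i / 48), 2) for i in range(48)]

def nearest_error_pct(x, series):
    """Relative % error from x to the nearest series value."""
    nearest = min(series, key=lambda v: abs(v - x))
    return nearest, 100 * abs(nearest - x) / x

nearest, err = nearest_error_pct(8.455, E48)
print(nearest, round(err, 2))  # 8.25 2.42 -- more than 2% away
```

So for a design value of 8.455, the best an E48 (±2%) resistor can do is about 2.4% error – outside the tolerance figure.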

Again, this is not a big deal because we can work around this issue. However, I was just surprised to learn this after all these years.

Workarounds

The easiest workaround is simply to use a resistance series with a finer resolution. In my case here, I wanted to use the E96 series. I could also have used the E192 series, which would have solved the problem. I should mention that some folks use multiple resistors to "tweak" a value. Here are some example circuits (Figure 6) from W. Stephen Woodward. I have put out a blog post on how to select standard resistor values that bring you within 0.1% of any resistance value between 10 Ω and 1 MΩ.

Figure 6: Realizing a Specific Resistance Value Using Multiple Resistors (Source).

Conclusion

The Renard numbers and their E-series variants are used for all sorts of components, including capacitors, inductors, and Zener diodes. This exercise was useful because it showed me that there is stuff to learn even in something that I have been using for years.

Appendix A: E6, E12, E24 Series Geometric Variances

Figure 7 shows the differences (marked with red ovals) between the E6, E12, and E24 values and the associated geometric series.

Figure 7: Red Circles Mark Differences Between E-Series and Geometric Series.

I do not know why the E6, E12, and E24 values were not set equal to the geometric series values. I speculate that modifying the geometric values slightly improved some characteristic important to manufacturing. For example, the total overlap between adjacent values is larger for the E-series values than for the geometric series values. This likely reduces the relative error and may improve yields for difficult-to-control parameters, like the Zener diode voltages that also use the E-series.

Figure 8: Total Overlap is Larger with E Series Values Than Geometric Series Values.

Appendix B: E48, E96, E192 Series Geometric Variances

In Figure 9, I show that among the series values for E48, E96, E192, there is only one discrepancy between the standard values and the associated geometric series (9.19 vs 9.20).

Figure 9: One Discrepancy Between E48, E96, E192 and the Associated Geometric Series.

Posted in Electronics | 8 Comments

Planets Around An Ultra-Cool Dwarf Star

Quote of the Day

Mistakes are a great educator when one is honest enough to admit them and willing to learn from them.

— Alexander Solzhenitsyn


Introduction

Figure 1: Artist's Conception of the Trappist-1 System.

I have been interested in the possibility of there being habitable regions around stars that are smaller and dimmer than the Sun. I saw an article this week on a planetary system around a star, Trappist-1, that is not much larger than Jupiter and that is quite cool for a star (2550 K).

My plan here is to look at some measured data for Trappist-1 and see if I can derive some of the other parameters for this star and its system. I will use information available from Wikipedia and the Open Exoplanet Catalog.

Planetary scientists have discovered three planets around Trappist-1 (Figure 1), and I think I can duplicate their estimates of the planets' orbital radii and solar insolation. I will add an estimate of their effective temperatures.

Background

Definitions

Insolation
Insolation is the solar radiation striking Earth or another planet. It is the rate of delivery of solar radiation per unit of horizontal surface (Source).
Brown Dwarf
A brown dwarf is a substellar object not massive enough to ever fuse hydrogen into helium, but still massive enough to fuse deuterium—less than about 0.08 solar masses and more than about 13 Jupiter masses (Source).
Ultra-Cool Dwarf
A dwarf star of spectral type M7 or later. This category spans the lowest-mass hydrogen-burning stars and the warmest brown dwarfs (Source).

Exoplanet Background

The exoplanet background for this post is covered in an earlier post.

Analysis

My Mathcad source and its PDF are included here.

Star Characteristics

Figure 2 summarizes:

  • My analysis setup
  • Trappist-1's measured star characteristics (light green highlight)
  • My comparison of Trappist-1 with Jupiter in terms of diameter and mass (yellow highlight):
    • Trappist-1's diameter is only 13% larger than Jupiter's
    • Trappist-1's mass is 83.8 times that of Jupiter.

Figure 2: Definitions and Star Characteristics.

Planet Orbital and Insolation Characteristics

Figure 3 shows my application of Kepler's 3rd law to determine each planet's orbital radius. I also determine each planet's insolation from Trappist-1. Two of the planets are much warmer than the Earth. The data for the 3rd planet had a wide range of values – for computational ease, I chose the shortest possible orbital period. Using the shortest period means that I am assuming the planet's orbital radius is the minimum consistent with the observations.

In Figure 3, measured period data has a light green highlight, while my derived characteristics are highlighted in yellow.

Figure 3: Planet Orbit and Insolation Characteristics.
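The orbit and insolation calculations can be sketched in a few lines of Python (my originals are in Mathcad). The TRAPPIST-1b parameters in the example are rough values I use for illustration and may differ slightly from the measured values highlighted in Figure 3:

```python
import math

# Physical constants (SI)
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
SIGMA = 5.670e-8     # Stefan-Boltzmann constant [W m^-2 K^-4]
M_SUN = 1.989e30     # solar mass [kg]
R_SUN = 6.957e8      # solar radius [m]
AU = 1.496e11        # astronomical unit [m]

def orbital_radius(period_days, star_mass_solar):
    """Kepler's 3rd law (planet mass neglected): a = (G*M*T^2 / (4*pi^2))^(1/3)."""
    T = period_days * 86400.0
    M = star_mass_solar * M_SUN
    return (G * M * T ** 2 / (4 * math.pi ** 2)) ** (1 / 3)

def insolation(star_temp, star_radius_solar, orbit_m):
    """Stellar flux at the orbit: L / (4*pi*a^2), with L = 4*pi*R^2*sigma*T^4."""
    L = 4 * math.pi * (star_radius_solar * R_SUN) ** 2 * SIGMA * star_temp ** 4
    return L / (4 * math.pi * orbit_m ** 2)

# TRAPPIST-1b, assumed values: ~1.51-day period around a ~0.08 solar-mass,
# 2550 K star of ~0.114 solar radii.
a = orbital_radius(1.51, 0.08)
s = insolation(2550.0, 0.114, a)
print(a / AU, s)  # roughly 0.011 AU and a few kW/m^2
```

For comparison, the Earth's insolation is about 1.36 kW/m², so this innermost planet receives roughly four times the Earth's flux.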

Estimated Planetary Temperatures

Figure 4 shows my estimates for the planet temperatures. One of the planets has an effective temperature similar to that of Earth (-21 °C), and the others are much hotter.

Figure 4: Estimate of Planet Temperatures.
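The effective temperature follows from balancing the power a planet absorbs against the power it radiates. Here is a minimal sketch; the 30% albedo is my assumption, and the exact assumptions behind Figure 4 may differ:

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]
R_SUN = 6.957e8    # solar radius [m]

def effective_temperature(star_temp, star_radius_solar, orbit_m, albedo=0.3):
    """Equilibrium temperature: T_p = T_star * sqrt(R_star/(2*a)) * (1-A)**0.25."""
    r_star = star_radius_solar * R_SUN
    return star_temp * math.sqrt(r_star / (2.0 * orbit_m)) * (1.0 - albedo) ** 0.25

# Sanity check with the Sun and Earth: the textbook equilibrium value is ~255 K.
t_earth = effective_temperature(5778.0, 1.0, 1.496e11)
print(round(t_earth))  # 255
```

The same function applied to the Trappist-1 planets, using the orbital radii from Figure 3, reproduces estimates in the range shown in Figure 4.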

Conclusion

I find these small stars and their planets interesting. Even though these stars are relatively cool, it is possible to have planets close enough to these stars to receive significant heat.

In this case, the star is close to the Earth (~40 light-years). I hope we will be able to gather more data about this planetary system in the near future.

Posted in Astronomy | Comments Off on Planets Around An Ultra-Cool Dwarf Star