- 2018-10-11 at 12:17 AM#865370
Topic: The Bachelor Reality TV Show Goes MGTOW
in forum MGTOW Central

I don’t watch reality TV but apparently here in Australia The Bachelor reality TV dating show has gone MGTOW! 25 girls tried to snag The Bachelor Australia 2018, Nick ‘The Honey Badger’ Cummins. In a piece of MGTOW GOLD, Nick decided the juice was not worth the squeeze and chose none of the girls!
For those not familiar with Nick ‘The Honey Badger’ Cummins, he seems like a top bloke with a wicked sense of humor. Here are some of his TV interviews –
Apparently there’s been an outcry from the feminazis and media industry libtards incredulous that Nick took a look at the Aussie women on offer, and threw ’em all back into the dating pool. Well played sir …
As expected, the butt-hurt finalists took to the airwaves to bitch and moan about being rejected. Check out the princess mentality and sense of entitlement –
Note how the beta-simp radio host Kyle Sandilands throws out conjecture that Nick might be gay. Typical white-knight, MGTOW shaming move.
The Bachelor was filmed 6 months ago and finished airing in Australia last week, but the media have been following the poor bloke on his holidays in Papua New Guinea – “Honey Badger arrives back in Sydney following explosive Bachelor finale”.
To quote the article –
“The Honey Badger, 31, flew out of the country 10 days ago, just before the jaw-dropping episode aired during which he dumped both finalists and stayed single — sparking an uproar.
Not since Blake Garvey dumped Sam Frost after the season two finale and shacked up with second runner-up Louise Pillidge has a Bachelor angered so many fans of the show.
Cummins, 31, was escorted by security through Brisbane Airport on Wednesday as he returned home after completing the Kokoda Trail and then spending a few days holidaying in Port Moresby.
Cummins, 31, refused to be drawn into giving a proper answer as he was grilled by a waiting photographer.
“I think you owe the (Bachelor) girls an explanation, don’t you?” the photographer asked him.
“It was six months ago mate … I think we’re all over it … I think you need to (get over it),” Cummins replied.
Welcome to MGTOW ladies, where more and more men are refusing to be entrapped by the Vagina Monetization Scam. This is the world you have created. MGTOW for the win!
#ManOut
2018-10-07 at 8:40 PM#864497
In reply to: Let The Anger Go
what is it that intrinsically drives you?
Engaging in hardcore anal with death when jumping out of planes and surviving.
Turning death into my bitch.
I will in fact die someday.
But prior to that day – That ONE time that death finally got what it wanted, I’ll be laughing over how many times I f~~~ed it.
F~~~ed it hard. Teased it first, and it came up empty every time leading up to the final time when it gets me.
But it will be funny because I cheated it so many times prior.
Never fear something. Once you fear it then it owns you. Every minute of every day.
Embrace fear. Grab its hair, pull back hard. Whisper in its ear that she’s a slut. Then f~~~ fear.
2018-10-03 at 11:15 AM#863121
In reply to: Gargamel/Gravel Pit Wedding RSVP now!
RESEARCH Open Access

Effects of radiation emitted by WCDMA mobile phones on electromagnetic hypersensitive subjects

Min Kyung Kwon1,2, Joon Yul Choi1,2, Sung Kean Kim2,3, Tae Keun Yoo4 and Deok Won Kim1,2,3,4*
Abstract

Background: With the use of third generation (3G) mobile phones on the rise, social concerns have arisen concerning the possible health effects of radio frequency-electromagnetic fields (RF-EMFs) emitted by wideband code division multiple access (WCDMA) mobile phones in humans. The number of people with self-reported electromagnetic hypersensitivity (EHS), who complain of various subjective symptoms such as headache, dizziness and fatigue, has also increased. However, the origins of EHS remain unclear.

Methods: In this double-blind study, two volunteer groups of 17 EHS and 20 non-EHS subjects were simultaneously investigated for physiological changes (heart rate, heart rate variability, and respiration rate), eight subjective symptoms, and perception of RF-EMFs during real and sham exposure sessions. Experiments were conducted using a dummy phone containing a WCDMA module (average power, 24 dBm at 1950 MHz; specific absorption rate, 1.57 W/kg) within a headset placed on the head for 32 min.

Results: WCDMA RF-EMFs generated no physiological changes or subjective symptoms in either group. There was no evidence that EHS subjects perceived RF-EMFs better than non-EHS subjects.

Conclusions: Considering the analyzed physiological data, the subjective symptoms surveyed, and the percentages of those who believed they were being exposed, 32 min of RF radiation emitted by WCDMA mobile phones demonstrated no effects in either EHS or non-EHS subjects.

Keywords: Provocation, Physiological changes, HRV, Subjective symptoms, EMF perception
Background

With the increasing use of third generation (3G) mobile phones, social concerns have arisen concerning the possible health effects of radio frequency-electromagnetic fields (RF-EMFs) emitted by mobile phones in humans [1]. On the basis of limited evidence from both human and animal studies, the World Health Organization has classified RF-EMFs as possibly carcinogenic to humans (Group 2B) [2]. A number of people have self-reported electromagnetic hypersensitivity (EHS), characterized by a variety of non-specific symptoms that differ from individual to individual. Cross-sectional survey studies in different countries have reported that EHS subjects experience non-specific subjective symptoms (e.g., headache, dizziness, fatigue, sleep disorder) associated with EMF exposure: 1.5% in Sweden [3], 3.2% in California [4], and 5% in Switzerland [5]. For some individuals, the symptoms can have lifestyle-changing consequences [6].

Although numerous studies have examined the effects of Global System for Mobile Communications (GSM) signals on humans in EHS and non-EHS groups, only a few provocation studies involving WCDMA have simultaneously evaluated physiological changes, subjective symptoms, and EMF perception. Furubayashi et al. measured psychological and cognitive parameters during pre- and post-exposure [7]. They also monitored physiological parameters, such as skin temperature, heart rate and local blood flow, and asked participants (EHS and non-EHS women) to report on their subjective perception of EMF emitted by WCDMA devices. They concluded that EHS and non-EHS groups did not differ in their responses to real or sham EMF exposure with respect to any psychological, cognitive, or autonomic parameter.

Electromagnetic sensibility, in the context of subjective symptoms and perception, refers to the ability to perceive EMF without necessarily developing non-specific health symptoms attributable to EMF exposure [8]. Mueller et al. reported no significant differences in the ability to detect EMF between EHS and non-EHS groups [9]. In a study by Hietanen et al., in which EHS subjects were examined for their ability to perceive EMF, none of the subjects could distinguish real EMF exposure from sham exposure [10]. Kwon et al. reported that there was no evidence to indicate that EHS subjects could detect EMF exposure [11]. However, Leitgeb et al. reported that a subset of EHS subjects with significantly increased electromagnetic sensibility could be differentiated from non-EHS groups [8]. Therefore, a comprehensive study is necessary to understand whether EHS is actually caused by exposure to RF-EMFs.

* Correspondence: kdw@yuhs.ac
1 Brain Korea 21 Project for Medical Science, Yonsei University College of Medicine, Seoul, South Korea. 2 Department of Medical Engineering, Yonsei University College of Medicine, Seoul, South Korea. Full list of author information is available at the end of the article.

© 2012 Kwon et al.; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Kwon et al. Environmental Health 2012, 11:69
http://www.ehjournal.net/content/11/1/69
Methods

Subjects

Because determination of EHS subjects was crucial to this provocation study [5], we utilized the EHS screening tool developed by Eltiti et al. [12]. We adopted the following criteria to identify EHS individuals: (1) a total symptom score greater than or equal to 26 out of a maximum score of 228 (57 symptoms, each ranked from 0 for “not at all” to 4 for “a great deal”); (2) individuals who explicitly attribute their symptoms to exposure to 3G mobile phones only; and (3) individuals whose current symptoms cannot be explained by a pre-existing chronic illness.
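As a rough illustration, the three screening criteria can be combined into a small function. This is a sketch only; the function name and argument layout are ours, not part of the Eltiti et al. tool.

```python
def meets_ehs_criteria(symptom_scores, attributes_to_3g_only, explained_by_chronic_illness):
    """Combine the three screening criteria above (illustrative sketch).

    symptom_scores: 57 integers, each 0 ("not at all") to 4 ("a great deal"),
    giving a maximum possible total of 57 * 4 = 228.
    """
    if len(symptom_scores) != 57 or not all(0 <= s <= 4 for s in symptom_scores):
        raise ValueError("expected 57 symptom scores, each in the range 0-4")
    total = sum(symptom_scores)
    return (total >= 26                          # criterion 1: total score >= 26 of 228
            and attributes_to_3g_only            # criterion 2: symptoms attributed to 3G phones only
            and not explained_by_chronic_illness)  # criterion 3: no pre-existing explanation
```

For example, a respondent scoring 1 on 26 symptoms and 0 on the rest just reaches the threshold, provided the other two criteria hold.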
The experiment was performed as a double-blind study with a total of 45 subjects. Initially, 19 EHS and 26 non-EHS subjects were screened; however, two EHS subjects and six non-EHS subjects were excluded. The two EHS subjects were excluded because they were considered outliers in respiration rate, which was either greater than two standard deviations from the median (extreme outlier) or 20 beats per min higher than normal without exposure. In the non-EHS group, one subject was excluded because of drowsiness and motion artifacts during the experiment; three subjects were excluded because they were outliers with respect to heart rate; and two subjects were eliminated because of abnormal electrocardiograms (ECG). None of the EHS or non-EHS subjects failed to attend the second day after attending the first day. Therefore, data from a total of 37 subjects (17 EHS and 20 non-EHS) were analyzed in this study. As shown in Table 1, there were no significant differences in male-female ratio, age, height, weight, body-mass index, nonsmoker-smoker ratio, computer usage time, TV viewing time, or mobile phone usage between the two groups.
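The “more than two standard deviations from the median” exclusion rule can be sketched as a helper (our code, not the authors’; the threshold and the use of the median follow the description above):

```python
import statistics

def is_extreme_outlier(value, group_values):
    """Flag a measurement more than two standard deviations away from the
    group median, following the 'extreme outlier' rule described above."""
    median = statistics.median(group_values)
    sd = statistics.stdev(group_values)   # sample standard deviation of the group
    return abs(value - median) > 2 * sd
```

Applied per variable (respiration rate, heart rate), this reproduces the kind of screening decision described, though the authors do not publish their exact procedure.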
The subjects were advised not to consume caffeine, smoke, or exercise, and to get enough sleep before the experimental day, in order to minimize confounding factors. All subjects, who were recruited by advertisements at the Yonsei University Hospital System (YUHS), were informed of the purpose and procedure of the experiment and were required to give written consent to participate in this study. The Institutional Review Board of the YUHS approved the protocol of this study (project number: 1-2010-0030).
Experimental setup

The laboratory was used exclusively for this experiment, and all electrical devices other than our instruments were unplugged in order to minimize background field levels. Background extremely low frequency (ELF) fields at the level of the head in the laboratory were measured to ensure that they did not influence the subjects. The average ELF electric and magnetic fields were determined to be 1.8 ± 0.0 V/m and 0.02 ± 0.01 μT, respectively, measured using an electric and magnetic field analyzer (EHP-50C, NARDA-STS, Milano, Italy). The RF field was determined to be 0.05 ± 0.00 V/m over the frequency range from 1920 to 1980 MHz, measured using a radiation meter (SRM 3000, Narda GmbH, Pfullingen, Germany).
To achieve better control over exposure, we used WCDMA modules with Qualcomm chipsets (baseband: MSM6290, RF: RFR6285, power management: PM6658; San Diego, CA) to generate WCDMA RF-EMFs instead of a regular smart phone. The WCDMA modules continuously transmitted at a mean output power of 24 dBm at 1950 MHz, which was measured using a wireless communication test set (E5515C, Agilent, Santa Clara, CA). The modules were inserted into a dummy phone [13], and the location of the module was varied to meet the recommended general public specific absorption rate averaged over 1 g of tissue (SAR1g) of 1.6 W/kg according to the IEEE Standard [14]. The SAR measurements were made with a DASY 4 measurement system (SPEAG, Zurich, Switzerland), and a Twin SAM (specific anthropomorphic mannequin) phantom was filled with head tissue-equivalent liquid (mass density, 1000 kg/m3) as specified by the Federal Communications Commission. The measured dielectric properties of the liquid were σ = 1.41 S/m and εr = 39.7 for the WCDMA frequency range. When the antenna of the module was positioned 67.5 mm from the ear reference point (ERP) of the dummy, the averaged peak spatial SAR1g was determined to be 1.57 W/kg at 1950 MHz at the left cheek position [15]. The electric field and power drift at the ERP were 6.9 V/m and −0.001 dB, respectively. The measured SAR distribution is shown in Figure 1.
The module was connected via a 5-m USB cable and a USB-type ammeter to a portable laptop computer (X-note R500, LG Electronics, Korea), which controlled the module and monitored electrical current to check exposure conditions (Figure 2). The laptop computer was remotely controlled from a desktop computer outside the room to satisfy the double-blind study design. The dummy phone was attached to the subject’s head using an earplug and headset to fix it at the ERP next to the cheek [16]. The phone was held at a distance of 3 mm from the ear using a piece of wood for insulation, to prevent battery-generated heat from giving subjects an indication that the phone was working. The apparatus was constructed from plastic and rubber only, without any metal [16,17].
Experimental procedures

No information was given to the subjects except that they would be asked about symptoms and RF-EMF perception at the beginning of the first experimental day. Sham and real sessions were conducted as a double-blind test to minimize any bias resulting from a subject or an experimenter recognizing the operational state of the WCDMA module. The experiment was performed over two days, one day for a real session and a second day for a sham session (or vice versa). No matter which came first, sham or real exposure, the second session was always conducted at approximately the same time of day as the first session in order to maintain the subjects’ physiological rhythm. The order of sham and real sessions for each subject was randomly assigned and counterbalanced by our automatic exposure control program, written in MATLAB 2008a (MathWorks Inc., Natick, MA), to minimize experimental bias. Nine subjects in the EHS group and 11 in the non-EHS group received the sham exposure session first. The interval between sessions was a minimum of one day and a maximum of ten days.
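The authors counterbalanced session order with a MATLAB program; the same idea can be sketched in a few lines of Python (names and seeding are ours, not the study’s code):

```python
import random

def counterbalanced_orders(n_subjects, seed=0):
    """Randomly assign each subject a session order while keeping the two
    orders as evenly split as possible across the group (sketch)."""
    rng = random.Random(seed)
    orders = ["sham-first", "real-first"] * (n_subjects // 2)
    if n_subjects % 2:  # odd group size: one extra order, chosen at random
        orders.append(rng.choice(["sham-first", "real-first"]))
    rng.shuffle(orders)  # randomize which subject receives which order
    return orders
```

With an even group size the two orders come out exactly balanced; with an odd size (such as the 17 EHS subjects here) they differ by one, which matches the 9/8-style splits reported above.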
Room temperature and relative humidity, which could considerably affect outcomes, were recorded and maintained. For the non-EHS group, room temperature showed no significant difference between real (24.4°C ± 0.9°C; Min = 23°C, Max = 26°C) and sham (24.5°C ± 0.8°C; Min = 23°C, Max = 26°C) sessions (P = 0.627). Humidity likewise showed no significant difference between real (40.0% ± 2.2%; Min = 35%, Max = 45%) and sham (40.8% ± 3.3%; Min = 35%, Max = 45%) sessions (P = 0.161). For the EHS group, room temperature showed no significant difference between real (24.1°C ± 0.9°C; Min = 23°C, Max = 26°C) and sham (24.2°C ± 1.1°C; Min = 23°C, Max = 27°C) sessions (P = 0.682), and humidity showed no significant difference between real (40.0% ± 2.4%; Min = 32%, Max = 45%) and sham (39.7% ± 2.7%; Min = 36%, Max = 46%) sessions (P = 0.732).
Physiological measurements

The duration of each exposure session was 64 min, as shown in Figure 3. Before the experiment, subjects were instructed to rest in a sitting position for at least 10 min. Physiological data were collected for 5 min each at four different stages: pre-exposure (stage I), after 11 min of exposure (stage II), after 27 min of exposure (stage III), and post-exposure (stage IV). At each stage, ECG and respiration were simultaneously measured for 5 min (the minimum data requirement for HRV) [18]. Heart rate, HRV, and respiration rate were obtained with a computerized polygraph (PolyG-I, Laxtha, Daejeon, Korea) at a sampling frequency of 512 Hz. The data were transferred to a nearby laptop computer (LG Electronics) and analyzed using data acquisition (Telescan 0.9) and analysis (Complexity) software (Laxtha). The PolyG-I recorded ECG through Ag-AgCl electrodes (2223; 3M, St. Paul, MN) placed on both arms and the right leg of participants.

Table 1 Demographic data of subjects
                                 EHS           Non-EHS       P-value
No. of subjects (n)              17            20            –
Male:female                      8:9           11:9          0.75
Age (yr)                         30.1 ± 7.6    29.4 ± 5.2    0.87
Height (cm)                      167.9 ± 7.5   167.6 ± 8.0   0.71
Weight (kg)                      63.2 ± 11.9   60.3 ± 11.5   0.44
BMI (kg/m2)                      22.3 ± 2.9    21.3 ± 2.3    0.24
Nonsmoker:smoker                 15:2          18:2          1.00
Computer usage time (h/d)        4.4 ± 2.9     5.0 ± 3.8     0.99
TV viewing time (h/d)            1.6 ± 1.3     1.5 ± 1.1     0.96
Mobile phone usage period (yr)   10.9 ± 3.0    11.6 ± 2.6    0.33

Figure 1 The measured SAR distribution of the WCDMA module on the left side.
Some studies have indicated that EHS subjects may exhibit abnormal autonomic nervous system regulation [19,20]. Therefore, we first obtained heart rate from ECGs and then acquired HRV and the power spectrum of HRV. High-frequency power (HFP) reflects respiratory sinus arrhythmia, an index of parasympathetic nerve activity, whereas low-frequency power (LFP) reflects the activity of both sympathetic and parasympathetic nerves [21]. In this study, the LFP/HFP ratio was used as an index of autonomic nerve activity balance. Respiratory inductance plethysmography, with an excitation frequency of 3 MHz, was used to measure respiration rate. Subjects wore a coiled band around their upper abdomen to measure inductance changes resulting from cross-sectional change.
Subjective symptoms and perception of EMF

The four shaded areas in Figure 3 denote periods during which subjects were questioned regarding the eight symptoms; each period lasted approximately 1 min. The eight subjective symptoms of throbbing, itching, warmth, fatigue, headache, dizziness, nausea, and palpitation were evaluated through verbal surveys, graded on a 4-point scale ranging from 1 (no sensation) to 4 (strong sensation) [22]. In addition, perception of EMF exposure was investigated every 5 min throughout the entire session, denoted by an “o” in Figure 3. Subjects were asked to answer the question “Do you believe that you are exposed right now?” nine times during each session. Percentages of those who believed they were being exposed were calculated for the pre-exposure, exposure, and post-exposure periods. Across the 37 subjects (17 + 20), the total number of inquiries was 185 (5 × 37) during real exposure and 481 (13 × 37) during non-exposure.
Data analysis

A repeated-measures two-way analysis of variance (ANOVA) was performed using SPSS software (SPSS 18, SPSS, Chicago, IL) to investigate differences in heart rate, respiration rate, and relative change in LFP/HFP with exposure and stage for the EHS and non-EHS groups. A P-value < 0.05 was considered statistically significant. Subjective symptoms, which are ordered paired data, were analyzed using the non-parametric Wilcoxon signed-rank test. A total of 64 P-values (4 stages × 8 symptoms × 2 groups) were obtained for the real and sham exposure sessions for the eight symptoms at four stages in both groups. The significance level was adjusted to 0.0125 (0.05/4) because testing was performed at four stages.
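The stage adjustment above is a Bonferroni correction; as a short sketch (helper names are ours):

```python
def bonferroni_threshold(alpha=0.05, n_tests=4):
    """Adjusted per-test significance level for n_tests comparisons,
    as used above for the four stages (0.05 / 4 = 0.0125)."""
    return alpha / n_tests

def significant(p_values, alpha=0.05, n_tests=4):
    """Return a parallel list of booleans: True where the P-value clears
    the Bonferroni-adjusted threshold."""
    threshold = bonferroni_threshold(alpha, n_tests)
    return [p < threshold for p in p_values]
```

Under this rule a stage-wise P-value of, say, 0.02 is not significant even though it clears the unadjusted 0.05 level.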
There were two exposure sessions for each participant, and nine perception inquiries per session, as shown in Figure 3: one inquiry during pre-exposure, five during sham or real exposure, and three during post-exposure. In both groups, the percentages of those who believed they were being exposed were obtained and evaluated for significant differences between real and sham sessions using the McNemar test. The pre-exposure period of the sham sessions was compared with that of the real sessions to test whether the conditions before sham and real exposures were the same. The sham exposure period was compared with the real exposure period to test whether the subjects could detect the fields. The post-exposure period after sham exposure was compared with the post-exposure period after real exposure to test whether real exposure influenced the perception of exposure in the post-exposure period. The Chi-square test was applied to evaluate differences in the percentages of those who answered “yes”, which were ordinal data, as shown in Figure 4.

Figure 2 Block diagram of exposure setups.

[Figure 3 timeline omitted: a 64-min session running from a pre-exposure resting period through stages I-IV of real or sham exposure to post-exposure, with “o” marking the nine perception inquiries at roughly 5-min intervals.]

Figure 3 Experimental procedures for measuring physiological changes and investigating symptoms and perception. The four shaded areas are periods during which the subjects were questioned regarding the eight symptoms. “o” indicates the timing of the perception inquiries.
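For paired yes/no responses like these, the McNemar test depends only on the discordant pairs. A minimal exact version is sketched below; the paper does not state whether the exact or the chi-square form was used, so this is an illustration, not the authors' procedure.

```python
from math import comb

def mcnemar_exact_p(b, c):
    """Two-sided exact McNemar P-value from the discordant-pair counts:
    b subjects answered "yes" only in session 1, c only in session 2.

    Under the null, each discordant pair is a fair coin flip, so the
    smaller count follows Binomial(b + c, 0.5)."""
    n = b + c
    if n == 0:
        return 1.0  # no discordant pairs: nothing to test
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n  # one binomial tail
    return min(2 * tail, 1.0)  # double for a two-sided test, capped at 1
```

For instance, with 10 discordant pairs split 1 versus 9, the two-sided exact P-value is 22/1024, about 0.021.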
Results

EHS and non-EHS groups

The symptom scores for the EHS and non-EHS groups obtained using the Eltiti scale were 53.9 ± 28.5 and 9.3 ± 7.4 (mean ± SD), respectively, and they were significantly different (P < 0.001). The most typical symptoms reported in the EHS group (n = 17) among the 57 subjective symptoms on the questionnaire (multiple answers allowed) were fatigue (n = 17), headaches (n = 17), heaviness in the head (n = 17), exhaustion (n = 15), migraine (n = 15), sleep disturbance (n = 15), vertigo (n = 14), and difficulty in focusing attention (n = 14). The most typical symptoms reported in the non-EHS group (n = 20) were fatigue (n = 14), blurry vision (n = 10), difficulty in concentration (n = 10), heaviness in the head (n = 9), difficulty in focusing attention (n = 8), headaches (n = 6), migraine (n = 6), and pain/warmth in the head (n = 6).
Physiological variables

Heart rate, respiration rate, and LFP/HFP ratios of the non-EHS and EHS groups during real and sham exposure are shown in the top section of Table 2. For analysis of the relative changes in LFP/HFP, the LFP/HFP values for real and sham sessions were expressed relative to the corresponding stage I values (defined as 100%) because of large individual variation. A repeated-measures two-way ANOVA showed no significant differences in heart rate, respiration rate, or LFP/HFP by stage or exposure in either group, except for LFP/HFP by stage in both groups, as shown in the bottom section of Table 2. For the non-EHS group, LFP/HFP showed no significant difference between real and sham exposures (P = 0.552) but did show a significant difference among stages (P = 0.001). For the EHS group, LFP/HFP was likewise not significantly different between real and sham exposures (P = 0.079) but was significantly different among stages (P = 0.048).
Subjective symptoms

Neither the EHS nor the non-EHS group showed significant differences in any of the eight subjective symptoms surveyed (throbbing, itching, warmth, fatigue, headache, dizziness, nausea, and palpitation) between sham and real sessions in any of the four stages.

Figure 4 Percentage of belief of being exposed in EHS and non-EHS groups for sham (A) and real (B) exposure sessions. Asterisks indicate statistical significance in perception percentages between EHS and non-EHS groups. Bars indicate standard errors.
Perception percentages

Table 3 shows the percentage of subjects who believed they were being exposed during pre-exposure, exposure (real or sham), and post-exposure in the EHS and non-EHS groups. To compare the percentages of those perceiving exposure during experimental sessions, we applied the McNemar test and found no significant difference between real and sham exposures in the EHS (P = 0.572) or non-EHS (P = 0.375) group. To test whether there were any delayed effects of real exposure on post-exposure perception, we applied the same test and found no significant difference in the percentages of those who believed they were being exposed following real and sham exposures in the EHS (P = 1.000) or non-EHS (P = 1.000) group. There was also no significant difference during pre-exposure between real and sham sessions in the EHS (P = 1.000) and non-EHS (P = 1.000) groups, indicating that the conditions experienced by subjects before real and sham exposures were the same. Similarly, Kruskal-Wallis tests showed that the percentages of those who believed they were being exposed during the pre-exposure, sham exposure, and post-exposure periods were not significantly different in the EHS (P = 0.263) or non-EHS (P = 0.426) group, demonstrating that conditions were the same for subjects throughout the sham-exposure sessions.

Figure 4 shows the percentages of subjects in the EHS and non-EHS groups, for each inquiry number, who believed they were being exposed in the sham (Figure 4A) and real (Figure 4B) exposure sessions. Although there were significant differences between the EHS and non-EHS groups during the real exposure period (Figure 4B), there were also significant differences during the sham exposure period (Figure 4A), suggesting that the significant differences between the groups during real exposure were not actually caused by exposure. The same reasoning applies to the significant differences during pre- and post-exposure in both the sham and real sessions. The higher percentages in the EHS group during both the sham and real sessions probably resulted from a bias of EHS individuals, who believe they can feel EMF, as described in our previous reports [23,24]. Therefore, there is no evidence that individuals in the EHS group perceived the radiation emitted by WCDMA mobile phones better than those in the non-EHS group.
Discussion

Neither the EHS nor the non-EHS group showed significant differences in heart or respiration rate between real and sham exposures or among stages. In the case of LFP/HFP, however, there were significant differences between some stages during both real and sham exposure sessions in both groups. One disadvantage of the LFP/HFP analysis is that it is considerably influenced by stress, which can increase or decrease LFP/HFP [25]. Hjortskov et al. reported that psychological stress could result in increased LFP/HFP [26]. Nam et al. reported that LFP/HFP monotonically increased at each exposure stage in both EHS and non-EHS groups during 30 min of sham exposure [23]. In a subsequent study, Nam et al. also confirmed that LFP/HFP significantly increased over time in the absence of exposure, an effect the authors attributed to acute sleep deprivation resulting from awakening subjects with a noise when they exhibited drowsiness [27]. An additional potential source of stress was the requirement that subjects not move during the 64-min experiment; in fact, the “no-movement” requirement was the factor that drew the most complaints from subjects.

Table 2 Descriptive statistics and statistical tests for heart rate, respiration rate, and LFP/HFP by stage, exposure, and their interaction

Stage means (standard error):

Heart rate (bpm)
Stage  Non-EHS real  Non-EHS sham  EHS real    EHS sham
I      76.0 (1.7)    75.6 (2.5)    77.0 (2.8)  77.2 (2.8)
II     75.5 (1.6)    75.3 (2.6)    77.8 (2.9)  77.2 (2.8)
III    75.2 (1.7)    74.4 (2.2)    76.4 (2.7)  77.6 (2.9)
IV     75.1 (1.6)    73.3 (2.1)    76.9 (2.8)  77.6 (2.9)

Respiration rate (bpm)
Stage  Non-EHS real  Non-EHS sham  EHS real    EHS sham
I      17.2 (0.6)    17.3 (0.6)    17.4 (0.6)  18.0 (0.8)
II     17.3 (0.7)    17.9 (0.5)    17.6 (0.6)  17.0 (0.7)
III    16.9 (0.7)    17.6 (0.5)    17.5 (0.6)  17.3 (0.6)
IV     18.4 (0.7)    17.7 (0.5)    17.1 (0.7)  17.5 (0.7)

LFP/HFP (%)
Stage  Non-EHS real   Non-EHS sham  EHS real      EHS sham
I      100.0 (0.0)    100.0 (0.0)   100.0 (0.0)   100.0 (0.0)
II     143.9 (27.0)   165.6 (12.8)  133.8 (15.0)  122.7 (17.0)
III    151.0 (31.5)   167.6 (23.4)  198.3 (32.8)  110.6 (13.7)
IV     131.3 (23.5)   178.0 (19.9)  178.5 (31.0)  141.5 (23.5)

Factor P-values (non-EHS, EHS):
Heart rate: exposure 0.629, 0.815; stage 0.166, 0.727; interaction (exposure × stage) 0.621, 0.226
Respiration rate: exposure 0.772, 0.754; stage 0.205, 0.614; interaction 0.518, 0.431
LFP/HFP: exposure 0.552, 0.079; stage 0.001*, 0.048*; interaction 0.428, 0.055
* P < 0.05; bpm: beats per minute.

Table 3 Percentages of those who believed they were being exposed during the pre-exposure, exposure, and post-exposure periods, with P-values for sham versus real exposures in the EHS and non-EHS groups

Group             Session  Pre-exposure (%)  P-value  Exposure (%)  P-value  Post-exposure (%)  P-value
EHS (n = 17)      Real     47.1              1.000    65.9          0.572    62.8               1.000
                  Sham     41.2                       61.2                   62.8
Non-EHS (n = 20)  Real     0.0               1.000    5.0           0.375    6.7                1.000
                  Sham     0.0                        8.0                    6.7
In the current study, neither the EHS nor the non-EHS group showed significant differences in any of the four stages between real and sham sessions for any of the eight symptoms surveyed. Wilén et al. reported that exposure to RF-EMFs cannot explain perceived mobile phone-attributed symptoms in EHS or non-EHS subjects [28]. Koivisto et al. also reported that RF exposure did not produce any consistent subjective symptoms or sensations, such as headache, dizziness, or fatigue, in non-EHS subjects [22]. Most likely, therefore, the subjective symptoms resulted from a nocebo effect, meaning that adverse symptoms occurred due to negative expectations [29].
There were no significant differences between real and sham exposures in the percentages of either group who believed they were being exposed during the pre- or post-exposure periods. There were also no significant differences in the perception percentages for either the EHS or non-EHS group across the sham exposure session (pre-exposure, sham exposure, post-exposure). Our experimental protocol therefore seems minimally biased, since we confirmed that there were no delayed effects, no differences in pre-exposure condition, and no difference in the percentage of those who believed they were being exposed among the pre-exposure, sham exposure, and post-exposure periods. With regard to the outliers, we repeated the analyses with the excluded outlier subjects included, to see whether their inclusion actually changed the statistical tests for the physiological variables, symptoms, and perception. The results including the outliers were not significantly different from those excluding them.
We used the EHS screening tool developed by Eltiti et al. to identify individuals who were sensitive to RF-EMFs [12]. At present there is no objective diagnostic criterion for classifying someone as EHS. In the future, statistical weighting of people’s self-reported hypersensitivity should be used to substantiate their EHS claims.
Conclusions

In both the EHS and non-EHS groups, there were no significant differences in heart rate, respiration rate, or LFP/HFP between sham and real exposure to a WCDMA module (average power, 24 dBm at 1950 MHz; specific absorption rate, 1.57 W/kg) attached inside a dummy phone for 32 min. There was no association between the eight subjective symptoms and RF-EMF exposure in either group. There was also no indication that EHS subjects could detect exposure. Therefore, considering the physiological data analyzed, the subjective symptoms surveyed, and the percentages of those who believed they were being exposed, no effects were observed in EHS or non-EHS subjects as a result of 32 min of RF radiation emitted by WCDMA mobile phones.
Abbreviations

ANOVA: Analysis of variance; ECG: Electrocardiogram; EHS: Electromagnetic hypersensitivity; ELF: Extremely low frequency; ERP: Ear reference point; GSM: Global System for Mobile Communications; HFP: High-frequency power; HRV: Heart rate variability; h/d: Hours per day; IEEE: Institute of Electrical and Electronics Engineers; LFP: Low-frequency power; Max: Maximum; Min: Minimum; n: Number; RF-EMFs: Radio frequency-electromagnetic fields; SAR: Specific absorption rate; SD: Standard deviation; WCDMA: Wideband code division multiple access; yr: Year; YUHS: Yonsei University Hospital System; 3G: Third generation.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
MKK recruited subjects, collected experimental data, and performed
statistical analyses. JYC and SKK collected experimental data. TKY analyzed
experimental data. DWK contributed to the development of the study
protocol and editing of the manuscript. All authors read and approved the
final manuscript.
Acknowledgements
This research was supported by a grant from the Basic Science Research
Program through the National Research Foundation of Korea (NRF) funded
by the Ministry of Education, Science and Technology (2010–0022374), and
Power Generation & Electricity Delivery of the Korea Institute of Energy
Technology Evaluation and Planning (KETEP) grant funded by the Korea
government Ministry of Knowledge Economy (No. 2011101050018D).
Author details
1Brain Korea 21 Project for Medical Science, Yonsei University College of
Medicine, Seoul, South Korea. 2Department of Medical Engineering, Yonsei
University College of Medicine, Seoul, South Korea. 3Graduate Program in
Biomedical Engineering, Yonsei University, Seoul, South Korea. 4Department
of Medicine, Yonsei University College of Medicine, Seoul, South Korea.
Received: 30 April 2012 Accepted: 17 September 2012
Published: 21 September 2012
References
1. Heinrich S, Thomas S, Heumann C, von Kries R, Radon K: Association
between exposure to radiofrequency electromagnetic fields assessed by
dosimetry and acute symptoms in children and adolescents: a
population based cross-sectional study. Environ Health 2010, 9:75.
2. Baan R, Grosse Y, Lauby-Secretan B, El Ghissassi F, Bouvard V,
Benbrahim-Tallaa L, Guha N, Islami F, Galichet L, Straif K: Carcinogenicity of
radiofrequency electromagnetic fields. Lancet Oncol 2011, 12:624–626.
Kwon et al. Environmental Health 2012, 11:69 Page 7 of 8
http://www.ehjournal.net/content/11/1/69
3. Hillert L, Berglind N, Arnetz BB, Bellander T: Prevalence of self-reported
hypersensitivity to electric or magnetic fields in a population-based
questionnaire survey. Scand J Work Environ Health 2002, 28:33–41.
4. Levallois P, Neutra R, Lee G, Hristova L: Study of self-reported
hypersensitivity to electromagnetic fields in California. Environ Health
Perspect 2002, 110(Suppl 4):619–623.
5. Schröttner J, Leitgeb N, Hillert L: Investigation of electric current
perception thresholds of different EHS groups. Bioelectromagnetics 2007,
28:208–213.
6. Mild KH, Repacholi M, van Deventer E, Ravazzani P (Eds): Proceedings of an
International Workshop on EMF Hypersensitivity: 25-27 October 2004. World
Health Organization; 2006.
7. Furubayashi T, Ushiyama A, Terao Y, Mizuno Y, Shirasawa K, Pongpaibool P,
Simba AY, Wake K, Nishikawa M, Miyawaki K, Yasuda A, Uchiyama M,
Yamas~~~a HK, Masuda H, Hirota S, Takahashi M, Okano T, Inomata-Terada S,
Sokejima S, Maruyama E, Watanabe S, Taki M, Ohkubo C, Ugawa Y: Effects
of short-term W-CDMA mobile phone base station exposure on women
with or without mobile phone related symptoms. Bioelectromagnetics
2009, 30:100–113.
8. Leitgeb N, Schrottner J: Electrosensibility and electromagnetic
hypersensitivity. Bioelectromagnetics 2003, 24:387–394.
9. Mueller CH, Krueger H, Schierz C: Project NEMESIS: perception of a 50 Hz
electric and magnetic field at low intensities (laboratory experiment).
Bioelectromagnetics 2002, 23:26–36.
10. Hietanen M, Hamalainen AM, Husman T: Hypersensitivity symptoms
associated with exposure to cellular telephones: no causal link.
Bioelectromagnetics 2002, 23:264–270.
11. Kwon MS, Koivisto M, Laine M, Hamalainen H: Perception of the
electromagnetic field emitted by a mobile phone. Bioelectromagnetics
2008, 29:154–159.
12. Eltiti S, Wallace D, Zougkou K, Russo R, Joseph S, Rasor P, Fox E:
Development and evaluation of the electromagnetic hypersensitivity
questionnaire. Bioelectromagnetics 2007, 28:137–151.
13. Croft RJ, Leung S, McKenzie RJ, Loughran SP, Iskra S, Hamblin DL, Cooper
NR: Effects of 2G and 3G mobile phones on human alpha rhythms:
Resting EEG in adolescents, young adults, and the elderly.
Bioelectromagnetics 2010, 31:434–444.
14. IEEE Standard 1528-2003: Recommended Practice for Determining the Peak
Spatial-Average Specific Absorption Rate (SAR) in the Human Head From
Wireless Communications Devices: Measurement Techniques. IEEE Standards
Coordinating Committee 34; 2003.
15. Beard BB, Kainz W, Onishi T, Iyama T, Watanabe S, Fujiwara O, Wang J, Bit-
Babik G, Faraone A, Wiart J, Christ A, Kuster N, Ae-kyoung L, Kroeze H,
Siegbahn M, Keshvari J, Abrishamkar H, Simon W, Manteuffel D, Nikoloski N:
Comparisons of computed mobile phone induced SAR in the SAM
phantom to that in anatomically correct models of the human head. IEEE
Trans Electromagn Compat 2006, 48:397–407.
16. Haarala C, Takio F, Rintee T, Laine M, Koivisto M, Revonsuo A, Hamalainen H:
Pulsed and continuous wave mobile phone exposure over left versus
right hemisphere: effects on human cognitive function.
Bioelectromagnetics 2007, 28:289–295.
17. Unterlechner M, Sauter C, Schmid G, Zeitlhofer J: No effect of an UMTS
mobile phone-like electromagnetic field of 1.97 GHz on human
attention and reaction time. Bioelectromagnetics 2008, 29:145–153.
18. Task Force of the European Society of Cardiology and the North American
Society of Pacing and Electrophysiology: Heart rate variability: standards of
measurement, physiological interpretation, and clinical use. Eur Heart J 1996,
17:354–381.
19. Lyskov E, Sandström M, Mild KH: Neurophysiological study of patients
with perceived ‘electrical hypersensitivity’. Int J Psychophysiol 2001,
42:233–241.
20. Sandström M, Lyskov E, Hörnsten R, Mild KH, Wiklund U, Rask P, Klucharev V,
Stenberg B, Bjerle P: Holter ECG monitoring in patients with perceived
electrical hypersensitivity. Int J Psychophysiol 2003, 49:227–235.
21. Parazzini M, Ravazzani P, Tognola G, Thuroczy G, Molnar FB, Sacchettini A,
Ardesi G, Mainardi LT: Electromagnetic fields produced by GSM
cellular phones and heart rate variability. Bioelectromagnetics 2007,
28:122–129.
22. Koivisto M, Haarala C, Krause CM, Revonsuo A, Laine M, Hamalainen H: GSM
phone signal does not produce subjective symptoms. Bioelectromagnetics
2001, 22:212–215.
23. Nam KC, Lee JH, Noh HW, Cha EJ, Kim NH, Kim DW: Hypersensitivity to RF
fields emitted from CDMA cellular phones: a provocation study.
Bioelectromagnetics 2009, 30:641–650.
24. Kim DW, Choi JL, Nam KC, Yang DI, Kwon MK: Origins of electromagnetic
hypersensitivity to 60 Hz magnetic fields: A provocation study.
Bioelectromagnetics 2012, 33:326–333.
25. Akselrod S, Gordon D, Ubel FA, Shannon DC, Berger AC, Cohen RJ: Power
spectrum analysis of heart rate fluctuation: a quantitative probe of beat-to-beat
cardiovascular control. Science 1981, 213:220–222.
26. Hjortskov N, Rissen D, Blangsted AK, Fallentin N, Lundberg U, Søgaard K:
The effect of mental stress on heart rate variability and blood pressure
during computer work. Eur J Appl Physiol 2004, 92:84–89.
27. Nam KC, Kwon MK, Kim DW: Effects of posture and acute sleep
deprivation on heart rate variability. Yonsei Med J 2011, 52:569–573.
28. Wilén J, Johansson A, Kalezic N, Lyskov E, Sandström M:
Psychophysiological tests and provocation of subjects with mobile
phone related symptoms. Bioelectromagnetics 2006, 27:204–214.
29. Oftedal G, Straume A, Johnsson A, Stovner LJ: Mobile phone headache: a
double blind, sham-controlled provocation study. Cephalalgia 2007,
27:447–455.
doi:10.1186/1476-069X-11-69
Cite this article as: Kwon et al.: Effects of radiation emitted by WCDMA
mobile phones on electromagnetic hypersensitive subjects.
Environmental Health 2012 11:69.
In July of 2018, this honey pot forum was sold out to an unidentified NPC sock puppet and troll organization. Most independent thinkers and writers migrated to other MGTOW forums as a result of the never-ending infighting and deliberate trouble-starting caused by members who were given "carte blanche" by the admin to do whatever they want. Before my departure, I only left a few thousand cat pics here to comfort and ridicule the feminist owners who now run this place. Their background agenda is to make MGTOW look like a club of losers in the public eye. And during the course of 2019, they actually managed to destroy almost all other MGTOW venues as well. Here is the truth about "theindependentman.org" aka "TIM", which was created as an extended workbench to further divide the community. When you register, they install a spyware zombie cookie on your browser that does all kinds of things the user does not know of: http://www.filedropper.com/essay-on-the-removal-of-malware-cookies-used-by-tim
2018-10-03 at 10:17 AM #863098
In reply to: Excavating Sand from Duchess Gargamella's Vagina
An Introduction to Long Distance Medium Wave Listening
by Steve Whitt (UK) & Paul Ormandy (New Zealand).
Updated 11th March 2006, Version 2.1
1. Introduction: What is Medium Wave DXing?
2. Who Goes There?: What sorts of signals can you hear on MW?
3. Getting Started: You don’t have to be rich – or even awake!
4. The Identification Question: Don’t make assumptions
5. DXpeditions: Giving up home comforts in pursuit of rare signals
6. Equipment: The tools of the trade
7. Antennas: To the MW DXer, a beverage isn’t something you drink!
8. Propagation: How Medium Wave signals reach your ears
9. Interference: The sounds you don’t want to hear
10. Reception Reports: How to verify your reception
11. The Digital Dimension: The Future of MW DX in a Digital World
12. The Wetter The Better: How the weather affects MW signals
13. Clubs and Pubs: Further sources of information for the MW DXer
14. Longwave Radio: Activity on the Longwave Band
This article was originally published on the Radio Netherlands Website
© Copyright Medium Wave Circle 2006
An Introduction to Long Distance Medium Wave Listening
by Steve Whitt
This is a guide to long distance listening (DXing) on the Medium Wave (MW) band. If you’ve never
tried listening to anything other than your local radio station on Medium Wave then these pages are
intended to give you an insight into the stations you could hear, and how to identify them. Also covered
are the types of receivers and aerials you should use and an introduction to signal propagation.
With the imminent arrival of digital broadcasting on Medium Wave, we look at how this will affect
the hobby. We also take a detailed look at DXpeditions, where keen listeners go to remote and electrically
quiet locations to hear the most difficult catches.
Of course once you’ve caught an interesting station you will probably want a QSL card, so we also include
information that should make this task easier. Naturally a guide like this can only scratch the surface of
MW DXing so it needs to act as a pointer towards more information and indeed you’ll find up to date listings
of suitable books, clubs and sources of specialist equipment.
MW DXing opens up another dimension not covered by most of the shortwave (SW) bands. Although a
few MW broadcasts are also available on SW the vast majority are unlikely to be heard on SW frequencies.
Indeed many countries (mostly island nations) have no SW operations and only broadcast on MW which
means that MW DXing is the only way of logging these elusive parts of the world. Furthermore most MW
stations are local in nature and thus can give an interesting insight into what is going on locally; one can
hear farm news from the US mid-west, obituaries on Jamaican radio, war reports from the former Yugoslavia,
religious salvation from many stations and adventures from ship-borne broadcasters on the high seas.
The choice is yours!
Good listening.
Next Section: Who goes There?
An Introduction to Long Distance Medium Wave Listening
by Steve Whitt
International Broadcasters
Mostly found in Europe and South East Asia, these stations are designed to target audiences in countries
distant from the transmitter and studios. Good examples are high power (500kW and upwards) transmitters
operated by the Voice of America, BBC and Voice of Russia. Over the years these stations have been
joined by a selection of religious broadcasters such as Vatican Radio, PJB in Bonaire, TWR in Monaco and
HLAZ in Korea.
Regional Broadcasters
These used to be best exemplified by the Clear Channel stations in North America. These 50000 Watt stations
had the exceptional privilege of having a MW channel virtually to themselves. This was a deliberate
move to ensure that the vast rural interior could get good radio reception at night when coverage of half the
continental USA is typical. Although some stations still operate as de facto clear channel stations, they no
longer have an automatic right to use their frequency exclusively.
Synchronised Networks
These are networks of several stations all on one frequency carrying the same programme set up specifically
to provide national coverage. Such networks need care in their design and operation to avoid problems
with carrier frequency synchronisation and variable delay in the distribution of audio to each transmitter.
The most notable examples of synchronised networks are run by the BBC in the UK for distribution of
Radio 5 Live, the news and sports network. The network operated by Virgin 1215, also in the UK, is less
successful.
Local Broadcasters
By far the majority of stations on the MW band fall into this category, characterised by the co-location of
the station, its studios and transmitter and its audience. Transmitter powers can range from as little as 1W to
100kW or more, depending on the coverage area and the degree of co-channel interference. Many local
broadcasters in North America operate only during daylight hours in order to avoid problems with
co-channel interference brought in by the night-time sky-wave.
Medium Wave Stations
The Medium Wave (MW) band is an internationally agreed band of frequencies primarily set aside for
the purpose of broadcasting. It is also known as the AM or MF band, or the Broadcast Band (BCB), in
various parts of the world. Indeed there are stations using this band on every continent (except
Antarctica); to appreciate the sheer numbers of broadcasters, just consider that in the USA alone there
are about 5300 stations on 106 channels in the AM band. For many years the MW band stopped at
approximately 1610kHz, but it has been extended to 1705kHz in North America. What sorts of signals
can you hear on the MW band?
Navigational Beacons
Radio signals are an essential means of navigation for most ships and aeroplanes, and the frequencies
between the Long Wave band and the MW band have been allocated for this purpose. More surprisingly,
a sizeable number of aeronautical beacons operate in the MW band interspersed amongst the
broadcasters. Over 100 beacons are known, of which the majority are in the former Soviet Union.
Beacons are usually low power and intended to give accurate navigational information over a 50-100
nautical mile range; however, long distance listeners (DXers) will be able to detect the repetitive Morse
code identification messages from beacons over much greater distances.
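A beacon's repetitive identifier is simply a short callsign spelled out in Morse code, so decoding one by ear is just a table lookup. A toy sketch of that lookup; the table covers only this made-up example callsign (a real decoder needs the full alphabet):

```python
# Minimal Morse table for this example only; a full table has 26+ entries.
MORSE = {"--.": "G", ".-..": "L", "...-": "V", "-.--": "Y"}

def decode(ident):
    """Decode a space-separated Morse identifier such as a beacon callsign."""
    return "".join(MORSE[sym] for sym in ident.split())

print(decode("--. .-.."))   # prints "GL"
```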
Clandestines, Pirates and Jammers
Clandestine stations are usually politically motivated broadcasters often supported by a covert government
operation. Operations are complex since stations can pretend to be what they are not and there is always the
possibility of the double bluff! These stations come and go, reflecting political changes in their ‘host’ country.
Currently the bulk of known clandestines are operating in the Middle East. Long serving examples include
National Radio of the Saharan Democratic Republic (since 1975) and the Voice of Iraqi Kurdistan
(since 1965).
Clandestine stations, as well as official broadcasters, are often the target of jamming if the government in the
area targeted by the broadcaster feels threatened by the message being carried. Jamming usually takes the
form of a powerful transmitter broadcasting irritating noise and interference on the same frequency as the
station beaming into the country. Jamming on MW has fallen dramatically since the late 1980s and now is
confined mainly to the Middle East and the Korean peninsula.
Long Wave Stations
Though the Long Wave (LW) band (148.5 – 283.5kHz) is not strictly part of the MW Band, many listeners
have common interest in the two bands. Long Wave is only used by broadcasters in Europe, North Africa,
Mongolia and the Asian part of the former Soviet Union. Elsewhere in the world these frequencies are used
mainly by navigational beacons similar to those found between 283.5 and 525kHz. One exception is in the
USA where 160-190kHz is also used by experimenters who are allowed just 1W of power into tiny antennas!
For more information about Long Wave, check out our page of Longwave Links.
Most signals described so far are more or less permanent features in the MW landscape. However, there
are several more transient types of signals which can be heard, the most prominent of which are pirates
and clandestine operations. Pirates are stations operating openly without official licences from any
country. They range from one-time hobby pirates operating from someone's bedroom to fully financed
operations broadcasting from ships at sea. The heyday of the seaborne pirate seems to be gone since the
likes of Radio Caroline, Voice of Peace and Radio New York International have passed into the history
books. Just one radio ship is currently operational, in the Mediterranean.
Next Section: Getting Started
GETTING STARTED
by Steve Whitt
Let’s take a brief look at what’s needed to become a MW DXer and how you get started. Firstly, it is important
to realise that the MW DXer can start listening with very cheap and simple equipment; any domestic
radio will tune the MW band and it’s quite easy to hear 50 – 100 different stations at night using just an internal
antenna. However, it is probably preferable to use a better quality domestic radio, or a good car radio
to get started. With this type of equipment, stations from up to 1500km away will be regularly heard at
night. If radio conditions are favourable and you listen at the right time, reception of some stations over
3000km away should be possible. In this way you can have a go at DXing the MW band before committing
yourself to any more sophisticated (or expensive) equipment.
On the other hand, if you are already an active shortwave listener, all that is needed to get going on MW is
a change of waveband. Indeed many SW listeners tend to overlook the fact that their radios can usually
tune the MW band and that their outdoor antennas are often effective in picking up distant MW signals. For
the SW listener who has grown tired of the megawatt international stations, a fresh challenge can be found
on the MW band.
Round The Clock
It is possible to DX on the MW band 24hrs a day (provided you don’t need to sleep!) but the band has two
distinctly different ‘personalities’ according to the time of day. During daylight hours MW radio signals are
absorbed in the lower layers of the ionosphere and only ‘ground-wave’ signals propagate; these signals radiate
away from the transmitter rather like ripples in a pond and allow reception at distances up to about 500
km. Daytime is a good time to listen for low power local radio stations since very few distant signals are
audible and therefore interference is at a minimum.
At night the ionosphere tends to reflect, rather than absorb, MW signals and thus energy
radiated upwards from a transmitter is refracted back down to earth at some point
far away from the transmitter. These are known as skywave signals. It is quite possible
for night time signals to undergo multiple hops with alternate reflections occurring in
the ionosphere and off the earth’s surface. This mechanism allows reception to take
place many thousands of km away from a transmitter. For example Radio Globo in Rio
de Janeiro, Brazil is regularly heard in Europe, its signal having to cross 9500km of ocean on the way. You
will of course notice that night time sky-wave propagation fills your radio dial up with hundreds of powerful
signals, so how is it possible to hear the weak DX signals?
Over the years international broadcasting organisations have agreed a band plan arrangement on the MW
band which requires all stations in an area to operate on fixed frequency channels. This has been arranged
to maximise the number of broadcasters who can operate and to minimise the degree of interference affecting
the listener. Fortunately for the DXer, international agreement is not perfect and as a result different
MW band plans are operated in different continents; most European, African and Asian stations use channels
that are exact multiples of 9kHz, whereas in the Americas channels are assigned as multiples of
10kHz. This means that, in Europe for instance, by tuning between the 9kHz channels, reception of
trans-Atlantic stations becomes possible. For example;
● 1008kHz (112 x 9kHz) Radio 10 Gold, Flevo, Holland
● 1010kHz (101 x 10kHz) WINS New York, USA
● 1017kHz (113 x 9kHz) SWF Baden Baden, Germany
This particular example also illustrates the value of knowing a station's timetable. Although reception of
WINS is technically possible as soon as a path of darkness exists between New York and Europe, the
Dutch station on the adjacent 1008kHz channel is a pretty powerful signal and will cause interference.
However, it signs off for the night at the unusually early time of 2230 UTC, and knowing this it is
possible to tune in a virtually interference-free signal from WINS before midnight.
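The 9kHz versus 10kHz rasters can be worked out mechanically: the two grids coincide only every 90kHz, so most American channels fall between European ones. A short sketch of that arithmetic; the band limits of 531-1602kHz and 530-1700kHz are the usual allocations, assumed here:

```python
eu = {9 * n for n in range(59, 179)}    # European channels: 531 ... 1602 kHz
na = {10 * n for n in range(53, 171)}   # American channels: 530 ... 1700 kHz

# "Split" channels: American frequencies with no co-channel European station.
splits = sorted(na - eu)

# The example from the text: WINS on 1010 kHz sits between 1008 and 1017 kHz.
print(1010 in splits, 1008 in eu, 1017 in eu)   # prints "True True True"
```

Frequencies like 540kHz (9 x 60 and 10 x 54) land on both rasters and so offer no split-channel advantage.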
Stations on split frequencies are usually the easiest to hear over long distances since they suffer less
co-channel interference. Obviously in any part of the world reception conditions vary and different stations are
heard. In order to gauge what you are likely to hear or what you are going to have real difficulty catching it
is worth joining a local club that includes MW in its remit; even better would be joining a specialist MW
club. You will have access to lots of information on MW listening and in particular you’ll be able to see
what other enthusiasts are hearing.
DXing in Your Sleep!
The easiest way to spot MW DXers is that they fall asleep during the day. Since the fundamental
characteristics of the ionosphere favour long-distance MW radio reception at night, this hobby
is the province of the shift worker, the insomniac or the outright fanatic. There is
one solution, and that is to DX in your sleep!
All you need, apart from the standard antenna and receiver, is a tape recorder, a timer
and a fairly methodical approach to listening. Neither the tape recorder nor the timer should
be expensive, and indeed I don't know any serious DXer (SW or MW) who doesn't already
use a recorder. Depending on your selection of equipment there are two ways of DXing in your sleep.
If you have an ordinary radio and a separate cassette recorder you’ll need to buy a mains timer unit (get one
with a digital display since these can be set precisely to the minute) which will cost about £15 – £20
(US$20-40). With such a timer connected in series with the mains lead of the recorder you are able to make
a recording at any time of the day or night when you’re not around. Just make sure that your radio is tuned
to the frequency of the station you want to hear. Unfortunately such remote control is trickier for really
tough DXing, since in these circumstance you will want to be making continuous adjustments to your receiver
or antennas to improve reception. However, for less marginal conditions this technique is very valuable,
particularly for night-after-night monitoring of one frequency. It would be impossible for me to be awake
every night, and I would soon be put off by the DX-less nights. Indeed, taping for an hour each night
allows me to quickly find the nights that are particularly good for DX (just 5-10%, say) and then
examine the tape more closely for DX signals. In this way I've heard several North American and a
couple of Latin American stations that I would not otherwise have heard.
If you have a receiver with a built in programmable timer (e.g. SONY 2001D) you do not necessarily need
a separate mains timer. It might be possible to activate the cassette recorder from the radio or if this is not
directly practical an external unit called a VOX or voice activated switch might be the answer. This piece
of equipment connects in the audio lead from radio to cassette and detects when audio starts i.e. when the
internal timer has turned on the radio. It then switches on the recorder for as long as sound is present. So
if you have the equipment but have not tried this before, why not give it a go?
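The VOX behaviour described above, switching the recorder on while audio is present, comes down to a threshold plus a short hold time so the switch doesn't chatter between words. A software sketch of that logic; the threshold and hang values are arbitrary illustrative numbers:

```python
def vox_gate(levels, threshold=0.1, hang=3):
    """Return True/False per audio sample: recorder is on while the level
    exceeds the threshold, and is held on for `hang` further quiet samples."""
    on, remaining, out = False, 0, []
    for level in levels:
        if level > threshold:
            on, remaining = True, hang   # audio present: (re)arm the hold timer
        elif on:
            remaining -= 1               # quiet sample: count down the hold
            if remaining <= 0:
                on = False
        out.append(on)
    return out

print(vox_gate([0.5, 0.0, 0.0, 0.0, 0.0]))   # prints "[True, True, True, False, False]"
```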
Next Section: The Identification Question
MW DXing: The Identification Question
by Steve Whitt
If you tune in to your local radio station, it soon reveals its identity through a number
of clues: its strength, frequency, programme style and, most importantly, its on-air ID
(callsign, jingle etc.), which is easily heard since there is no interference. We now need
to ask what happens when you are trying to decipher a weak, fading signal from a
distant station that may well be using an unfamiliar language. The fundamental
questions are: at what point is a station identified, and how should a station that is not
fully identified be described?
The process of identifying stations should be viewed as a broad spectrum of probability. At one end is the
completely unidentified station, an example of which is the open or blank carrier with no modulation – although
you may have quite a good idea about its identity such a signal really is unidentified. At the other
end of the spectrum is the positively (100% probability) identified (e.g. ‘…the powerful missionary outreach
station, the Atlantic Beacon 50000 Watts at 15-70, broadcasting from the beautiful Turks and Caicos Islands
in the West Indies…’ leaves little doubt about this station’s identity!).
Many DX stations fall somewhere between these two extremes; for example you may hear only part of a
callsign perhaps in a poorly understood language, or maybe in the midst of heavy interference or jamming.
Or perhaps no identification is heard but certain characteristics of the signal or programme content point in
the direction of one particular station. Generally speaking, the longer you listen to a station, on one date or
over many days, the more clues there are to help achieve successful identification. If you can’t ID a station
keep listening!
The factors which contribute to the identification of a station are almost without limit. Among them are
time of reception, frequency, quality of signal, and programming style. The latter is usually one of the most
important clues since valuable information can be gleaned from the languages used and music played, as
well as from advertising, weather reports, time checks and so on. It should be appreciated that one’s ability
to identify a station depends mostly on the ability to interpret what is being heard. And, rather like a detective
investigating a crime, it takes experience as a DXer to reach a correct conclusion based upon the limited
clues available. Even the most experienced DXer will not be able to identify everything heard, so there
needs to be some way of indicating how certain (or uncertain) a particular identification is. Hence the following
shorthand expressions have developed as a solution to this problem.
Identified
Implies that the listener is 100% certain of a station’s identity since a full announcement
by the station was clearly heard.
Presumed
When a station is listed as presumed it means that the listener has had sufficient clues to the station’s identity
to be almost (90-99% probability) certain of its true identity. About all that is missing is a formal ID announcement.
Tentative
This term usually describes a situation where the listener is fairly certain that a particular station is being
heard – indeed that the probability is substantially greater than 50%, typically from 75%-90%. It is important,
however, to note that a tentative logging is not just a pure guess since there still have to be a number of
clues pointing in the right direction.
Unidentified
Anything short of tentative is called ‘unidentified’ and the DXer should resist the temptation to classify loggings
as tentative if there is insufficient evidence. When there is any doubt about a logging, it is wise to err
on the side of caution and list it as unidentified; however it may be worth indicating which station you think
it might have been if you have an idea.
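The four-step scale above maps directly onto the probability bands given in the text, which a logging program could encode as follows (the thresholds are exactly as stated; the function name is my own):

```python
def id_status(p):
    """Map an estimated identification probability (0.0-1.0) to the DX logging
    shorthand described above: identified / presumed / tentative / unidentified."""
    if p >= 1.0:
        return "identified"    # full announcement clearly heard
    if p >= 0.90:
        return "presumed"      # all that is missing is a formal ID
    if p >= 0.75:
        return "tentative"     # substantial clues, well above a pure guess
    return "unidentified"      # anything short of tentative

print(id_status(0.95))   # prints "presumed"
```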
At this point a word of caution is probably in order with regard to station listings. All DXers use lists of
one sort or another to help them in their hobby (e.g. WRTH, club bulletins etc.) but it is dangerous to rely
on a list (even the most up to date) as the sole means of identifying a station. That is not to say that lists
should not be part of a DXer's 'tools of the trade', but just that caution should be exercised in their use.
Lists are invaluable in helping narrow down the range of possibilities when it comes to identifying a mystery
station; they can also guide a listener to the right place on the dial to possibly hear a particular station, but
they cannot actually identify a station – only the station itself can do that.
Over-reliance on lists and a bit of related 'wishful thinking' results in the practice known as 'list logging',
which can sometimes be observed as anomalous loggings reported in the DX logs of some magazines and
club bulletins.
Next Section: DXpeditions
Medium Wave DXpeditions
by Paul Ormandy
DXpeditions are the answer for urban DXers trapped in a noisy environment,
lacking the real estate for erecting long aerials and yearning for a DX fix. The
solution is to find an electrically quiet environment (preferably near the sea)
where aerials can be installed without threatening life, human or animal. Some
DXers go to extreme lengths to meet these requirements, for example the
Scandinavians who frequent Lemmenjoki in the Finnish Arctic Circle and suffer
extremes of weather.
Because DXpeditions are often in remote, electrically quiet conditions, a mains power supply may not be
available, with the nearest power pole some distance away (hence the need for battery-operated receivers and
tape recorders). Other non-existent luxuries like hot water, telephone (& Internet) connections and nearby
shops make planning important. In such circumstances the DXer's ingenuity is challenged as everything from
cooking meals to charging batteries to staying clean and staying in touch becomes a consideration.
Sleep deprivation, bad weather, lengthy journeys and all the other ‘negatives’ are usually more than offset by
the quality of the reception, the chance to experiment with aerials, receivers, splitters etc and above all by the
comradeships formed. Days are spent collecting firewood, the odd aerial maintenance chore, recounting the
previous night’s experiences, tales of ‘the one that got away’ or sleep (MW DX is best at night so the candle
often gets burned at both ends and in the middle too!).
And a successful DXpedition usually doesn't end when you've arrived home. There will be numerous reception
reports to write and magazine articles to prepare (having a lap-top on the DXpedition is a great idea!).
Some famous DX-sites and links to articles follow:
● Lemmenjoki, Finland
● Grayland, Washington, USA
● Cappahayden, Newfoundland, Canada
● Sheigra, Scotland
● Coorong, Australia
● Waianakarua, New Zealand
Choosing A Site
There are numerous factors to consider when looking for a site. Ideally, it should be in a remote area, away
from nearby power lines, miles from the nearest MW/LW transmitter, close to the sea, having a comfortable
place to stay and large enough for long aerials.
Remote areas usually mean farmland, ideally flat ground with the odd tree to assist in erecting aerials. The presence
of fences can be handy, though note that even with all-wooden posts, the chemical treatments used
to prevent the posts rotting away (e.g. tanalising) leave them conductive (albeit with a relatively high
resistance, but a concern nonetheless). If you're going to use a fence, either insulate the wire wherever it
touches the posts, or run a separate wire along the top.
In New Zealand, the preponderance of electric fence units can ruin an otherwise
promising site. A quick check with an AM radio, listening for the tell-tale clicking,
is a good test. Electric fences are usually at their noisiest during a dry period; a
good rainfall lowers the noise level by improving their ground conductivity
and washing insulators that track high voltage across dusty paths. Electric fences
are generally only found on stock farms, so other rural land uses like extensive
horticulture may offer a lower noise level than a dairying area.
Overhead power lines, particularly high voltage varieties should also be avoided. If the choice is between a
quiet, battery-powered DXpedition and an ‘all the comforts of home’ mains-run version, I’d take the former
any day! Noisy power lines also benefit from a good rain to clean insulators etc. Underground power supplies
are a good deal quieter as there is no exposure to the elements to cause noise (and they’re fairly deep as those
poles are quite long!).
And the further away from MW & LW transmitters the better as well. Even with directional aerials, locals will
be a pest though you’re invariably going to be better off than DXing from home. Other possible sources of RF
interference like non-directional beacons (marine and aeronautical NDBs), GPS stations and the like should be
checked too.
A site close to the sea is also a benefit. Absorption of signals by ground attenuation is more pronounced the
further inland you go. That said, my Beverage site at Waianakarua is 7km from the coast and reception there
is very good. I'd tend to exclude anything more than 10 – 15km from the sea.
When To Go
My favourite months for DXpeditions are around the autumnal and vernal equinoxes (late March and
late September respectively in the Southern Hemisphere, the opposite in the Northern Hemisphere). Reception
from near-polar paths is best around these times, though good signals on mid-latitude paths also occur in January/
February and October/November.
If you’re into long-term planning, the lowest point of the solar cycle is also the time to chase signals. That
means 2006 through 2008 will be prime DXpedition years and I’ll certainly be looking forward to then!
Medium Wave DXpeditions: Equipment
by Paul Ormandy
RECEIVERS
O.K, so it’s a pretty essential piece of equipment, but what do you need to make all
the effort you’ve put into antennas, finding a location, extra equipment, time off
work and away from loved ones worth the while? A receiver that will maximise your
chances of hearing those weak, distant signals.
There are numerous figures and specifications provided by manufacturers and test panels which will point you
in the right direction towards a DX dream-machine. If I were to purchase a receiver for the prime purpose of
MW DX, the factors I'd be most interested in are:
Selectivity
In the crowded MW band, where stations are separated by as little as 1kHz, the ability of the receiver to discriminate
between adjacent stations and to provide loggable audio is essential. What's more, the width of the
filter's skirt is as important as its quoted width, and is often determined by the material the filter is manufactured
from. For example, ceramic filters have very wide skirts, allowing interference to ingress, whereas mechanical
or crystal filters provide very sharp skirts. Look for a receiver with a filter of around 2.5kHz at -6dB
and a skirt width of less than 5kHz at -60dB (the narrower the -60dB figure in relation to the -6dB figure the better –
don't worry too much what that means, just use it as a point of comparison between receivers).
If you’re tossing up options at purchase time, between a VHF converter, external speaker or a sharper filter, take
the latter!
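To compare candidate receivers on paper, the two bandwidth figures can be reduced to a single "shape factor". A minimal sketch (the filter figures below are illustrative examples, not specifications for any particular set):

```python
# Shape factor: the filter's -60 dB bandwidth divided by its -6 dB bandwidth.
# Closer to 1.0 means steeper skirts and better adjacent-channel rejection.

def shape_factor(bw_6db_khz: float, bw_60db_khz: float) -> float:
    """Return the -60 dB / -6 dB bandwidth ratio (lower is better)."""
    return bw_60db_khz / bw_6db_khz

# A crystal/mechanical filter meeting the guideline in the text:
good = shape_factor(2.5, 5.0)    # 2.0 - sharp skirts
# A typical wide-skirted ceramic filter (illustrative numbers):
poor = shape_factor(2.5, 12.0)   # 4.8 - lets adjacent channels in

print(f"crystal/mechanical: {good:.1f}, ceramic: {poor:.1f}")
```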
Sensitivity
Some receivers aren’t designed to apply all of their features to MW and sensitivity is often affected by internal
attenuation to prevent strong local signals over-loading the receiver. In many receivers the attenuation can be
readily set to zero by a front-panel control, or a simple software fix. In others (e.g. the Kenwood R-5000) it's a soldering-iron
job which, given the complexity of modern receivers, not all will be keen to tackle.
Preamps
Generally useful devices that may be handy for giving a weak signal that extra nudge. Pre-amps that do not degrade
the signal-to-noise ratio are extremely useful, though often they have been disabled on MW or require a
user software fix to be enabled. Check to see if they will work on the MW band or can be adapted to do so.
Memories
Some may consider having a receiver with 400 memories as more than sufficient, though for the MW DXer, the
ability to program in every MW DX channel in the best mode, with the optimum filter setting etc. is a real bonus.
This allows swift tuning between channels which in a strong opening is very handy for analysing the best
frequency(s) to monitor.
Noise Floor
And there’s little point in erecting long antennas, spending heaps on coax, preamplifiers, tuners, baluns etc, if
the weak signals you’re chasing can’t be heard under the receivers internal noise! A simple test to see how noisy
a receiver is, remove the antenna and turn the volume right up – should be very quiet, a low-level background
noise, ideally the noise level would be near zero and you would hardly notice the volume had been increased.
I’m fortunate to have an ultra-quiet 25 year old Drake SPR-4 that has allowed reception of weak signals at loggable
levels, which have been buried on colleagues’ receivers.
Hash
Modern receivers, with all their synthesisers, microprocessors and fluorescent displays, can produce a fair
amount of internal noise. This can affect the use of indoor loop antennas near the receiver, as they'll pick up the
noise radiated from the electronics. Another simple test is to hold a transistor radio about 50 cm from the set
and see how much hash it picks up.
Operating Voltage/Current Consumption
If battery operation is required (for example when running mobile or on DXpeditions), the amount of power
drawn by a receiver will dictate how long a battery will last (and whether you can start the car after a night's DX!).
The Drake R-8A draws 2 amps when running (i.e. 30 hours' operation from a fully charged car battery), though the
drop in voltage will see the set turn itself off well before the 30 hours are up, and it draws 1 amp even when switched off at the
front panel. The consequences of high battery drain mean that you'll need to be prepared for long stints at the
dials by bringing extra batteries or charging between uses.
Most receivers are designed to work off 12 or 13.8 volts DC though the AOR 7030+ prefers 15 volts for optimum
performance, although it functions very well at 12 volts.
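A battery's endurance can be estimated by dividing its amp-hour capacity by the receiver's current draw, then derating for the voltage sag that shuts the set down early. A rough sketch (the 0.7 derating factor is my assumption, not a measured figure):

```python
# Rough battery endurance: capacity (Ah) / current draw (A), derated
# because the voltage sags below the receiver's cut-off well before
# the battery is fully discharged.

def runtime_hours(capacity_ah: float, draw_amps: float, derate: float = 0.7) -> float:
    """Usable hours from a battery, with a pessimistic derating factor."""
    return capacity_ah / draw_amps * derate

print(runtime_hours(60, 2.0, derate=1.0))  # naive figure: 30.0 h
print(runtime_hours(60, 2.0))              # ~21 h realistically usable
```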
Antenna Switches
Most receivers come equipped with one 50-ohm and one 600-ohm antenna input with front-panel
switching. I'd prefer to see at least three 50-ohm inputs to allow ready access to a range of antennas.
Strangely, the AOR7030+ doesn’t have an antenna switch so an external device is necessary.
Best Receivers
If you’re looking for a new receiver, there are a number of models currently available that are hailed by DXers
as good MW DX machines. Sadly I haven't personally used all of these sets, apart from the Drake SPR-4 (which
I've owned for 15 years), the R-8, R-8A and AOR 7030+, though the general
consensus would be to seriously consider the following:
● Drake R8-B
● Japan Radio NRD-545
● AOR 7030+
For ‘pre-loved’ units, look for:
● Drake SPR-4, R4B, or R4C
● Icom ICR-71
● Drake R8
● Drake R8A
● Japan Radio NRD-535
● Watkins Johnson HF-1000 (if price is no object!)
And if mains operation is possible
● Collins R390A
● Racal RA-17 or RA-117
Another option worth exploring with many radios to make them really top performing sets is to purchase one
modified by Sherwood Engineering. For not a lot more than the purchase price, Sherwood can ‘hot-rod’ several
of the above.
Headphones
There are several types of headphone worth considering. Headphones manufactured with
the hi-fi market in mind reproduce high levels of bass and treble. They give excellent frequency
response and provide pleasant listening to high-quality music. Great for music and
often suitable for DXing, though there is another option.
A MW DXer generally isn't too interested in high fidelity; the frequencies covered by the
spoken word are more crucial, and there are several headphones that emphasise that frequency
range. These are manufactured for the amateur radio and DX fraternity by the likes
of Kenwood and Icom. They reduce the amount of bass, and therefore rumble, and reduce the amount of treble,
which softens heterodyne whistles and reduces hiss.
If you are DXing with others, and particularly on a DXpedition, you’ll need a pair of enclosed muff phones to
prevent what you’re listening to annoying others. There are noise-cancelling headphones which form an electronic
noise-barrier to prevent external noise affecting your listening. Sony is one manufacturer of these surprisingly
effective devices.
Audio Recorders
It’s also important that you have a good tape recorder to capture as many of those often fleeting moments of reception
as possible. Some DXers leave the recorder running non-stop, others hit the record button as soon as
something interesting pops up.
And recording media now comes in many forms and prices.
Audio Cassette Recorder – No doubt the cheapest option, though a recorder
with external line-in (or microphone socket), tape counter and external DC power connection
is handy. These range from el-cheapo to the broadcast-quality versions used by
media journalists.
Hi-Fi Video – Definitely a mains-voltage option and ideal for leaving running, especially
with long-play modes which in some models will let you record audio for up to 12
hours. And it's also high-quality audio.
DAT – Digital Audio Tape offers one of the highest-quality portable tape-based recorders.
The tapes and recorders are considerably dearer than audio cassettes, but this is offset by the
wide frequency response and dynamic range.
Mini-Disc – Another portable solution using a small CD, and also expensive. What's more,
some mini-disc models radiate a fair amount of hash and aren't suitable in all situations.
MP3 – The new kid on the block. No tape or disc is required as the audio is recorded onto a
computer chip. These are still in their infancy and the amount of recording time available
is relatively short, though I'd expect these units to become one of the tools of the DX trade before
too long.
Personal Computer
In the last few years increasingly sophisticated software has appeared that can turn your PC into a powerful audio
recorder and processor. Astonishingly powerful software is available for under $50US that can record and
playback, that can speed up and slow down the audio, and that incorporates graphic equalisers and spectrum
monitors. Many listeners now use computers because you can easily programme recording times (10 hours over
night is easy!), and you can easily check the audio at any time with just a few mouse clicks – no
need to wait for a tape to rewind to an imprecise location.
It is also extremely easy to copy, cut and paste sounds just like text in a word processor. So it is very easy to
make a short audio clip to e-mail to other DXers for a second opinion or for translation, or even to include with
an e-mailed reception report.
Two software packages are well established and widely used by DXers:
Total Recorder by High Criteria http://www.highcriteria.com
RecAll Pro by Sagebrush http://www.sagebrush.com
The only drawbacks with this technology are that many PCs cause radio interference to reception and that
mains power is usually essential. It is possible to reduce interference, but it takes careful choice of equipment and
wiring and some effort. A battery-powered laptop computer can reduce both the interference and
the need for mains power.
A little trick when using stereo recorders. Feed the audio from your main receiver into one channel and audio
from a second receiver tuned to WWV into the other. That way you’ll be able to have a time-base accessible by
playing the tape back through an amplifier with a balance control. (Thanks to Andy Gardner!)
ACCESSORIES
There are other items that the MW DXer may find handy in the pursuit of weak signals, particularly on DXpeditions
when accompanied by colleagues.
When using modern sets with low-impedance inputs connected to high-impedance antennas (e.g. Beverages),
a balun is required to maximise signal transfer between the two. Baluns are best installed some
distance from the receivers and fed with coax to prevent interaction. It is also a good idea to keep antennas
separate as they approach the listening site (e.g. not anchored on the same pole) for the same
reason.
When you’re fortunate enough to have a number of antennas to choose from, an antenna switch will be required.
Ideally they should be metal -encased and offer high isolation between antennas to prevent interaction.
On DXpeditions when DXers are sharing an antenna, a splitter is required to provide equal signal levels to all
sets. Standard splitters will cause a small signal loss so amplified versions are another option.
And when you’ve finally found that ultra-quiet environment, a preamplifier may help you drag a signal out of the
mud. It is important that the amplifier has a very low level of internal noise otherwise it will also bury the signal.
Make sure the preamplifier has excellent signal-to-noise figures. A good
unit would have a gain of 10 – 20dB and a noise figure of around 2dB. Also
make sure it works down to 0.5MHz (500kHz), as a lot of preamps are designed
for use above 1.8MHz.
You’ll find most of these available from places like Universal Radio, Advanced Receiver, Stridsberg Engineering
or Kiwa all in the USA, Wellbrook in the United Kingdom or Paul Ormandy’s Equinox balun. Home –
brewers may wish to check Mark Connelly’s WA1ION DX Labs page
BATTERIES
If a mains-free DXpedition is planned, consideration to the type of battery is important.
For short duration trips, a standard 60 amp-hour car battery may suffice, though for longer
stays or with multiple receivers sharing the same power source, a deep-cycle battery is
highly recommended.
These units are often used for back-up power supplies or where reliability is crucial because
of their ability to provide a constant voltage under heavy current drain for a considerable period of time.
Whilst more expensive, they are undoubtedly good value given their suitability for running 3+ sets at once.
A rating of at least 85 amp-hours would be the minimum, with 120 amp-hours suggested.
LIGHTING
If you’re operating on 12 volts, a lighting system that provides maximum light output and minimum battery
drain is essential. I’ve used standard incandescent (high drain, poor light), fluorescents (low drain, good light
but dreadful RF interference) and gas light (no battery drain, excellent light but very noisy). The answers to
my dilemma take the form of 12 volt halogen lights. A 20-watt unit over head gives excellent light and less
than 2-amp-hour drain though if you’re going to have a light mounted a short distance over-head, go for a 10
watt unit as the light output is quite brilliant.
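The current a 12-volt lamp draws is simply its power divided by the supply voltage, which is where the halogen figures come from. A quick check of the arithmetic:

```python
# Current drawn by a 12 V lamp: I = P / V (treating it as a simple
# resistive load). This is why a 20 W halogen sits comfortably on a
# deep-cycle battery while still giving good light.

def lamp_current_amps(watts: float, volts: float = 12.0) -> float:
    """Current in amps drawn by a lamp of the given power at the given voltage."""
    return watts / volts

print(f"20 W halogen: {lamp_current_amps(20):.2f} A")  # 1.67 A
print(f"10 W halogen: {lamp_current_amps(10):.2f} A")  # 0.83 A
```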
An Introduction to Long Distance Medium Wave Listening: Antennas
by Steve Whitt
DIY MW Loop Antenna
This is the commonest ‘specialist’ antenna used by listeners to MW frequencies because it is usable indoors, readily
home-built and low cost. The loop possesses a very predictable directional receiving pattern which allows signals
from different transmitter locations to be selected by carefully rotating the antenna about its vertical axis. In
addition most loops are designed to be resonant on MW frequencies and therefore are tuneable. This is often a
very valuable introduction of selectivity before signals even reach the receiver. A good loop tuned to 1MHz will
easily reject most signals more than + or – 50kHz away from the desired frequency, thus virtually eliminating any
images or 2nd order intermodulation products generated within the receiver.
There are numerous designs for loops. Some are tuned, others are broadband; some are compact indoor models,
others are massive outdoor devices, but the commonest design is based on a 1m square wooden frame on which 7
turns of wire have been wound. This inductance is then parallel tuned by a variable capacitor with a maximum
value of about 400-500pF. The loop is mounted on a base supported by a wooden broom handle which acts as its
axis of rotation. For full constructional details of loops it is best to seek out some of the specialist publications
listed in Section 5 since there is not enough room here to cover the construction in detail.
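As a rough guide to what such a loop will tune, the resonant frequency follows f = 1/(2π√(LC)). The sketch below assumes an inductance of about 200µH for the 7-turn 1m frame; that figure is a typical order of magnitude, not a measured value:

```python
import math

# Resonant frequency of a parallel-tuned loop: f = 1 / (2*pi*sqrt(L*C)).
# L_LOOP is an assumed inductance for a 1 m square 7-turn frame loop.

def resonant_freq_khz(l_henry: float, c_farad: float) -> float:
    """Resonant frequency in kHz for a given inductance and capacitance."""
    return 1 / (2 * math.pi * math.sqrt(l_henry * c_farad)) / 1e3

L_LOOP = 200e-6  # assumed inductance, henries

# Sweeping the variable capacitor from ~30 pF to its 500 pF maximum:
print(f"{resonant_freq_khz(L_LOOP, 500e-12):.0f} kHz")  # ~503 kHz, bottom of MW
print(f"{resonant_freq_khz(L_LOOP, 30e-12):.0f} kHz")   # ~2055 kHz, top of MW
```

So a single winding with a 400-500pF variable capacitor plausibly covers the whole MW band, which matches the popularity of this design.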
Figure 1 illustrates the all important directional pattern of the loop, which clearly exhibits two symmetrical nulls
and peaks which can be directed to undesired or desired stations by physical rotation. In this way the loop is very
capable of separating two or more co-frequency stations provided the signals are not arriving from the same (or
directly opposite) directions. If the direction of arrival of two signals is separated by 60° – 120° then the loop
really excels.
Figure 1: Directional properties of a Loop Antenna
DIY Beverage Antennas
The second key antenna, and probably the best, for MW listeners is the Beverage. This antenna is one of the
simplest and oldest designs around, having been developed by Harold Beverage in the 1920s. In fact an antenna
of this type, 12km long, was used by Beverage in 1922 for the reception in the USA of some of the first low
frequency (approx 1.2 MHz) transmissions from Europe.
For a Beverage to be reasonably effective it needs to be between 1 and 10 wavelengths long, which on the MW
band implies lengths between 200 and 5000 metres. The longer it is relative to the wavelength of interest, the
more directional the antenna becomes. Remember that a Beverage has its maximum signal pick-up along its
length and that the antenna should point along the great circle path towards the desired reception area (Figure
2). The Beverage is even cheaper than the loop to build. It is a broadband antenna (i.e. untuned) and so is effective
over the whole MW band, but by virtue of its size it always points in one direction. This means that its reception
nulls cannot be easily targeted on unwanted signals. Professional receiving installations (with bigger budgets
than DXers') often construct whole arrays of Beverage antennas radiating out like spokes on a wheel from
the listening site. The radio monitor is then able to choose the antenna which gives the best quality reception.
Figure 2: Directional properties of a Beverage Antenna
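The 1-to-10-wavelength rule is easy to turn into concrete lengths, since the wavelength in metres is 300,000 divided by the frequency in kHz. A small sketch:

```python
# A Beverage should be 1 to 10 wavelengths long at the frequency of
# interest; wavelength (m) = 300,000 / f (kHz), from c = 3e8 m/s.

def wavelength_m(freq_khz: float) -> float:
    """Free-space wavelength in metres for a frequency in kHz."""
    return 300_000 / freq_khz

def beverage_length_range_m(freq_khz: float) -> tuple[float, float]:
    """(min, max) recommended Beverage length in metres: 1 to 10 wavelengths."""
    wl = wavelength_m(freq_khz)
    return wl, 10 * wl

# At the bottom and top of the MW band:
print(beverage_length_range_m(531))   # ~(565, 5650) m
print(beverage_length_range_m(1602))  # ~(187, 1873) m
```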
Over the last half century considerable research into the Beverage has been conducted and detailed design rules
exist but for the radio enthusiast this is one antenna design that is very tolerant of design imperfections. Here’s
what you need to do to put your own Beverage antenna together and to have a go:
Location
Unfortunately the Beverage is a long antenna, but it doesn't really need much space and it can often be a 'secret'
antenna erected unobtrusively. Ideally you need a large field or woodland at the back of your house, but
a long straight fence can be used to support the wire. If you have lots of space you have the freedom to choose
the beam direction, but if you are just taking advantage of local geography then you may have to accept the limitations
imposed on you. If you lack any significant space at home, a good alternative is to find some open land
nearby.
Wire
Hard-drawn copper wire is best for a permanent antenna since it won't break, but it is not cheap and is quite
heavy. I tend to use 7/0.2mm multi-stranded insulated wire for temporary DXpedition-type antennas. A continuous
barbed-wire fence (galvanised steel) is OK too, as long as it's not too rusty to make good electrical connections.
If you want to put up a cheap and disguised antenna, use thin transformer wire (e.g. 40 gauge); you can
lay this along a hedgerow. Whatever wire you choose, you'll need to be prepared for breaks and repairs;
'chocolate block' connectors are very useful accessories when working with Beverages.
Supports
Gardener's-style bamboo canes (4-6 feet tall) are cheap and good for the job. Just cut a slit at one end with a penknife
or junior hacksaw to hold the wire. Lightweight wire (e.g. 7/0.2mm) needs a support every 15 metres or so.
If a straight hedgerow or fence runs in the desired direction you can dispense with the bamboo canes; likewise
it is possible to support the wire in trees or bushes as long as a reasonably constant height (between 4 and 10 feet)
above ground can be maintained.
Earth stake and terminating resistor
If a Beverage is operated just as a long wire it will be directional, but will pick up signals from both ends of the
wire. If, however, the end of the wire furthest from the receiver and nearest the target reception area is terminated with
a non-inductive (e.g. carbon) resistor equal in value to the antenna's characteristic impedance (usually about 500
– 600 ohms), the antenna becomes unidirectional. For best results it's a good idea to experiment with the resistor
value, but even a fixed resistor of, say, 560 ohms connected between the antenna and the ground stake will do
the job. One good way to produce the terminating resistor is to solder in series a dozen 1-watt 47-ohm resistors,
which are then encased in either heat-shrink plastic tubing or self-amalgamating tape. The use of many low-value
resistors makes the whole combination less prone to moisture affecting the total resistance value. Do not
forget that for best results a good earth stake is needed at both ends of the antenna: one for the terminating resistor
and the other for the receiver.
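Since series resistances simply add, the dozen-resistor termination works out very close to the antenna's characteristic impedance:

```python
# Resistors in series simply add, so a string of a dozen 1 W 47-ohm
# carbon resistors makes a weatherproof termination close to a
# Beverage's typical 500-600 ohm characteristic impedance.

def series_resistance(values_ohms: list[float]) -> float:
    """Total resistance of resistors connected in series."""
    return sum(values_ohms)

termination = series_resistance([47.0] * 12)
print(f"{termination:.0f} ohms")  # 564 ohms
```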
Receiver
If you aren’t operating from a permanent home installation, or planning a full scale DX-pedition from, for example,
a farmhouse, you’ll need portable equipment. One good portable receiver that performs very well on the
MW band is the Sony ICF2001D. This radio can run off its internal batteries but alternatively a communications
receiver that runs off 12V could be used. To make the most of the 2001D (and many other receivers) it is usually
essential to place an antenna tuning unit between the Beverage and the radio to avoid overload problems
caused by strong local signals. Just imagine the simplicity of driving up to your antenna, parking in a lay-by off
the road, and then all that you need to do is pass the antenna wire through the car window, connect it to the receiver
and you are ready to go! With a bit of ingenuity you could be DXing with your very own Beverage antenna;
you certainly don’t need to own several acres of land.
In fact recently I erected a Beverage on a piece of waste land not far from my home. To find the location I did a
little browsing of local maps and then surveyed the sites by driving around the neighbourhood. I guess I was
lucky but I only had to visit four locations before I found an almost ideal site. Furthermore the site was derelict
and deserted so I put up a 330metre run of wire through the bushes. The receiver end terminates on a fence post
with some large nails to which I simply connect the receiver with crocodile clips, whilst at the other end I installed
the terminating resistor between the antenna wire and a copper earth stake driven deep into soft earth in a
ditch. In my case a good ATU is essential since I have a local MW transmitter on 1170kHz.
See also Paul Ormandy’s article Easy-Up Beverages.
Phasers
Phasing units, which allow the combination of signals from two antennas to
reduce or remove interference or noise, and to increase gain by adding signals
together, have always piqued my interest. Any device that will improve
the chances of hearing something new or rare is worth consideration, and a
couple of home-brew phasers (inspired by Mark Connelly's work) have
graced my shack over the years. Their performance was OK, though they
were generally tricky to tune and almost always unstable. Enter the electronic
version!
A couple of manufacturers are producing moderately priced units which have caught the attention of the DX
fraternity, namely MFJ Enterprises' MFJ-1025 and MFJ-1026, and Timewave's ANC-4.
They even give you an advantage with shorter antennas (from home I get good results from two 30 metre slopers
running 180 degrees out of phase). Both are available from the suppliers or Universal Radio.
With these units deep, stable nulls on a variety of signals make it possible to totally remove interfering stations
and open up a whole new world of possibilities. I use mine as much for increasing signal gain by phasing two
antennas together as I do dealing with QRM.
These commercially available units were intended primarily to cancel noise, yet have shown their real value in
reducing interference and enhancing signals using signal inversion and phasing. For user reviews, check these
articles:
● MFJ-1025
● MFJ-1026
Long Distance MW Listening: Propagation
by Steve Whitt
To make the most of MW listening you’ll need to have a basic understanding of how a
radio signal arrives at the receiver from a distant transmitter. A great deal of scientific
work has been undertaken investigating the propagation of radio waves, but fortunately
for the MW DXer things can be greatly simplified by considering just two
dominant propagation modes. MW propagation takes place by means of two different
and distinct mechanisms, namely groundwaves and skywaves.
Groundwaves
The groundwave, as its name implies, travels along a path close to the earth’s surface. How far such a signal
goes is dependent on a number of factors, principally transmitter power, operating frequency and earth conductivity.
Groundwave propagation is heavily dependent on the frequency, with low frequency signals travelling
greater distances. In fact, everything else being equal, groundwave signals from a station on 550kHz will travel
twice as far over land as those radiated by a station on 1500kHz. The earth conductivity is also a very significant
factor and it is found that the better the conductivity the further the signal travels. Sandy or rocky soil is the
worst terrain whilst sea water is best and in regions such as the Caribbean, where the sea is particularly saline
(and therefore more conductive), groundwave reception of stations up to 1000 miles distant is possible. In contrast,
a similar signal travelling over rocky terrain would carry only about one quarter of this distance. Groundwave
propagation is very stable resulting in consistent reception conditions. It is, however, usually only associated
with daytime (although it is equally present at night) since at night long-distance reception is predominantly via
the sky wave. Because of its stable daytime behaviour, radio stations usually optimise their aerials to radiate as
much of their signal as possible via the groundwave in order to improve coverage.
Skywaves
There exists a rarefied region of the earth’s upper atmosphere that absorbs the intense solar ultra-violet radiation
thereby protecting life on the earth’s surface. This radiation results in a region of ionised gases known as the
ionosphere, which, depending on diurnal and seasonal variations, consists of several fairly distinct layers of high
ionisation (Fig. 1). These layers have a profound effect upon radio waves approaching them from transmitters
on the ground below. Under certain conditions refraction of waves occurs, resulting in the ‘reflection’ of signals
back down to the earth, whilst at other times signals can be totally absorbed by the ionised gases. During daylight
hours solar radiation penetrates the atmosphere far enough to form the lowest layer of ionisation, the ‘D’
layer roughly 60km above ground. The ‘D’ layer so completely absorbs signals on MW frequencies that any radio
signals radiated by a station other than those parallel to the earth’s surface are completely lost.
With the approach of sunset, however, the ‘D’ layer absorption decreases rapidly and within a few hours MW
signals are being reflected back to the ground from higher regions of the ionosphere; depending on circumstances
reflection occurs in the E region (about 100-120km up) or in the ‘F’ layer (225-300km).
Figure 3 & 4 illustrate this process and shows the skip distance which for MW frequencies turns out to be about
100 to 500 miles. Longer distance reception is possible when multiple reflections occur between the ionosphere
and the earth’s surface. This occurs with least signal loss over ocean paths hence the possibility of good reception
of Brazilian stations here in Europe.
Figure 3: The Ionosphere and MW Propagation
Whilst the skywave enables good MW DX at night, it also leads to a deterioration in reception quality for the
normal broadcast listener. Firstly there is a region about 50-100 miles from a transmitter (Figure 4) where the
groundwave and the skywave signals are received with roughly equal (but varying) strength, leading to severe
distortion. Additionally all skywave signals are affected by fading as a result of the continually changing characteristics
of the ionosphere.
Figure 4: Skywave / Groundwave Interference
Anomalous Propagation
A previous section examined some of the basic factors governing MW (and LW) reception, in particular the effect
of the ionosphere and the influence of solar radiation and ground effects. We deliberately restricted the subject
to effects of a regular or predictable nature; the sort of parameters that a planner takes into account when
planning the reception area for a new station.
There are however many other occurrences that have a bearing on radio propagation at these frequencies; each
with a greater or lesser degree of unpredictability. Although it is nice to be able to predict when good DX will be
heard on the MW band, it is the possibility of the unusual occurring that adds a touch of excitement to the DXing
hobby. One of the overriding features of MW propagation is the effect of solar radiation on the upper regions of
the earth’s atmosphere. Predictable effects of solar radiation can be seen as diurnal and seasonal variations in
MW propagation as well as in the influence of the 11 year sunspot cycle.
Less predictable events include ionospheric storms, shortwave fadeouts and polar disturbances. These somewhat
esoteric events result from disturbances occurring on the sun, which is, under such circumstances, referred to as
‘active’. The mechanisms behind such events are both complex and in some instances not yet fully understood
but fortunately the average DXer is likely to be more interested in knowing the effect rather than the cause. In
addition it could be very helpful to know when such an event was in progress and to be able to gauge its possible
effect on DXing. A number of institutes around the world keep a watch on the sun and the ionosphere but the
DXer is faced with the problem of obtaining (and interpreting) this extensive scientific information.
Fortunately the US National Bureau of Standards provides this information via the standard time and frequency
broadcasts of station WWV. This station, which is most likely to be heard on 5.0, 10.0 or 15.0 MHz,
transmits regularly up-dated radio propagation data during the 18th minute past every hour. It is also possible to
obtain the same message by phoning a pre-recorded announcement: the US phone number is +1-303-497-3235.
One piece of information transmitted via WWV that is particularly interesting, is the Fredericksburg ‘A’ Index
(more properly called the Fredericksburg Index of Geomagnetic Activity in the Earth’s Magnetic Field) which
can be used as a simple guide to propagation on the MW band. It is a simple matter to construct a daily graph of
the A indices from which basic propagation predictions can be made. High values (above 20) indicate that MW
signals in high latitude paths are likely to be absorbed, leaving signals propagating via paths closer to the equator
to dominate. Low values over a period of time indicate a likelihood of improved reception via higher latitude
paths. Long periods of very low (6 or less) values are needed to raise the possibility of good high latitude reception
throughout the entire MW band.
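The rule of thumb above is simple enough to sketch in code. Here is a minimal Python illustration; the thresholds of 20 and 6 follow the text, while the week of sample A indices is invented for the demonstration:

```python
def mw_outlook(a_index):
    """Rough MW propagation outlook from a daily Fredericksburg A index.

    Thresholds follow the rules of thumb in the text: values above 20
    suggest absorption on high-latitude paths, while values of 6 or
    less (sustained over days) raise the chance of good high-latitude
    reception.
    """
    if a_index > 20:
        return "high-latitude paths absorbed; favour trans-equatorial DX"
    elif a_index <= 6:
        return "very quiet; watch for high-latitude openings"
    else:
        return "moderate; mixed conditions likely"

# A week of (hypothetical) daily A indices drawn as a crude text graph,
# much like the daily graph the text suggests keeping.
week = [4, 5, 6, 12, 27, 18, 5]
for day, a in enumerate(week, start=1):
    print(f"Day {day}: {'*' * a:<30} A={a:2d}  {mw_outlook(a)}")
```

Plotting a run of days like this makes the "long period of very low values" condition easy to spot at a glance.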
Medium Wave Interference
by Steve Whitt
Interference is a topic that affects not just the MW DXer but just about every radio
listener. In fact it is usually the level of interference rather than any other factor that
limits the reception of weak and distant stations on the MW band.
Interference is usually taken to mean any unwanted signal (or noise) that, by adding to the
desired signal, degrades reception of the wanted information. It is usually the case that the
interference most often encountered on MW is man-made in origin. Whereas there is very little one can do
about naturally occurring interference, it is possible, theoretically at least, to eliminate man-made sources of
interference. The first step to suppressing interference is in fact recognising it and identifying its origin. Having
identified a source of interference it is an unfortunate fact of life that it may prove impossible to do anything
about it. The following are the most common forms of man-made interference to affect MW reception:
Co-channel interference:
Since the MW band is operated in a channelised manner and because there is only 1080kHz of available MW
spectrum, there are inevitably several stations transmitting simultaneously on each channel. Normally the powers
and locations of stations allocated to a particular frequency are chosen to ensure that a low level of co-channel
interference occurs within the target area of each transmitter.
However, listeners outside the target area will experience this form of interference which generally gets worse
at night as interfering signals propagate further via the ‘sky wave’. In fact it is the acceptable limit of co-channel
interference (also known as the protection ratio) that defines the target area boundary for a particular
transmitter.
Modulation Splatter:
Splatter or adjacent channel interference can be recognised as unintelligible modulation or programmes heard
mixed with the desired programme with the interfering signal originating from a station transmitting on a channel
adjacent to that of the desired station. Given that stations are adhering to their local channel bandplan, there
are two main causes of modulation splash. Firstly, splash can be the result of a station not limiting the bandwidth
of its transmitted audio which results in components of the transmitted sidebands interfering with signals
on adjacent channels; this form of splash can also result from a poorly maintained or over-modulated transmitter.
Secondly a form of adjacent channel interference can be generated within a receiver with insufficient selectivity
when receiving very strong signals. To test if adjacent channel interference is in fact receiver generated
an aerial attenuator should be used to reduce the strength of the incoming signal; if the relative degree of interference
reduces, a receiver effect should be suspected but if no change is observed then it is possible that the
interference is actually being transmitted.
Heterodyne Interference:
A heterodyne is an audible beat note or whistle that is generated in a receiver when two signals on slightly different
frequencies are received simultaneously. In a perfect world where all MW stations operated exactly on their
allocated channels, heterodyne interference would not be a problem. However since different channel plans are
used in different parts of the world it is possible to hear heterodynes on the MW band. Occasionally, within one
radio planning region it is possible to find off channel stations either because the station has failed to conform
with planning guidelines or a technical problem has arisen in the transmitter. In 1978 the frequencies of the
European, Asian and African channels were aligned to be exact multiples of 9kHz, and every station was expected
to retune their transmitter to the new channels. However quite a few African stations did not make the
move and even today a number of off channel stations are audible. Their presence can cause heterodyne interference
to other stations but the keen DXer can use the presence of a heterodyne tone as a good guide to a weak distant
station. For example, 1395kHz is an official channel used in Europe and Africa, but Radio Lomé in Togo
never moved from the old frequency of 1394kHz. So if the listener is hunting Lomé, the presence of a strong heterodyne
interfering with stations on 1395kHz indicates that the path to West Africa is open.
Unwanted heterodynes are annoying but fortunately they are easily removed with an audio notch filter. DXers
often purchase such an accessory [see Section 6.2] since it improves reception and reduces listener fatigue.
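The beat note is simply the difference between the two carrier frequencies, which makes the Lomé trick easy to reason about. A small Python sketch (the 100 Hz drift case is an invented example, not from the text):

```python
def heterodyne_hz(f1_khz, f2_khz):
    """Audible beat note, in Hz, produced by two carriers received together."""
    return abs(f1_khz - f2_khz) * 1000.0

# Radio Lome on its old 1394 kHz channel against stations on 1395 kHz:
print(heterodyne_hz(1394.0, 1395.0))   # 1000.0 -> a clear 1 kHz whistle

# A transmitter drifted just 100 Hz off channel produces a low growl instead:
print(heterodyne_hz(1395.0, 1395.1))   # roughly 100 Hz
```

A 1 kHz tone sits right in the middle of the audio passband, which is why such a heterodyne is so conspicuous even when the off-channel station itself is too weak to hear.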
Electrical Interference:
This title covers a multitude of interference sources which will tend to affect listeners living in built up areas,
particularly near industrial zones. Man-made electrical interference comes in all shapes and sizes but can be classified
as intermittent or long term. It can be difficult to track down intermittent sources of interference but fortunately
their nuisance value is not long lasting.
Common examples are engine interference from the poorly suppressed spark plugs of passing
cars, and arcing of electrical contacts in thermostats and switches. If the source is identified it
is generally not too difficult to suppress this sort of interference. Other examples are caused
by faulty street lights and faulty insulators on overhead power lines and in both these cases
the solution is to inform the relevant authority. The longer lasting variety is commonly due to
harmonic radiation from television (TV) and visual display unit (VDU) timebases. TV interference
is audible as a rough buzzing located at precise intervals of 15.625kHz (in Europe) or 15.750kHz (in
N. America) across the MW band. VDU interference can appear with a frequency separation
in the range of 14-18kHz. Unfortunately this form of interference often restricts any serious DXing to outside
TV hours. Generally as more and more electrical equipment enters the home and office the greater the level of
interference and the less chance there is of suppressing it. Among the more recent sources of (very potent) interference
are computers and electronic telephones and office exchanges. Regrettably there is usually little a DXer
can do to cure this affliction unless they own the offending piece of equipment.
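Because the timebase buzz recurs at exact multiples of the line rate, the affected spots on the dial can be predicted in advance. A short Python sketch; the band edges of 531–1602 kHz are an assumption matching the European 9kHz plan:

```python
LINE_RATE_EU_KHZ = 15.625   # European TV line timebase (15.750 in N. America)

def timebase_spurs(line_khz, lo=531.0, hi=1602.0):
    """List the harmonics of a TV/VDU line timebase that fall inside the MW band."""
    n = int(lo // line_khz) + 1          # first harmonic above the band edge
    spurs = []
    while n * line_khz <= hi:
        spurs.append(round(n * line_khz, 3))
        n += 1
    return spurs

spurs = timebase_spurs(LINE_RATE_EU_KHZ)
print(len(spurs), "buzzing spots between 531 and 1602 kHz")
print(spurs[:4])   # 531.25, 546.875, 562.5, 578.125 ...
```

Knowing where the spurs sit lets a DXer decide whether a rough buzz on a wanted channel is likely to be a neighbour's television rather than something at the transmitter.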
Jamming:
This is a deliberate attempt to interfere with reception and is usually a transmission of man-made noise intended
to blanket another programme to make it unintelligible. The amount of jamming present tends to reflect the degree
of political unrest in the world and today there is relatively little to bother the MW listener. The extensive
jamming associated with Eastern Europe and the former USSR is now consigned to history, but jammers are still
active in the Middle East and Korea.
Powerline Communications
A new threat has emerged that could affect all radio reception between 9kHz and 30MHz. Tests began in Germany
in early 2001 of a system called Powerline Communications. The system uses data communications to
control various devices commonly used in the home, with the signals being conducted through existing power
cables. But the use of radio frequencies with a proposed range of up to 300 metres means that millions of people
in urban areas are threatened by radio pollution from these devices. Imagine trying to DX in an apartment
block where dozens of different devices are operating simultaneously. There’s still hope that the interference
levels, even to reception of strong domestic signals, will be so high that the whole concept will have to be rethought.
Otherwise for many urban dwellers of the future, a trip to a remote spot may be the only chance to enjoy
the sort of reception that has made mediumwave DXing such a fascinating hobby for so long.
Other Sources of Noise
Even if one lived in a world without any man-made interference, one would still notice a whole range of noises
that limit reception of very weak signals. Of these the least significant (for the MW listener) is the thermal noise
and other electrical noise components actually generated within the components of the receiver. This is because
the level of other naturally occurring noise sources picked up by the receiver’s aerial is many
times greater. Common examples of these types of interference are atmospheric static, which
manifests itself as a continuous crackling noise and lightning discharges which are heard as a
loud crashing noise. The distinguishing feature of these signals is their broadband nature;
namely the noise will be heard at all frequencies in the MW band although the intensity will
decrease at higher frequencies. It is interesting to note that the radio wave emitted by a lightning
flash behaves as any other radio wave and therefore can propagate over considerable
distance; in fact one of the great sources of interference worldwide is the noise generated by
the large numbers of daytime tropical thunderstorms. It is for this reason that many broadcasters in the tropics
choose frequencies between 3 and 6 MHz for local broadcasting where the effect of thunderstorms is much reduced.
Reception Reports to Medium Wave Stations
by Steve Whitt
These three letters, QSL, are probably a bit of a mystery to the newcomer, so
what do they mean? Let us suppose you’ve just heard Radio Fiji on your pocket
transistor radio; how are you going to convince everyone that you weren’t just
dreaming? Wouldn’t it be good to have something from the radio station confirming
that you really did hear them? Well this is where QSL cards come into
the picture; a QSL card is usually a picture postcard (although it can also take
the form of a letter, a certificate, or a folder) sent to a radio listener by a radio
station confirming that reception actually took place.
In order to get a QSL card from a station there are several things you need to do, but firstly remember that you
have to hear the station and then convince station staff that you did hear their signal. Normally one is obtained
by sending a station a reception report giving details of how well their signal was received and of the programme
material heard, as proof of reception. Naturally you need to say when you were listening (date and
times should be in the station’s own local time).
Historically, the QSL originated in the days when stations relied entirely on reports from listeners to determine
their coverage area. In fact the letters ‘Q-S-L’ are based upon a radio operator’s shorthand code (Q code) system
that evolved during the early days of radio. Nowadays, however, many stations use reports from professional
monitoring stations and have more accurate coverage predictions available, and consequently the QSL survives
largely as a service, from the station’s point of view. Additionally there is a significant difference in QSL policy
between the international shortwave broadcaster, which issues QSL cards to maintain contact with and to
gauge the size of its audience, and the local medium wave station being heard outside its usual coverage area.
At best, the latter will treat a far off reception report with curiosity and will send out a QSL as a public relations
exercise. At worst, to a station with few staff and a limited budget, reception reports from DXers can be a
downright waste of time. It is therefore vital that MW DXers follow these top five tips when sending out reception
reports to stations.
Make Your Reports Really Work
If you are one of the many MW DXers who not only likes to hear a station but wants to collect a verification or
QSL to ‘prove’ that reception actually took place, then you’ll appreciate that hearing the station in the first place
is only half the problem. I’m sure that you’ve wondered why not every station replies to your letters or reception
reports. Perhaps only around 50% of MW stations reply; what can be done to increase this ratio? Many MW
broadcasters (in contrast to their SW counterparts) are not interested in audiences in far flung places since their
double glazing advertiser is unlikely to extend his sales overseas!
Firstly imagine yourself in the position of the station engineer and then imagine you received a letter from a
faraway listener asking for a QSL card. Could you be bothered to reply if you’ve already received a hundred
similar items in your in-tray that week? I know of station engineers that have commented ‘… some of the reports
we get are terrible..’, ‘… we only now reply to reports containing IRCs as the postage was getting rather expensive..’,
and ‘… I always reply to DX reports but never know if my letters are received..’
What a listener needs to do is to convince the station that reception really took place and that the report is not
just being made up. In addition you need to make the station’s task in replying as simple as possible and it always
helps to make your reception report stand out from the crowd so that perhaps it won’t end up in the ’round
file’. Try these steps to good reception reports:
Convince the station
Include full details of commercials and public service announcements that you heard since virtually all stations
record these details in their logs. Station slogans won’t on their own convince anyone since they are often well-known
and widely reported; likewise, lists of records heard are not always very useful since details aren’t always
kept in station logs. Worst of all is something like ‘man talking..’ or ‘music’ which won’t help convince anyone!
The golden rule is the more detail the better.
Make their job easier
Use the station’s local time in reception reports so that they don’t have any tricky time zone conversions to do.
The only exception is if the station is an international broadcaster that has been announcing a different time zone
(e.g. GMT or UTC) on air. It is often wise to note down the actual time announced in time checks rather than
what your watch says since many stations have somewhat inaccurate studio clocks! Send return postage with
your letter. Best of all include mint stamps from the station’s country but since this is easier said than done you
could send International Reply Coupons which are obtainable from the Post Office. Unfortunately some countries
do not accept IRCs for exchange into local postage stamps. For the USA and many other countries you can
instead send a US$1 bill since hard currency is often appreciated. Enclose a prepared sticky label with your return
address already on it. Write in the station’s natural language unless it is a big international broadcaster with
various language departments. The natural language may not be the main language of the country they are in
(e.g. Spanish speaking stations located in the USA).
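Converting your logged UTC time into the station's local time is straightforward with modern tooling. A minimal Python sketch; Radio Fiji is borrowed from the QSL example earlier in this section, and the date and time are invented:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # in the standard library from Python 3.9

# A logging made at 09:30 UTC, converted to the station's local time
# (Fiji) before it goes into the reception report.
logged_utc = datetime(2024, 6, 1, 9, 30, tzinfo=timezone.utc)
station_local = logged_utc.astimezone(ZoneInfo("Pacific/Fiji"))
print(station_local.strftime("%Y-%m-%d %H:%M station local time"))
```

Doing the conversion yourself, rather than leaving it to station staff, is exactly the kind of small courtesy the text recommends.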
Help the station
Local MW stations don’t need listeners thousands of miles away; certainly they don’t attract more advertising
because of this. So if you can help the station with constructive comment on programmes (what you liked and
disliked) and on technical quality (eg modulation, audio quality or frequency stability) or by identifying interference,
so much the better.
Make your letter stand out
BE POLITE and request a QSL card – never demand one. Introduce yourself and your location; maybe include a
local picture postcard or some stickers from your local radio stations. Use commemorative or unusual stamps on
the envelope; there may be a philatelist at the station. Unfortunately in some parts of the world this might also
make your letter attractive to thieving hands in the postal system. Give a realistic and detailed description of reception
conditions in words that are not too technical (remember it’s not always the engineer reading your letter).
Never use SINPO style codes on their own.
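Pulling the tips above together, a reception report might be assembled along these lines. All the details below (station, frequency, times, programme items) are hypothetical placeholders, not a real logging:

```python
# A skeleton reception report following the tips in the text:
# local time, detailed programme notes, polite request, return postage.
report = """\
To: Chief Engineer, Radio Fiji

Date of reception : 1 June 2024 (station local time)
Time              : 21:30 - 22:05 local
Frequency         : 558 kHz (hypothetical example)
Receiver/aerial   : portable receiver with internal ferrite rod

Programme details heard:
  21:32  advertisement for a local store (please check your log)
  21:45  station identification and news headlines
  21:58  public service announcement

Reception was fair, with slow fading and some splatter from an
adjacent channel. I would be grateful for a QSL card if my details
match your records. Return postage is enclosed. Thank you!
"""
print(report)
```

Note how the report leads with commercials and announcements (the items stations log) rather than record titles, and states times in the station's own local time.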
If you follow some or all of these tips you should not only increase your chances of getting a reply from a station
but you will help contribute to good relations between DXers and broadcasters. Finally, if you receive a reply
from a station, it is an often overlooked basic courtesy to thank them. It is simple and quick (and not too expensive)
to send a postcard direct to whoever wrote from the station letting them know that their letter arrived
safely and thanking them for their trouble. Research during a Radio Netherlands Media Network edition revealed
that very few bother to say ‘thanks’, and yet it makes all the difference.
The Digital Dimension
The Future of Medium Wave DXing in a Digital World by Paul Ormandy
Digital broadcasting on mediumwave is almost here! Widespread implementation will begin
in 2003 and while we can speculate about the ‘bells and whistles’ that digital will bring and
whether DXing as we know it will survive, how will we fare in the short term, DXing analogue
signals in the midst of the digital revolution?
For years now the news for New Zealand MW DXers hasn’t been good, which is a reflection of the scene globally
though with its own ‘Kiwi’ twist: more and more stations entering the market (which seemed saturated as
it was), 24-hour broadcasting by all and sundry, and national networks setting up stations in every city. These
factors, along with the advent of ‘Access Radio’ stations offering an outlet to minority interest groups, have resulted
in a plethora of stations inundating the MW band. And that’s without taking the high noise levels besieging
urban DXers into account!
When the FM band opened in New Zealand two decades ago, the hoped-for mass exodus of MW stations
to VHF, with the resultant gaps in the broadcast band, never materialised. Existing AMers simply simulcast or
ran alternate programming on each band (despite the government’s intention that any station wanting to move
to FM had to relinquish its AM channel), doing little to improve the lot of the MW DXer.
The only good news for New Zealand MW DXers in recent times was that our government would not allocate
channels in the extended band (1602 – 1701kHz) thus leaving it free for DXing foreign stations (and it has been
a great source of renewed interest in MW DXing too).
So, what will be the impact of digital broadcasting when endeavouring to DX analogue
signals in a dual-mode environment, and will our hobby survive in an all-digital world? It
is possible that initially digital broadcasting may allow for better reception of analogue
signals. The faintest suggestion that things may improve for the mediumwave DXer will
arouse my interest.
The Implications for DXers
Let’s look at the factors that make life more difficult for us diehard MW DXers and speculate as to what might
happen…
Interference
Digital signals will have a tighter bandwidth, so their signals will not cause as much adjacent channel interference
(“splash”) as existing AM broadcasts. As an example, if an analogue signal wipes out reception on channels
20kHz either side of its nominal frequency, a digital signal may only wipe out channels 5kHz either side.
What’s more, the level of splash from an AM signal varies with modulation (and there are some forms of audio
which cause more splash than others, notably applause and bagpipes!). Digital signals will not vary in modulation
so levels of splash will be constant (not sure whether that’s good or bad!).
While a digital receiver will be clever enough to discriminate between co-channel analogue and digital modes
(and even co-channel and adjacent channel digital signals thus rendering interference a thing of the past), reception
of the two mixed-mode signals on an analogue-only receiver will produce a new form of QRM to deal
with. Top-end “dual mode” receivers may allow reception of both at good levels by toggling between the two
modes.
Congestion
The narrower band space occupied by digital signals will also increase the number of channels available to
broadcasters without extending the existing band. In countries where channels are clogged, like in the USA
where there are 4,000 plus stations sharing the available 117 frequencies, restrictions (limited hours and low-power
operation) imposed on a non-interference basis could be relaxed, which would free up more channels to
be occupied.
The current spacing between channels in cities is around 30kHz. So over the band (530 – 1700kHz) there are 39
channels at 30kHz spacings. If 20kHz is a suitable spacing for digital signals, then 58 channels would become
available. So another 19 stations could be accommodated… and the implications increase if 10kHz is a suitable
spacing. Not only would this mean more local and semi-local stations, it would also have the effect of narrowing
the potential gap for DXing stations, which may not be good news for listeners living in those environments.
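The arithmetic in that paragraph is easy to reproduce. A quick Python check, using the 530–1700 kHz band edges given in the text:

```python
def usable_channels(spacing_khz, lo_khz=530, hi_khz=1700):
    """How many channel slots fit across the band at a given spacing."""
    return (hi_khz - lo_khz) // spacing_khz

for spacing in (30, 20, 10):
    print(f"{spacing:2d} kHz spacing: {usable_channels(spacing)} channels")
# 30 kHz gives 39 channels, 20 kHz gives 58, and 10 kHz gives 117 --
# the last figure matching the US channel count mentioned above.
```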
Then what if it was possible to transmit several program streams at once over the same transmitter? That could
lead to de-congestion, as several stations owned by one company servicing the same area could all broadcast
over just one sender, saving on transmitter running and replacement costs and capital outlay. We could
also have companies in the business of owning transmitters leasing available channel-space to broadcasters,
even the streaming audio ‘stations’ currently proliferating on the Internet!
A Move from FM back to MW?
For the last 2 decades, the attraction of the superior audio quality of FM has often seen
stations sacrificing the coverage of AM to obtain the better dynamic range and frequency
response that FM offers. Many new stations have gone straight to FM and given
MW little consideration. Also, it’s generally cheaper to establish an FM station from an
aerial point-of-view… no large tract of land is required to place a tower and radials
upon as an FM aerial needs to be little bigger than an average VHF TV aerial.
And the programming has been split too, with most music stations on FM and talk stations
on MW.
If the audio quality of digital is as good as FM, then stations may opt to use MW to increase coverage while retaining
audio quality. This could lead to MW once again becoming a multi-format mode with high-quality music
channels alongside the ‘talkers’.
At some stage, once receiver penetration into homes reaches a crucial point and digital AM broadcasters begin
to figure in listener polls, the increased audio quality and coverage on MW will be used as a tool to entice advertisers
to use MW instead of FM. Competitors eager to retain their advertising clientele will create a certain synergy,
leading to an explosion of digital MW stations as existing AM stations convert and FM broadcasters
switch to the new mode.
All of this could lead to congestion on a scale never imagined, and until we know how “DXable” digital signals
will be, we won’t know the shape of MW DXing in the future.
Being totally pessimistic (which is not my nature), the death knell of MW DX may be ringing as this new technology
threatens. Being optimistic, there may be increased opportunities before digital
gains real momentum, given the narrower bandwidth, reduced splash and slow conversion to the new mode by
many broadcasters, particularly in Third World nations.
The Wetter The Better?
by Andy Sennitt
On the Jan 28th 1999 edition of the Media Network radio show, Steve Whitt of the Medium
Wave Circle discussed the effect of the weather on Medium Wave reception. Steve says that
weather conditions at the receiving location could be playing more of a part in reception conditions
than we have previously thought. Although the ionosphere is much higher up in the atmosphere
than, for example, thunderstorms, local rainfall can affect the ground conductivity.
That item was of particular interest to me, as over the years I have experienced many examples of this phenomenon.
To take just one: in the mid-1960s, when I lived in Eastern England, I used to be able to listen to the
offshore station Radio Scotland during the day when there was a lot of wet weather about. The station was
about 300 miles away using a 10kW transmitter. On hot sunny days in mid summer, the signal was very
weak, sometimes inaudible.
David Thorpe in the UK writes: “I can confirm your suspicions. Wet weather does indeed have an effect on
ground conductivity. In the past measurements have been made over a 50km path from a London mf tx site,
and the field strength of the received signal, measured at the same location, has been more than 2 dBµV
greater during wet weather conditions”.
If you’re interested in studying this phenomenon, you need a reliable station list. Some are reviewed elsewhere
on this Web site. Continue to the Clubs and Pubs section for details.
Medium Wave Clubs and Publications
Specialist Clubs for MW/LW DXers
North America
National Radio Club
P.O. Box 164, Mannsville, NY 13661-0164, USA
E-mail: gnbc@wcoil.com
International Radio Club of America
P.O. Box 1831, Perris, CA 92572-1831, USA
Europe
Medium Wave Circle
59 Moat Lane, Luton LU3 1UU, United Kingdom
E-mail: contact@mwcircle.org
Arctic Radio Club
Box 5050, 350 05 Växjö, Sweden
Umeå Kortvågsklubb
Box 117, SE-901 03 Umeå, Sweden
Oceania
Australian Radio DX Club
15 Olive Crescent, Peakhurst, NSW 2210, Australia
E-mail: dxer@fl.net.au
New Zealand Radio DX League
P.O. Box 3011, Auckland, New Zealand
E-mail: paul@radiodx.com
Other Clubs
Most other radio clubs that concentrate on shortwave radio also include sections on Medium and Longwave
listening. However not being specialists their coverage of this topic is generally not as comprehensive.
A good list of clubs can be found in the World Radio TV Handbook. The
following are only a small selection of established clubs with regular
publications containing good MW/LW columns:
Europe
British DX Club
126 Bargery Rd, Catford, London SE6 2LR, United Kingdom
E-mail: secretary@bdxc.org.uk
World DX Club
17 Motspur Drive, Northampton NN2 6LY, United Kingdom
E-mail: mikewb@dircon.co.uk
Play-DX
c/o Dario Monferini, Via Davanzati 8, I-20158 Milano, Italy
E-mail: playdx@hotmail.com
North America
Ontario DX Association
P.O. Box 161, Station ‘A’, Willowdale, ON M2N 5S8, Canada
E-mail: odxa@compuserve.com
South America
Grupo Radioescucha Platense
Casilla 465, 1900 La Plata, Argentina
E-mail: jaloy@netverk.com.ar
The Radio News
P.O. Box 65657, Caracas 1066-A, Venezuela
Bibliography
Worldwide Frequency Lists
World Radio TV Handbook (WRTH)
WRTH Publications Ltd., PO Box 7373, Milton Keynes MK12 5ZL, United Kingdom. Fax: +44 1908
321030. Web (online ordering): http://www.wrth.com E-mail: editor@wrth.demon.co.uk
Note: Useful for Medium Wave but limited USA coverage, and not all sections are updated adequately. 600+
pages. Annual.
Regional Frequency Lists
AM Radio Log
National Radio Club Publications Center, P.O. Box 164, Dept. M, Mannsville,
NY 13661-0164, USA. Web: nrcdxas.org/catalog/amlog/
Most comprehensive list of all US & Canada MW stations
Euro/African Medium Wave Guide
Herman Boel, Roklijf 10, B-9300 Aalst, Flanders (Belgium).
Tel: +32 53 711 244. Web: http://www.emwg.info/ E-mail: contact@emwg.info
PDF version: free download. Updated frequently
Ninety Nine Nights on Medium Wave
Wilhelm Herbst Verlag, Roggendorfstraße 4 D-51061 Köln, Germany. Tel: +49 221 9 66 16 42. Fax: +49 221
66 84 31. E-mail: whv-vertrieb@gmx.de. Web: http://www.wilhelm-herbst-verlag.de/index.htm
Other Specialist Publications
A DXers Technical Guide
IRCA Bookstore, 9705 Mary NW, Seattle WA 98117-2334, USA
Web: http://www.ircaonline.org
This 156 page book answers questions on receivers and antennas (the theory of their operation, and how to improve
their performance), how audio filters and loop antennas can improve DX (and hints on their construction),
how to build a Beverage antenna and phasing unit, and much more.
Longwave Radio
Northern Star International Broadcasters AS has won the concession to operate a longwave service
from Norway on 216 kHz. It plans a Christian-oriented service in English, aimed at adults
aged 35-75 years. Its comprehensive Web site includes information for potential investors, coverage
predictions, and much more.
The Isle of Man International Broadcasting Company Ltd has been awarded a 10 year licence
to operate a longwave station from the Isle of Man on 279kHz. The service, provisionally
called MusicMann 279, will be music led, and will target an audience across Britain and
Ireland. It is expected to launch around Easter of 2006. IMIB plans to install the transmission
antenna on an offshore platform in Manx waters some 9 km northeast of Ramsey, Isle
of Man, near the spot where Radio Caroline was anchored in the 1960s. The station recently
opened a separate corporate website.
The official UK Longwave Atlantic 252 tribute site features many audio and picture
memories, including anecdotes direct from the DJs and staff who made the station possible
between 1989 and 2001.
“The Distant Listener” is a chapter from the Internet book Four Corners, by Dan K. Phillips, describing the
Cape Cod site of Marconi’s first wireless station in the U.S. It sketches the station’s history, from site selection
in 1901, through the first regular transatlantic service in 1903, reception of the Titanic’s distress signals in
1912, to its dismantlement in 1920.
The Longwave Club of America is the world’s only listening club devoted entirely to signals below
500kHz. It recently opened an impressive new Website containing a wealth of information
about listening on this part of the dial.
The Longwave Home Page, has been put together by DXer Robert Kramer. As well as general information,
you will find the latest loggings and DX News.
For listeners in New Zealand, there’s a good article by DXer Paul Ormandy called How Low Can
You Go? on the Website of the New Zealand Radio DX League. You might be surprised at how much
they can hear in that part of the world!
The World Below 535kHz contains links to interesting WWW sites related to longwave and VLF radio. However
a note on the page warns that “Due to lack of time this page is no longer maintained actively and it may
contain obsolete links.”
Non-Directional Beacons
As well as broadcasting, the Longwave band is used for navigational beacons. Rock’s Longwave Beacon Log
is a list of stations heard by Ray Rocker in Mississippi. Canadian Nondirectional Radiobeacons was last updated
in November 2002. If you’re trying to find information about specific airports, AirNav provides an extensive
searchable database containing details of non-directional beacons.
2018-10-03 at 10:15 AM#863097In reply to: Excavating Sand from Duchess Gargamella's Vagina
Failure Trends in a Large Disk Drive Population
Eduardo Pinheiro, Wolf-Dietrich Weber and Luiz André Barroso
Google Inc.
1600 Amphitheatre Pkwy
Mountain View, CA 94043
{edpin,wolf,luiz}@google.com
Abstract
It is estimated that over 90% of all new information produced
in the world is being stored on magnetic media, most of it on
hard disk drives. Despite their importance, there is relatively
little published work on the failure patterns of disk drives, and
the key factors that affect their lifetime. Most available data
are either based on extrapolation from accelerated aging experiments
or from relatively modest sized field studies. Moreover,
larger population studies rarely have the infrastructure in place
to collect health signals from components in operation, which
is critical information for detailed failure analysis.
We present data collected from detailed observations of a
large disk drive population in a production Internet services deployment.
The population observed is many times larger than
that of previous studies. In addition to presenting failure statistics,
we analyze the correlation between failures and several
parameters generally believed to impact longevity.
Our analysis identifies several parameters from the drive’s
self monitoring facility (SMART) that correlate highly with
failures. Despite this high correlation, we conclude that models
based on SMART parameters alone are unlikely to be useful
for predicting individual drive failures. Surprisingly, we found
that temperature and activity levels were much less correlated
with drive failures than previously reported.
1 Introduction
The tremendous advances in low-cost, high-capacity
magnetic disk drives have been among the key factors
helping establish a modern society that is deeply reliant
on information technology. High-volume, consumer-grade
disk drives have become such a successful product
that their deployments range from home computers
and appliances to large-scale server farms. In 2002, for
example, it was estimated that over 90% of all new information
produced was stored on magnetic media, most
of it being hard disk drives [12]. It is therefore critical
to improve our understanding of how robust these components
are and what main factors are associated with
failures. Such understanding can be particularly useful
for guiding the design of storage systems as well as devising
deployment and maintenance strategies.
Despite the importance of the subject, there are very
few published studies on failure characteristics of disk
drives. Most of the available information comes from
the disk manufacturers themselves [2]. Their data are
typically based on extrapolation from accelerated life
test data of small populations or from returned unit
databases. Accelerated life tests, although useful in providing
insight into how some environmental factors can
affect disk drive lifetime, have been known to be poor
predictors of actual failure rates as seen by customers
in the field [7]. Statistics from returned units are typically
based on much larger populations, but since there
is little or no visibility into the deployment characteristics,
the analysis lacks valuable insight into what actually
happened to the drive during operation. In addition,
since units are typically returned during the warranty period
(often three years or less), manufacturers’ databases
may not be as helpful for the study of long-term effects.
A few recent studies have shed some light on field
failure behavior of disk drives [6, 7, 9, 16, 17, 19, 20].
However, these studies have either reported on relatively
modest populations or did not monitor the disks closely
enough during deployment to provide insights into the
factors that might be associated with failures.
Disk drives are generally very reliable but they are
also very complex components. This combination
means that although they fail rarely, when they do fail,
the possible causes of failure can be numerous. As a
result, detailed studies of very large populations are the
only way to collect enough failure statistics to enable
meaningful conclusions. In this paper we present one
such study by examining the population of hard drives
under deployment within Google’s computing infrastructure.
We have built an infrastructure that collects vital information
about all Google’s systems every few minutes,
and a repository that stores these data in timeseries
format (essentially forever) for further analysis.
The information collected includes environmental factors
(such as temperatures), activity levels and many of
the Self-Monitoring Analysis and Reporting Technology
(SMART) parameters that are believed to be good indicators
of disk drive health. We mine through these data
and attempt to find evidence that corroborates or contradicts
many of the commonly held beliefs about how
various factors can affect disk drive lifetime.
Our paper is unique in that it is based on data from a
disk population size that is typically only available from
vendor warranty databases, but has the depth of deployment
visibility and detailed lifetime follow-up that only
an end-user study can provide. Our key findings are:
• Contrary to previously reported results, we found
very little correlation between failure rates and either
elevated temperature or activity levels.
• Some SMART parameters (scan errors, reallocation
counts, offline reallocation counts, and probational
counts) have a large impact on failure probability.
• Given the lack of occurrence of predictive SMART
signals on a large fraction of failed drives, it is unlikely
that an accurate predictive failure model can
be built based on these signals alone.
2 Background
In this section we describe the infrastructure that was
used to gather and process the data used in this study,
the types of disk drives included in the analysis, and information
on how they are deployed.
2.1 The System Health Infrastructure
The System Health infrastructure is a large distributed
software system that collects and stores hundreds of
attribute-value pairs from all of Google’s servers, and
provides the interface for arbitrary analysis jobs to process
that data.
The architecture of the System Health infrastructure
is shown in Figure 1. It consists of a data collection
layer, a distributed repository and an analysis framework.
The collection layer is responsible for getting information
from each of thousands of individual servers
into a centralized repository. Different flavors of collectors
exist to gather different types of data. Much of
the health information is obtained from the machines directly.
A daemon runs on every machine and gathers
local data related to that machine’s health, such as environmental
parameters, utilization information of various resources,
error indications, and configuration information.
Figure 1: Collection, storage, and analysis architecture.
It is imperative that this daemon’s resource usage
be very light, so as not to interfere with the applications.
One way to assure this is to have the machine-level collector
poll individual machines relatively infrequently
(every few minutes). Other slower changing data (such
as configuration information) and data from other existing
databases can be collected even less frequently than
that. Most notably for this study, data regarding machine
repairs and disk swaps are pulled in from another
database.
The System Health database is built upon Bigtable
[3], a distributed data repository widely used within
Google, which itself is built upon the Google File System
(GFS) [8]. Bigtable takes care of all the data layout,
compression, and access chores associated with a large
data store. It presents the abstraction of a 2-dimensional
table of data cells, with different versions over time making
up a third dimension. It is a natural fit for keeping
track of the values of different variables (columns) for
different machines (rows) over time. The System Health
database thus retains a complete time-ordered history of
the environment, utilization, error, configuration, and repair
events in each machine’s life.
Analysis programs run on top of the System Health
database, looking at information from individual machines,
or mining the data across thousands of machines.
Large-scale analysis programs are typically built upon
Google’s Mapreduce [5] framework. Mapreduce automates
the mechanisms of large-scale distributed computation
(such as work distribution, load balancing, tolerance
of failures), allowing the user to focus simply on
the algorithms that make up the heart of the computation.
The analysis pipeline used for this study consists of
a Mapreduce job written in the Sawzall language and
framework [15] to extract and clean up periodic SMART
data and repair data related to disks, followed by a pass
through R [1] for statistical analysis and final graph generation.
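The extract-and-group step of such a pipeline can be illustrated with a toy, in-memory map/reduce pass. The paper's actual pipeline is a Sawzall job running on Mapreduce followed by R; everything below (the record shape, the field names `drive_id` and `smart`) is purely illustrative, not the study's real schema.

```python
# Toy map/reduce sketch of the SMART-data extraction step.
# Assumed input: an iterable of dict records, possibly malformed.

def map_phase(raw_records):
    """Emit (drive_id, SMART record) pairs, skipping malformed input."""
    for rec in raw_records:
        if "drive_id" in rec and "smart" in rec:
            yield rec["drive_id"], rec["smart"]

def reduce_phase(mapped):
    """Group the cleaned records per drive for later statistical analysis."""
    grouped = {}
    for drive_id, smart in mapped:
        grouped.setdefault(drive_id, []).append(smart)
    return grouped
```

In a real Mapreduce deployment the map and reduce phases run distributed across thousands of machines; the two-function decomposition above is what lets the framework handle work distribution and fault tolerance on the user's behalf.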
2.2 Deployment Details
The data in this study are collected from a large number
of disk drives, deployed in several types of systems
across all of Google’s services. More than one hundred
thousand disk drives were used for all the results presented
here. The disks are a combination of serial and
parallel ATA consumer-grade hard disk drives, ranging
in speed from 5400 to 7200 rpm, and in size from 80 to
400 GB. All units in this study were put into production
in or after 2001. The population contains several models
from many of the largest disk drive manufacturers and
from at least nine different models. The data used for
this study were collected between December 2005 and
August 2006.
As is common in server-class deployments, the disks
were powered on, spinning, and generally in service for
essentially all of their recorded life. They were deployed
in rack-mounted servers and housed in professionally
managed datacenter facilities.
Before being put into production, all disk drives go
through a short burn-in process, which consists of a
combination of read/write stress tests designed to catch
many of the most common assembly, configuration, or
component-level problems. The data shown here do not
include the fall-out from this phase, but instead begin
when the systems are officially commissioned for use.
Therefore our data should be consistent with what a regular
end-user should see, since most equipment manufacturers
put their systems through similar tests before
shipment.
2.3 Data Preparation
Definition of Failure. Narrowly defining what constitutes
a failure is a difficult task in such a large operation.
Manufacturers and end-users often see different
statistics when computing failures since they use different
definitions for it. While drive manufacturers often
quote yearly failure rates below 2% [2], user studies have
seen rates as high as 6% [9]. Elerath and Shah [7] report
between 15-60% of drives considered to have failed at
the user site are found to have no defect by the manufacturers
upon returning the unit. Hughes et al. [11] observe
between 20-30% “no problem found” cases after
analyzing failed drives from their study of 3477 disks.
From an end-user’s perspective, a defective drive is
one that misbehaves in a serious or consistent enough
manner in the user’s specific deployment scenario that
it is no longer suitable for service. Since failures are
sometimes the result of a combination of components
(i.e., a particular drive with a particular controller or cable,
etc), it is no surprise that a good number of drives
that fail for a given user could be still considered operational
in a different test harness. We have observed
that phenomenon ourselves, including situations where
a drive tester consistently “green lights” a unit that invariably
fails in the field. Therefore, the most accurate
definition we can present of a failure event for our study
is: a drive is considered to have failed if it was replaced
as part of a repairs procedure. Note that this definition
implicitly excludes drives that were replaced due to an
upgrade.
Since it is not always clear when exactly a drive failed,
we consider the time of failure to be when the drive was
replaced, which can sometimes be a few days after the
observed failure event. It is also important to mention
that the parameters we use in this study were not in use
as part of the repairs diagnostics procedure at the time
that these data were collected. Therefore there is no risk
of false (forced) correlations between these signals and
repair outcomes.
Filtering. With such a large number of units monitored
over a long period of time, data integrity issues invariably
show up. Information can be lost or corrupted along
our collection pipeline. Therefore, some cleaning up of
the data is necessary. In the case of missing values, the
individual values are marked as not available and that
specific piece of data is excluded from the detailed studies.
Other records for that same drive are not discarded.
In cases where the data are clearly spurious, the entire
record for the drive is removed, under the assumption
that one piece of spurious data draws into question other
fields for the same drive. Identifying spurious data, however,
is a tricky task. Because part of the goal of studying
the data is to learn what the numbers mean, we must be
careful not to discard too much data that might appear
invalid. So we define spurious simply as negative counts
or data values that are clearly impossible. For example,
some drives have reported temperatures that were
hotter than the surface of the sun. Others have had negative
power cycles. These were deemed spurious and
removed. On the other hand, we have not filtered any
suspiciously large counts from the SMART signals, under
the hypothesis that large counts, while improbable as
raw numbers, are likely to be good indicators of something
really bad with the drive. Filtering for spurious
values reduced the sample set size by less than 0.1%.
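The filtering rule described above — drop whole records whose values are clearly impossible, but keep records that merely have missing fields — can be sketched as follows. The field names and numeric bounds here are assumptions chosen to mirror the examples in the text (temperatures "hotter than the surface of the sun", negative power cycles), not the study's actual schema.

```python
# Minimal sketch of the spurious-record filter described in the text.
# A record is spurious only when a value is clearly impossible; large
# but plausible counts are deliberately NOT filtered.

def is_spurious(record):
    """Flag a drive record containing clearly impossible values."""
    temp = record.get("temperature_c")
    if temp is not None and (temp < 0 or temp > 150):  # hypothetical sanity bound
        return True
    for field in ("power_cycles", "scan_errors", "reallocation_count"):
        value = record.get(field)
        if value is not None and value < 0:  # negative counts are impossible
            return True
    return False

def clean(records):
    """Drop entire records with spurious data; keep all others as-is."""
    return [r for r in records if not is_spurious(r)]
```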
3 Results
We now analyze the failure behavior of our fleet of disk
drives using detailed monitoring data collected over a
nine-month observation window. During this time we
recorded failure events as well as all the available environmental
and activity data and most of the SMART
parameters from the drives themselves. Failure information
spanning a much longer interval (approximately five
years) was also mined from an older repairs database.
All the results presented here were tested for their statistical
significance using the appropriate tests.
3.1 Baseline Failure Rates
Figure 2 presents the average Annualized Failure Rates
(AFR) for all drives in our study, aged zero to 5 years,
and is derived from our older repairs database. The data
are broken down by the age a drive was when it failed.
Note that this implies some overlap between the sample
sets for the 3-month, 6-month, and 1-year ages, because
a drive can reach its 3-month, 6-month and 1-year age
all within the observation period. Beyond 1-year there is
no more overlap.
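The AFR computation behind Figure 2 reduces to failures per accumulated drive-year within each age group. A minimal sketch, assuming a flattened input of (age group, days observed, failed) tuples — an invented shape, not the repairs database's real layout:

```python
from collections import defaultdict

# Sketch: annualized failure rate (AFR) per age group, i.e.
# failures divided by accumulated drive-years of observation.

def afr_by_age_group(observations):
    """observations: iterable of (age_group, days_observed, failed) tuples."""
    days = defaultdict(float)
    failures = defaultdict(int)
    for group, days_observed, failed in observations:
        days[group] += days_observed
        failures[group] += int(failed)
    # One drive-year = 365 days of recorded operation.
    return {g: failures[g] / (days[g] / 365.0) for g in days}
```

Note that, as the text explains, a single drive can contribute observation time to several of the younger age groups within one observation window, which is why the 3-month, 6-month, and 1-year samples overlap.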
While it may be tempting to read this graph as strictly
failure rate with drive age, drive model factors are
strongly mixed into these data as well. We tend to source
a particular drive model only for a limited time (as new,
more cost-effective models are constantly being introduced),
so it is often the case that when we look at sets
of drives of different ages we are also looking at a very
different mix of models. Consequently, these data are
not directly useful in understanding the effects of disk
age on failure rates (the exception being the first three
data points, which are dominated by a relatively stable
mix of disk drive models). The graph is nevertheless a
good way to provide a baseline characterization of failures
across our population. It is also useful for later
studies in the paper, where we can judge how consistent
the impact of a given parameter is across these diverse
drive model groups. A consistent and noticeable impact
across all groups indicates strongly that the signal being
measured has a fundamentally powerful correlation with
failures, given that it is observed across widely varying
ages and models.
The observed range of AFRs (see Figure 2) varies
from 1.7%, for drives that were in their first year of operation,
to over 8.6%, observed in the 3-year-old population.
Figure 2: Annualized failure rates broken down by age groups
The higher baseline AFR for 3- and 4-year-old
drives is more strongly influenced by the underlying reliability
of the particular models in that vintage than by
disk drive aging effects. It is interesting to note that our
3-month, 6-month and 1-year data points do seem to
indicate a noticeable influence of infant mortality phenomena,
with 1-year AFR dropping significantly from
the AFR observed in the first three months.
3.2 Manufacturers, Models, and Vintages
Failure rates are known to be highly correlated with drive
models, manufacturers and vintages [18]. Our results do
not contradict this fact. For example, Figure 2 changes
significantly when we normalize failure rates per each
drive model. Most age-related results are impacted by
drive vintages. However, in this paper, we do not show a
breakdown of drives per manufacturer, model, or vintage
due to the proprietary nature of these data.
Interestingly, this does not change our conclusions. In
contrast to age-related results, we note that all results
shown in the rest of the paper are not affected significantly
by the population mix. None of our SMART data
results change significantly when normalized by drive
model. The only exception is seek error rate, which is
dependent on one specific drive manufacturer, as we discuss
in section 3.5.5.
3.3 Utilization
The literature generally refers to utilization metrics by
employing the term duty cycle which unfortunately has
no consistent and precise definition, but can be roughly
characterized as the fraction of time a drive is active out
of the total powered-on time. What is widely reported in
the literature is that higher duty cycles affect disk drives
negatively [4, 21].
It is difficult for us to arrive at a meaningful numerical
utilization metric given that our measurements do
not provide enough detail to derive what 100% utilization
might be for any given disk model. We choose instead
to measure utilization in terms of weekly averages
of read/write bandwidth per drive. We categorize utilization
in three levels: low, medium and high, corresponding
respectively to the lowest 25th percentile, 50-75th
percentiles and top 75th percentile. This categorization
is performed for each drive model, since the maximum
bandwidths have significant variability across drive families.
We note that using the number of I/O operations and
bytes transferred as utilization metrics provides very similar
results. Figure 3 shows the impact of utilization on
AFR across the different age groups.
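Because maximum bandwidth varies widely across drive families, the bucketing must be done per model, not globally. A minimal sketch of one plausible reading of that categorization (below the 25th percentile = low, above the 75th = high, the rest medium); the input shape and the exact percentile interpolation are assumptions:

```python
# Sketch: label each drive low/medium/high utilization relative to
# its own model's weekly-bandwidth distribution.
# Assumed input: {model: {drive_id: weekly_avg_bandwidth}}.

def bucket_utilization(weekly_bandwidth_by_model):
    labels = {}
    for model, drives in weekly_bandwidth_by_model.items():
        values = sorted(drives.values())
        # Crude nearest-rank percentiles; a real analysis would use a
        # proper quantile estimator.
        q25 = values[int(0.25 * (len(values) - 1))]
        q75 = values[int(0.75 * (len(values) - 1))]
        for drive_id, bandwidth in drives.items():
            if bandwidth <= q25:
                labels[drive_id] = "low"
            elif bandwidth <= q75:
                labels[drive_id] = "medium"
            else:
                labels[drive_id] = "high"
    return labels
```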
Overall, we expected to notice a very strong and consistent
correlation between high utilization and higher
failure rates. However our results appear to paint a more
complex picture. First, only very young and very old
age groups appear to show the expected behavior. After
the first year, the AFR of high utilization drives is
at most moderately higher than that of low utilization
drives. The three-year group in fact appears to have the
opposite of the expected behavior, with low utilization
drives having slightly higher failure rates than high utilization
ones.
One possible explanation for this behavior is the survival
of the fittest theory. It is possible that the failure
modes that are associated with higher utilization are
more prominent early in the drive’s lifetime. If that is the
case, the drives that survive the infant mortality phase
are the least susceptible to that failure mode, and result
in a population that is more robust with respect to variations
in utilization levels.
Another possible explanation is that previous observations
of high correlation between utilization and failures
has been based on extrapolations from manufacturers’
accelerated life experiments. Those experiments are
likely to better model early life failure characteristics,
and as such they agree with the trend we observe for the
young age groups. It is possible, however, that longer
term population studies could uncover a less pronounced
effect later in a drive’s lifetime.
When we look at these results across individual models
we again see a complex pattern, with varying patterns
of failure behavior across the three utilization levels.
Taken as a whole, our data indicate a much weaker
correlation between utilization levels and failures than
previous work has suggested.
Figure 3: Utilization AFR
3.4 Temperature
Temperature is often quoted as the most important environmental
factor affecting disk drive reliability. Previous
studies have indicated that temperature deltas as low as
15C can nearly double disk drive failure rates [4]. Here
we take temperature readings from the SMART records
every few minutes during the entire 9-month window
of observation and try to understand the correlation between
temperature levels and failure rates.
We have aggregated temperature readings in several
different ways, including averages, maxima, fraction of
time spent above a given temperature value, number of
times a temperature threshold is crossed, and last temperature
before failure. Here we report data on averages
and note that other aggregation forms have shown similar
trends and therefore suggest the same conclusions.
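The aggregation forms the paper lists — averages, maxima, fraction of time above a threshold, number of threshold crossings, and last temperature before failure — can be sketched for a single drive's reading series. The tuple shape and the 45 °C threshold below are illustrative assumptions:

```python
# Sketch of the temperature aggregations described in the text,
# for one drive's time-ordered (timestamp, temperature_c) readings.

def temperature_aggregates(readings, threshold=45.0):
    temps = [t for _, t in readings]
    n = len(temps)
    return {
        "average": sum(temps) / n,
        "maximum": max(temps),
        # Fraction of samples spent above the given temperature.
        "fraction_above": sum(t > threshold for t in temps) / n,
        # Count transitions across the threshold between samples.
        "threshold_crossings": sum(
            1
            for a, b in zip(temps, temps[1:])
            if (a <= threshold) != (b <= threshold)
        ),
        # Last reading, i.e. "last temperature before failure"
        # when the series ends at the failure event.
        "last_before_failure": temps[-1],
    }
```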
We first look at the correlation between average temperature
during the observation period and failure. Figure
4 shows the distribution of drives with average temperature
in increments of one degree and the corresponding
annualized failure rates. The figure shows that failures
do not increase when the average temperature increases.
In fact, there is a clear trend showing that lower
temperatures are associated with higher failure rates.
Only at very high temperatures is there a slight reversal
of this trend.
Figure 5 looks at the average temperatures for different
age groups. The distributions are in sync with Figure
4 showing a mostly flat failure rate at mid-range temperatures
and a modest increase at the low end of the temperature
distribution. What stands out are the 3 and 4-
year old drives, where the trend for higher failures with
higher temperature is much more constant and also more
pronounced.
Figure 4: Distribution of average temperatures and failure rates.
Figure 5: AFR for average drive temperature.
Overall our experiments can confirm previously reported
temperature effects only for the high end of our
temperature range and especially for older drives. In the
lower and middle temperature ranges, higher temperatures
are not associated with higher failure rates. This is
a fairly surprising result, which could indicate that datacenter
or server designers have more freedom than previously
thought when setting operating temperatures for
equipment that contains disk drives. We can conclude
that at moderate temperature ranges it is likely that there
are other effects which affect failure rates much more
strongly than temperatures do.
3.5 SMART Data Analysis
We now look at the various self-monitoring signals that
are available from virtually all of our disk drives through
the SMART standard interface. Our analysis indicates
that some signals appear to be more relevant to the study
of failures than others. We first look at those in detail,
and then list a summary of our findings for the remaining
ones. At the end of this section we discuss our results
and reason about the usefulness of SMART parameters
in obtaining predictive models for individual disk drive
failures.
We present results in three forms. First we compare
the AFR of drives with zero and non-zero counts for a
given parameter, broken down by the same age groups
as in figures 2 and 3. We also find it useful to plot the
probability of survival of drives over the nine-month observation
window for different ranges of parameter values.
Finally, in addition to the graphs, we devise a single
metric that could relay how relevant the values of
a given SMART parameter are in predicting imminent
failures. To that end, for each SMART parameter we
look for thresholds that increased the probability of failure
in the next 60 days by at least a factor of 10 with
respect to drives that have zero counts for that parameter.
We report such Critical Thresholds whenever we are
able to find them with high confidence (> 95%).
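The critical-threshold search can be sketched as a scan over candidate parameter values, comparing the 60-day failure rate of drives at or above each candidate against the rate for zero-count drives. This is a simplification under assumed inputs: the real analysis also requires the factor-of-10 result to hold with > 95% confidence, which is omitted here.

```python
# Sketch: find the smallest SMART-parameter threshold at which drives
# are >= `factor` times more likely to fail within 60 days than
# zero-count drives. Input shape is assumed:
# a list of (smart_count, failed_within_60_days) pairs.

def find_critical_threshold(drives, factor=10.0):
    zero_group = [failed for count, failed in drives if count == 0]
    base_rate = sum(zero_group) / len(zero_group)
    for threshold in sorted({c for c, _ in drives if c > 0}):
        group = [failed for count, failed in drives if count >= threshold]
        rate = sum(group) / len(group)
        # Guard against a zero baseline, where any ratio is undefined.
        if base_rate > 0 and rate >= factor * base_rate:
            return threshold
    return None
```

For scan errors, reallocations, offline reallocations, and probational counts, the paper finds this threshold to be one: the very first event already multiplies the 60-day failure probability by well over a factor of 10.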
3.5.1 Scan Errors
Drives typically scan the disk surface in the background
and report errors as they discover them. Large scan error
counts can be indicative of surface defects, and therefore
are believed to be indicative of lower reliability. In our
population, fewer than 2% of the drives show scan errors
and they are nearly uniformly spread across various disk
models.
Figure 6 shows the AFR values of two groups of
drives, those without scan errors and those with one or
more. We plot bars across all age groups in which we
have statistically significant data. We find that the group
of drives with scan errors are ten times more likely to fail
than the group with no errors. This effect is also noticed
when we further break down the groups by disk model.
From Figure 8 we see a drastic and quick decrease in
survival probability after the first scan error (left graph).
A little over 70% of the drives survive the first 8 months
after their first scan error. The dashed lines represent the
95% confidence interval. The middle plot in Figure 8
separates the population in four age groups (in months),
and shows an effect that is not visible in the AFR plots. It
appears that scan errors affect the survival probability of
young drives more dramatically very soon after the first
scan error occurs, but after the first month the curve flattens
out. Older drives, however, continue to see a steady
decline in survival probability throughout the 8-month
period. This behavior could be another manifestation of
infant mortality phenomenon. The right graph in figure 8
looks at the effect of multiple scan errors. While drives
with one error are more likely to fail than those with
none, drives with multiple errors fail even more quickly.
Figure 6: AFR for scan errors. Figure 7: AFR for reallocation counts.
Figure 8: Impact of scan errors on survival probability. Left figure shows aggregate survival probability for all drives after first
scan error. Middle figure breaks down survival probability per drive ages in months. Right figure breaks down drives by their
number of scan errors.
The critical threshold analysis confirms what the
charts visually imply: the critical threshold for scan errors
is one. After the first scan error, drives are 39 times
more likely to fail within 60 days than drives without
scan errors.
3.5.2 Reallocation Counts
When the drive’s logic believes that a sector is damaged
(typically as a result of recurring soft errors or a hard error)
it can remap the faulty sector number to a new physical
sector drawn from a pool of spares. Reallocation
counts reflect the number of times this has happened,
and is seen as an indication of drive surface wear. About
9% of our population has reallocation counts greater
than zero. Although some of our drive models show
higher absolute values than others, the trends we observe
are similar across all models.
As with scan errors, the presence of reallocations
seems to have a consistent impact on AFR for all age
groups (Figure 7), even if slightly less pronounced.
Drives with one or more reallocations do fail more often
than those with none. The average impact on AFR
appears to be between a factor of 3-6x.
Figure 11 shows the survival probability after the first
reallocation. We truncate the graph to 8.5 months, due
to a drastic decrease in the confidence levels after that
point. In general, as the left graph shows, about 85% of the
drives survive past 8 months after the first reallocation.
The effect is more pronounced (middle graph) for drives
in the age ranges [10,20) and [20, 60] months, while
newer drives in the range [0,5) months suffer more than
their next generation. This could again be due to infant
mortality effects, although it appears to be less drastic in
this case than for scan errors.
After their first reallocation, drives are over 14 times
more likely to fail within 60 days than drives without
reallocation counts, making the critical threshold for this
parameter also one.
Figure 9: AFR for offline reallocation count. Figure 10: AFR for probational count.
Figure 11: Impact of reallocation count values on survival probability. Left figure shows aggregate survival probability for all
drives after first reallocation. Middle figure breaks down survival probability per drive ages in months. Right figure breaks down
drives by their number of reallocations.
3.5.3 Offline Reallocations
Offline reallocations are defined as a subset of the reallocation
counts studied previously, in which only reallocated
sectors found during background scrubbing are
counted. In other words, it should exclude sectors that
are reallocated as a result of errors found during actual
I/O operations. Although this definition mostly holds,
we see evidence that certain disk models do not implement
this definition. For instance, some models show
more offline reallocations than total reallocations. Since
the impact of offline reallocations appears to be significant
and not identical to that of total reallocations, we
decided to present it separately (Figure 9). About 4% of
our population shows non-zero values for offline reallocations,
and they tend to be concentrated on a particular
subset of drive models.
Overall, the effects on survival probability of offline
reallocation seem to be more drastic than those of total
reallocations, as seen in Figure 12 (as before, some
curves are clipped at 8 months because our data for those
points were not within high confidence intervals). Drives
in the older age groups appear to be more highly affected
by it, although we are unable to attribute this effect to
age given the different model mixes in the various age
groups.
After the first offline reallocation, drives have over
21 times higher chances of failure within 60 days than
drives without offline reallocations; an effect that is
again more drastic than total reallocations.
Our data suggest that, although offline reallocations
could be an important parameter affecting failures, it is
particularly important to interpret trends in these values
within specific models, since there is some evidence that
different drive models may classify reallocations differently.
3.5.4 Probational Counts
Disk drives put suspect bad sectors “on probation” until
they either fail permanently and are reallocated or
continue to work without problems. Probational counts,
Figure 12: Impact of offline reallocation on survival probability. Left figure shows aggregate survival probability for all drives
after first offline reallocation. Middle figure breaks down survival probability per drive ages in months. Right figure breaks down
drives by their number of offline reallocations.
Figure 13: Impact of probational count values on survival probability. Left figure shows aggregate survival probability for all
drives after first probational count. Middle figure breaks down survival probability per drive ages in months. Right figure breaks
down drives by their number of probational counts.
therefore, can be seen as a softer error indication. It
could provide earlier warning of possible problems but
might also be a weaker signal, in that sectors on probation
may indeed never be reallocated. About 2% of
our drives had non-zero probational count values. We
note that this number is lower than both online and offline
reallocation counts, likely indicating that sectors
may be removed from probation after further observation
of their behavior. Once more, the distribution of drives with non-zero probational counts is somewhat skewed towards a subset of disk drive models.
Figures 10 and 13 show that probational count trends are generally similar to those observed for offline reallocations, with the effect of age group being somewhat less pronounced.
The critical threshold for probational counts
is also one: after the first event, drives are 16 times more
likely to fail within 60 days than drives with zero probational
counts.
3.5.5 Miscellaneous Signals
In addition to the SMART parameters described in the
previous sections, which we have found to most closely
impact failure rates, we have also studied several other
parameters from the SMART set as well as other environmental
factors. Here we briefly mention our relevant
findings for some of those parameters.
Seek Errors. Seek errors occur when a disk drive fails to properly track a sector and needs to wait for another revolution to read from or write to the sector. Drives report seek errors as a rate, which is meant to be used in combination with
model-specific thresholds. When examining our population,
we find that seek errors are widespread within
drives of one manufacturer only, while others are more conservative in reporting this kind of error. For this one
manufacturer, the trend in seek errors is not clear, changing
from one vintage to another. For other manufacturers,
there is no correlation between failure rates and seek
errors.
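Because the reported rates are only meaningful against model-specific thresholds, one hedged way to compare seek errors across a mixed population is to standardize each drive's rate within its own model. A sketch under that assumption (the function name and dictionary layout are hypothetical):

```python
from statistics import mean, stdev

def per_model_zscores(rates_by_model):
    """rates_by_model: {model_name: [seek_error_rate, ...]}.
    Replaces each rate with its z-score within its own model, so that
    drives are compared against their model's distribution instead of
    a single global threshold."""
    scores = {}
    for model, rates in rates_by_model.items():
        mu = mean(rates)
        sd = stdev(rates) if len(rates) > 1 else 0.0  # stdev needs >= 2 points
        scores[model] = [0.0 if sd == 0 else (r - mu) / sd for r in rates]
    return scores
```

Normalizing per model also sidesteps the manufacturer-reporting differences noted above, since each model is only measured against itself.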
CRC Errors. Cyclic redundancy check (CRC) errors
are detected during data transmission between the physical
media and the interface. Although we do observe
some correlation between higher CRC counts and failures,
those effects are somewhat less pronounced. CRC errors are more indicative of problems with cables and connectors than of drive failures themselves. About 2% of our population had
CRC errors.
Power Cycles. The power cycles indicator counts the
number of times a drive is powered up and down. In
a server-class deployment, in which drives are powered
continuously, we do not expect to reach high enough
power cycle counts to see any effects on failure rates.
Our results find that for drives aged up to two years this is true: there is no significant correlation between failures and high power cycle counts. But for drives 3 years
and older, higher power cycle counts can increase the
absolute failure rate by over 2%. We believe this is due
more to our population mix than to aging effects. Moreover,
this correlation could be the effect (not the cause)
of troubled machines that require many repair iterations
and thus many power cycles to be fixed.
Calibration Retries. We were unable to reach a consistent
and clear definition of this SMART parameter from public documents or from consultations with some of the disk manufacturers. Nevertheless, our observations
do not indicate that this is a particularly useful parameter
for the goals of this study. Under 0.3% of our drives
have calibration retries, and of that group only about 2%
have failed, making this a very weak and imprecise signal
when compared with other SMART parameters.
Spin Retries. This parameter counts the number of retries when the
drive is attempting to spin up. We did not register a single
count within our entire population.
Power-on Hours. Although we do not dispute that
power-on hours might have an effect on drive lifetime,
it happens that in our deployment the age of the drive is
an excellent approximation for that parameter, given that
our drives remain powered on for most of their lifetime.
Vibration. This is not a parameter that is part of the
SMART set, but it is one that is of general concern in designing
drive enclosures as most manufacturers describe
how vibration can affect both performance and reliability
of disk drives. Unfortunately we do not have sensor
information to measure this effect directly for drives in
service. We attempted to indirectly infer vibration effects
by considering the differences in failure rates between
systems with a single drive and those with multiple
drives, but those experiments were not controlled
enough for other possible factors to allow us to reach
any conclusions.
3.5.6 Predictive Power of SMART Parameters
Given how strongly correlated some SMART parameters
were found to be with higher failure rates, we were
hopeful that accurate predictive failure models based on
SMART signals could be created. Predictive models are
very useful in that they can reduce service disruption
due to failed components and allow for more efficient
scheduled maintenance processes to replace the less efficient (and reactive) repair procedures. In fact, one of
the main motivations for SMART was to provide enough
insight into disk drive behavior to enable such models to
be built.
After our initial attempts to derive such models
yielded relatively unimpressive results, we turned to the
question of what might be the upper bound of the accuracy
of any model based solely on SMART parameters.
Our results are surprising, if not somewhat disappointing.
Out of all failed drives, over 56% of them have no
count in any of the four strong SMART signals, namely
scan errors, reallocation count, offline reallocation, and
probational count. In other words, models based only
on those signals can never predict more than half of the
failed drives. Figure 14 shows that even when we add
all remaining SMART parameters (except temperature)
we still find that over 36% of all failed drives had zero
counts on all variables. This set of variables includes seek error rates, which we have observed to be widespread in our population (over 72% of our drives report them), further shrinking the set of failed drives without any error counts.
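The coverage argument above reduces to a simple count over failed drives, sketched below under an assumed record layout (the four signal key names merely label the parameters listed in the text):

```python
# Fraction of failed drives with zero counts in every "strong" signal,
# i.e. the failures no model built on those signals alone could flag.
STRONG_SIGNALS = ('scan_errors', 'reallocation_count',
                  'offline_reallocations', 'probational_count')

def silent_failure_fraction(failed_drives, signals=STRONG_SIGNALS):
    """failed_drives: iterable of dicts mapping signal name -> count;
    missing keys are treated as zero."""
    silent = sum(all(d.get(s, 0) == 0 for s in signals)
                 for d in failed_drives)
    return silent / len(failed_drives)
```

This fraction is exactly the lower bound on misses: any predictor restricted to these signals can never catch that share of failures.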
It is difficult to add temperature to this analysis since, despite its being reported as part of SMART, there are no
crisp thresholds that directly indicate errors. However,
if we arbitrarily assume that spending more than 50%
of the observed time above 40C is an indication of a possible
problem, and add those drives to the set of predictable
failures, we are still left with about 36% of all failed drives showing no failure signals at all. Actually useful models, which need to have small false-positive rates, are in fact likely to do much worse than these limits might suggest.
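The arbitrary temperature criterion used above (more than 50% of observed time above 40C) can be written as a small predicate; the cutoff values restate the text's assumption and are not SMART-defined thresholds:

```python
def runs_hot(temp_samples_c, threshold_c=40.0, time_fraction=0.5):
    """True if strictly more than `time_fraction` of the observed
    temperature samples (in Celsius) exceed `threshold_c`."""
    hot = sum(t > threshold_c for t in temp_samples_c)
    return hot / len(temp_samples_c) > time_fraction
```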
We conclude that it is unlikely that SMART data alone
can be effectively used to build models that predict failures
of individual drives. SMART parameters still appear
to be useful in reasoning about the aggregate reliability
of large disk populations, which is still very important
for logistics and supply-chain planning. It is possible,
however, that models that use parameters beyond
those provided by SMART could achieve significantly
better accuracies. For example, performance anomalies
and other application or operating system signals could
be useful in conjunction with SMART data to create
more powerful models. We plan to explore this possibility
in our future work.
Figure 14: Percentage of failed drives with SMART errors.
4 Related Work
Previous studies in this area generally fall into two categories:
vendor (disk drive or storage appliance) technical
papers and user experience studies. Disk vendor studies provide valuable insight into the electromechanical characteristics of disks, as well as both model-based and experimental data that suggest how several
environmental factors and usage activities can affect device
lifetime. Yang and Sun [21] and Cole [4] describe
the processes and experimental setup used by Quantum
and Seagate to test new units and the models that attempt
to make long-term reliability predictions based on accelerated
life tests of small populations. Power-on hours, duty cycle, and temperature are identified as the key deployment
parameters that impact failure rates, each of them
having the potential to double failure rates when going
from nominal to extreme values. For example, Cole
presents thermal de-rating models showing that MTBF
could degrade by as much as 50% when going from operating
temperatures of 30C to 40C. Cole’s report also
presents yearly failure rates from Seagate’s warranty
database, indicating a linear decrease in annual failure
rates from 1.2% in the first year to 0.39% in the third
(and last year of record). In our study, we did not find
much correlation between failure rate and either elevated
temperature or utilization. This is the most surprising result of our study. Our annualized failure rates were generally
higher than those reported by vendors, and more consistent
with other user experience studies.
Shah and Elerath have written several papers based
on the behavior of disk drives inside Network Appliance
storage products [6, 7, 19]. They use a reliability
database that includes field failure statistics as well as
support logs, and their position as an appliance vendor affords them more control and visibility into actual deployments
than a typical disk drive vendor might have.
Although they do not report directly on the correlation
between SMART parameters or environmental factors
and failures (possibly due to confidentiality concerns), their
work is useful in enabling a qualitative understanding
of factors that affect disk drive reliability. For example,
they comment that end-user failure rates can be as
much as ten times higher than what the drive manufacturer
might expect [7]; they report in [6] a strong experimental
correlation between number of heads and higher
failure rates (an effect that is also predicted by the models
in [4]); and they observe that different failure mechanisms
are at play at different phases of a drive lifetime
[19]. Generally, our findings are in line with these results.
User experience studies may lack the depth of insight
into the device inner workings that is possible in manufacturer
reports, but they are essential in understanding
device behavior in real-world deployments. Unfortunately,
there are very few such studies to date, probably
due to the large number of devices needed to observe
statistically significant results and the complex infrastructure
required to track failures and their contributing
factors.
Talagala and Patterson [20] perform a detailed error
analysis of 368 SCSI disk drives over an eighteen-month period, reporting a failure rate of 1.9%. Results
on a larger number of desktop-class ATA drives
under deployment at the Internet Archive are presented
by Schwarz et al. [17]. They report a 2% failure rate
for a population of 2489 disks during 2005, while mentioning
that replacement rates have been as high as 6%
in the past. Gray and van Ingen [9] cite observed failure rates ranging from 3.3% to 6% in two large web properties
with 22,400 and 15,805 disks respectively. A recent
study by Schroeder and Gibson [16] helps shed light on the statistical properties of disk drive failures. The
study uses failure data from several large-scale deployments,
including a large number of SATA drives. They
report a significant overestimation of mean time to failure
by manufacturers and a lack of infant mortality effects.
None of these user studies have attempted to correlate
failures with SMART parameters or other environmental
factors.
We are aware of two groups that have attempted to correlate SMART parameters with failure statistics: Hughes et al. [11, 13, 14] and Hamerly and Elkan [10].
The largest populations studied by these groups were 3744 and 1934 drives, respectively, and they derive failure models that achieve predictive rates as high as 30%, at false-positive rates of about 0.2% (that false-positive rate corresponded to a number of drives between 20–43% of the drives that actually failed in their studies). Hughes et al. also cite an annualized failure rate of 4–6%, based on
their 2-3 month long experiment which appears to use
stress test logs provided by a disk manufacturer.
Our study takes a next step towards a better understanding
of disk drive failure characteristics by essentially
combining some of the best characteristics of studies
from vendor database analysis, namely population
size, with the kind of visibility into a real-world deployment
that is only possible with end-user data.
5 Conclusions
In this study we report on the failure characteristics of
consumer-grade disk drives. To our knowledge, the
study is unprecedented in that it uses a much larger
population size than has been previously reported and
presents a comprehensive analysis of the correlation between
failures and several parameters that are believed to
affect disk lifetime. Such analysis is made possible by
a new highly parallel health data collection and analysis
infrastructure, and by the sheer size of our computing
deployment.
One of our key findings has been the lack of a consistent
pattern of higher failure rates for higher temperature
drives or for those drives at higher utilization levels.
Such correlations have been repeatedly highlighted
by previous studies, but we are unable to confirm them
by observing our population. Although our data do not allow us to conclude that there is no such correlation, they provide strong evidence to suggest that other effects
may be more prominent in affecting disk drive reliability
in the context of a professionally managed data center
deployment.
Our results confirm the findings of previous smaller
population studies that suggest that some of the SMART
parameters are well-correlated with higher failure probabilities.
We find, for example, that after their first scan
error, drives are 39 times more likely to fail within 60
days than drives with no such errors. First errors in reallocations,
offline reallocations, and probational counts
are also strongly correlated to higher failure probabilities.
Despite those strong correlations, we find that
failure prediction models based on SMART parameters
alone are likely to be severely limited in their prediction
accuracy, given that a large fraction of our failed drives
have shown no SMART error signals whatsoever. This
result suggests that SMART models are more useful in
predicting trends for large aggregate populations than for
individual components. It also suggests that powerful
predictive models need to make use of signals beyond
those provided by SMART.
Acknowledgments
We wish to acknowledge the contribution of numerous
Google colleagues, particularly in the Platforms and
Hardware Operations teams, who made this study possible,
directly or indirectly; among them: Xiaobo Fan,
Greg Slaughter, Don Yang, Jeremy Kubica, Jim Winget,
Caio Villela, Justin Moore, Henry Green, Taliver Heath,
and Walt Drummond. We are also thankful to our shepherd, Mary Baker, for comments and guidance. A special
thanks to Urs Hölzle for his extensive feedback on our
drafts.
References
[1] The R Project for Statistical Computing. http://www.r-project.org.
[2] Dave Anderson, Jim Dykes, and Erik Riedel. More
than an interface – SCSI vs. ATA. In Proceedings of
the 2nd USENIX Conference on File and Storage
Technologies (FAST’03), pages 245 – 257, February
2003.
[3] Fay Chang, Jeffrey Dean, Sanjay Ghemawat, Wilson
C. Hsieh, Deborah A. Wallach, Mike Burrows,
Tushar Chandra, Andrew Fikes, and Robert E.
Gruber. Bigtable: A distributed storage system for
structured data. In Proceedings of the 7th USENIX
Symposium on Operating Systems Design and Implementation
(OSDI’06), November 2006.
[4] Gerry Cole. Estimating drive reliability in desktop
computers and consumer electronics systems. Seagate
Technology Paper TP-338.1, November 2000.
[5] Jeffrey Dean and Sanjay Ghemawat. MapReduce:
Simplified data processing on large clusters.
In Proceedings of the 6th USENIX Symposium
on Operating Systems Design and Implementation
(OSDI’04), pages 137 – 150, December 2004.
[6] Jon G. Elerath and Sandeep Shah. Disk drive reliability
case study: Dependence upon fly-height
and quantity of heads. In Proceedings of the Annual
Symposium on Reliability and Maintainability,
January 2003.
[7] Jon G. Elerath and Sandeep Shah. Server class
disk drives: How reliable are they? In Proceedings
of the Annual Symposium on Reliability and
Maintainability, pages 151 – 156, January 2004.
[8] Sanjay Ghemawat, Howard Gobioff, and Shun-Tak
Leung. The Google file system. In Proceedings of
the 19th ACM Symposium on Operating Systems
Principles, pages 29 – 43, December 2003.
[9] Jim Gray and Catherine van Ingen. Empirical
measurements of disk failure rates and error rates.
Technical Report MSR-TR-2005-166, December
2005.
[10] Greg Hamerly and Charles Elkan. Bayesian approaches
to failure prediction for disk drives. In
Proceedings of the Eighteenth International Conference
on Machine Learning (ICML’01), June
2001.
[11] Gordon F. Hughes, Joseph F. Murray, Kenneth
Kreutz-Delgado, and Charles Elkan. Improved
disk-drive failure warnings. IEEE Transactions on
Reliability, 51(3):350 – 357, September 2002.
[12] Peter Lyman and Hal R. Varian.
How much information? October
2003. http://www2.sims.berkeley.edu/
research/projects/how-much-info-2003/index.htm.
[13] Joseph F. Murray, Gordon F. Hughes, and Kenneth
Kreutz-Delgado. Hard drive failure prediction using
non-parametric statistical methods. Proceedings
of ICANN/ICONIP, June 2003.
[14] Joseph F. Murray, Gordon F. Hughes, and Kenneth
Kreutz-Delgado. Machine learning methods
for predicting failures in hard drives: A multiple-instance
application. J. Mach. Learn. Res., 6:783–
816, 2005.
[15] Rob Pike, Sean Dorward, Robert Griesemer, and
Sean Quinlan. Interpreting the data: Parallel analysis
with sawzall. Scientific Programming Journal,
Special Issue on Grids and Worldwide Computing
Programming Models and Infrastructure,
13(4):277 – 298, 2005.
[16] Bianca Schroeder and Garth A. Gibson. Disk
failures in the real world: What does an MTTF of
1,000,000 hours mean to you? In Proceedings of
the 5th USENIX Conference on File and Storage
Technologies (FAST), February 2007.
[17] Thomas Schwarz, Mary Baker, Steven Bassi,
Bruce Baumgart, Wayne Flagg, Catherine van
Ingen, Kobus Joste, Mark Manasse, and Mehul
Shah. Disk failure investigations at the internet
archive. 14th NASA Goddard, 23rd IEEE Conference
on Mass Storage Systems and Technologies,
May 2006.
[18] Sandeep Shah and Jon G. Elerath. Disk drive vintage
and its effect on reliability. In Proceedings
of the Annual Symposium on Reliability and Maintainability,
pages 163 – 167, January 2004.
[19] Sandeep Shah and Jon G. Elerath. Reliability analysis
of disk drive failure mechanisms. In Proceedings
of the Annual Symposium on Reliability and
Maintainability, pages 226 – 231, January 2005.
[20] Nisha Talagala and David Patterson. An analysis
of error behavior in a large storage system. Technical
Report CSD-99-1042, University of California,
Berkeley, February 1999.
[21] Jimmy Yang and Feng-Bin Sun. A comprehensive
review of hard-disk drive reliability. In Proceedings
of the Annual Symposium on Reliability and
Maintainability, pages 403 – 409, January 1999.
Anyhow, my old friend had parents who loved to entertain. They had us all round one evening. My friend was from a quite political family; they had been big in the Liberal Party. His parents were personal friends of David Steel, the Liberal (later to be party leader) who brought the private member’s bill that allowed abortion in the UK. They were horrified at what this girl was planning to do. The mother had lost a baby to a late miscarriage and hated the idea of killing a baby. They were also, however, liberals. They had to respect the slut’s right to kill her child. So they did not say one word against her, but they were always saying things like “Are you sure you would like another drink / another cigarette? You know you are pregnant.” The girl was happily standing up for her rights, and the older couple were trying to gently talk around the subject of the morality of abortion, constantly coming back to “well, it is a woman’s right and I will never stand in the way of that”.
Finally, when the girl had just ignored everything, they said, “You know, we have sat around dinner tables with David Steel talking about his private member’s bill, and he never saw it coming to this. He brought that bill to stop women dying of backstreet abortions or suicide. He never imagined it would be more than a few hundred cases a year: women who were psychologically damaged and who should not be forced to be mothers.”
The 1967 act provided a defence for doctors performing an abortion on any of the following grounds:
To save the woman’s life
To prevent grave permanent injury to the woman’s physical or mental health
Under 24 weeks to avoid injury to the physical or mental health of the woman
Under 24 weeks to avoid injury to the physical or mental health of the existing child(ren)
If the child was likely to be severely physically or mentally handicapped
Two doctors are required to make this judgement in good faith. None of that was with a view to hundreds of thousands of sluts annually clearing out unwanted garbage on demand. The trouble with the way the law is made is that it can be made for special cases that then become widened to include anyone.
I will never forget that old couple contemplating the ruin that their misguided liberalism had wrought, the cruelty they had unleashed by attempting to stop cruelty. They were trying to reconcile their liberal principles with the mass murder they had unleashed.
A woman is like fire -fun to play with, can warm you through and cook your food, needs constant feeding, can burn you and consume all you own
2018-09-03 at 1:57 PM#853562In reply to: Opinion as Fact
These teachers are taught to do this; it has been going on since I went to school in the early 90s, from grade school all the way up to college/university. It started ramping up in the early 2000s and hasn’t let up since; it has only gotten worse. Here is an example, found here, of how they teach in their classrooms.
Even when caught doing it, the teachers don’t care. This one was asked to resign, in Israel of all places, and he refused. Link.
Teachers caught indoctrinating children to Antifa violence.
Link:
School that forced children to write essays about Muslims being ‘excluded’ apologises after outraged families accuse teachers of ‘indoctrinating’ students.
-Leumeah High School in Sydney is testing Year 12 students on ‘Muslim exclusion’
-Society and Culture students assessed on ‘barriers Muslims face’ in Australia
-An older brother of a student described the task as ‘indoctrinating the young’
-Department of Education spokesman said the question would now be changed
There are a variety of teaching methods that teachers employ, and a list can be found here.
If you read through this article you will find many methods, but the one that the teachers who are paid to indoctrinate employ the most is the Direct Instruction (Low Tech) approach.
Direct instruction is the general term that refers to the traditional teaching strategy that relies on explicit teaching through lectures and teacher-led demonstrations.
In this method of instruction, the teacher might play one or all of the following roles:

As the primary teaching strategy under the teacher-centered approach, direct instruction utilizes passive learning, or the idea that students can learn what they need to through listening and watching very precise instruction. Teachers and professors act as the sole supplier of knowledge, and under the direct instruction model, teachers often utilize systematic, scripted lesson plans. Direct instruction programs include exactly what the teacher should say, and activities that students should complete, for every minute of the lesson.
Because it does not include student preferences or give them opportunities for hands-on or alternative types of learning, direct instruction is extremely teacher-centered. It’s also fairly low-tech, often relying on the use of textbooks and workbooks instead of computers and 1:1 devices.
As you can see, there is no room for discussion as to what is being taught, as the teachers themselves are taught to stick to the point and not deviate from the lesson being taught, regardless of whether it is wrong. Textbooks that were once full of factual information are being rewritten to fit the new false narrative.
Inquiry-based Learning (High Tech)
Based on student investigation and hands-on projects, inquiry-based learning is a teaching method that casts a teacher as a supportive figure who provides guidance and support for students throughout their learning process, rather than a sole authority figure.
In this method of instruction, the teacher might play one or all of the following roles:

Teachers encourage students to ask questions and consider what they want to know about the world around them. Students then research their questions, find information and sources that explain key concepts and solve problems they may encounter along the way. Findings might be presented as self-made videos, websites, or formal presentations of research results.
Inquiry-based learning falls under the student-centered approach, in that students play an active and participatory role in their own learning. But teacher facilitation is also extremely key to the process. Usually, during the inquiry cycle, every student is working on a different question or topic. In this environment, teachers ask high-level questions and make research suggestions about the process rather than the content. At the end of the inquiry cycle, students reflect on the experience and what they learned. They also consider how it connects to other topics of interest, as an inquiry on one topic often results in more questions and then an inquiry into new fields.
Inquiry-based learning can make great use of technology through online research sites, social media, and the possibility for global connections with people outside of the community. But depending on the subject at hand, it doesn’t necessarily require it.
In the first link that I posted at the start of this, you will find how teachers subvert this sound teaching method into one that becomes less beneficial to the student and is instead used as an indoctrination tool.
In Virginia, Liberty Middle School teacher Michael Denman forced his students to identify weaknesses in each of the Republican presidential contenders. They were then asked to develop a strategy to expose these flaws and research where they’d send their plans to reach President Barack Obama’s campaign. Much to the shock and dismay of conservatives, the students weren’t asked to vet Obama in like manner.
But should we really be surprised? America’s schools are infested with left-wing teachers. The facts are there to prove it.
Finally, even in colleges and universities, students are taught to hate Western civilization and to ignore historical facts because they are racist. Here is a link to an article from Fox News that, though it should be shocking, describes what is now the norm across the world.
Today’s students don’t know what liberty costs. They can’t identify between good and right. They have no sense of what “exceptionalism” means. They don’t know how history got us to where we are today, and why its bloody path was worth it.
Can you answer the following questions?
Who fought in the Peloponnesian War?
Who taught Plato, and whom did Plato teach?
Who was Saul of Tarsus?
Why does the Magna Carta matter?
What are one or two of the arguments made in Federalist 10?
Hard questions, right? Maybe not. Maybe you learned some or all of the answers in school, or you knew them at one time, but have now forgotten the details. Or perhaps you are devoted to a few events that you have internalized and helped form you into the person you are today.
But knowing the answers in great detail may be less important than recognizing the importance of the questions.
Unfortunately, Stanford University students may never realize how significant and meaningful these questions are because the student government earlier this week voted overwhelmingly against requiring students to complete a two-quarter course on Western civilization.
That’s right. Instead, the student leadership, validated by its Pravda-esque mouthpiece, The Stanford Daily, concluded that supporting Western civilization basically equated to “upholding white supremacy, capitalism and colonialism, and all other oppressive systems that flow from Western civilizations.”
Apparently no one taught this up-and-coming generation that Western civilization is full of the theoretical underpinnings for things like democracy, equality, freedom, and liberty; is the source of many of those sticky philosophical foundations for arguments in support of ending human oppression and slavery; and developed the economic principles responsible for pulling billions out of poverty.
So where does this leave us? Facts are replaced with opinion because children are taught at an early age to ignore facts. Indoctrination on a global scale has a dramatic, planned negative consequence by the global elite: to dumb down a population that is never taught to think logically. Some children will speak up against this, and those that do are called problem children and fed drugs, or removed from the classroom if they become a distraction. Parents who are working couldn’t care less what their children are taught so long as they don’t have to babysit them while they are at work, although some parents wise up and remove their children from this environment at great cost to themselves or their pocketbook. It will take another generation, once all these teachers are removed, to fix this problem, and I don’t see that happening anytime soon.
2018-08-30 at 4:31 PM#852674I’m for once going to give the cliff notes or Lee LOO version of this meltdown theory.
Okay, so I am already rich and powerful. So what do I do now? I know I am going to kick the bucket soon, and I have 5 kids. I think of myself as God’s gift to women and the world. And No one is better than me, and I now have a chance to PROVE that my family name is going to go down in History as being, at the very least, NEVER FORGOTTEN. I also have the chance that my own son, or one of my daughters, Might become President of the United States one day.
Oh, and I am also a Business owner. I don’t want to f~~~ over my company, and I am Tired of paying High taxes. And I am Tired of the Red tape I have to go through to make a buck. Do I?
1. Run the Economy to the ground?
2. Be the Sole Cause of a Meltdown, therefore ruin my chances of getting elected again?
3. F~~~ things up so that all the businesses that I own take a serious hit, and all the people I love and care about get f~~~ed over too?
By the way, I think I’m God’s gift to planet earth. Oh, and I’m a White boy. I once FIRED someone for saying he was WHITE TRASH on my TV show. And… I f~~~ing LOVE to stick it to people and get the Last laugh.
Oh, and BTW. Almost EVERYONE Hates the fact that I can do whatever the hell I want to do and make it happen.
Did I mention that I will go down in History as one of the Greatest Presidents and White men in History? MILLIONS of young WHITE MEN are going to see the s~~~ I went through, and are going to want to be JUST LIKE ME? Never once fazed by being called a Racist, a Bigot, Cis White Male, Rape allegations, etc. Etc. THE LIST GOES ON FOREVER.
Oh and ONE MORE THING,… “GRAB THEM BY THE PUSSY!” is going to replace “GO get em Tiger”. And Why? Because women will never admit nor say that their pussy is a weakness. Hell, they have been Playing the Pussy Hat Power card for so long, they can’t even mount ANY attack on that either.
So as far as the Economy goes? All he has to do is turn to men, you know.. All the ones being f~~~ed over and not being able to work, or who had their licenses taken away. Or men of the Rust belt, whose area is PRIME for cheap Manual labor jobs. Or say to men: Finally, you all get 50/50 custody.
How about this. TELL THE 17-year-old young men OF AMERICAN SCHOOLS: “I guarantee you a training pipeline by joining the Military.” And they will finally have a training program Centered around MEN that HE ALONE is able to control, and run it to make every single YOUNG MAN that ever even THOUGHT about NOT voting, VOTE for.. You guessed it. TRUMP.
Btw, how many of these men are going to have young women that are Fat, Ugly, with no hopes of a better future, all of a sudden see that they have a chance to live off a young man that they MIGHT be able to Monkey-branch off from…
Btw, the Program being all men is likely to have people in charge, and WOMEN around, that are going to tell young men Not to get married Until they finish the program. In as little as 1 year, Almost 90% of the relationships that these young men had will vanish as their SMV skyrockets.
In one year, Trump could make the USA a f~~~ing economic powerhouse, one that would even pay off the national debt in 6 years.
I want you to imagine What Federal Welfare spending would look like to the MILLIONS of single Young moms out there. That in order to get services from the government, YOU had to WORK. And I mean 40 LONG hours a week. Mandatory if you are 18. No option for school; I mean you get a JOB. And if you work LESS than 40 hours, you get Reduced benefits. Oh, and that money? You only get 1/4 of that money. The Business takes you as a complete tax write-off. That means for every single mom they hire, the business literally pays NOTHING to hire that person. Every cost this person Makes for the business, Insurance, liability, etc., ALL of it gets taken off DOUBLE for the RISK of hiring this woman. Basically, a Program that costs the Tax PAYER nothing, since all the cost of the welfare is paid by this woman, who is WORKING not only to pay HER SHARE, but a LARGE amount of that money goes back into the system.
Also, when she gets a Regular job, she has to pay back that welfare money. If she can’t, Women’s organizations can step in and help out. There are over a million programs for women. Let them put their money where their mouth is.
How about MEN on Welfare? Poor Neighborhoods? No more free handouts. These men start patrolling their streets 8 hours a day, 24 hours a day. They work with their state to bring back jobs, and all learn to be Security guards, Cleaning the streets of trash, debris, etc. They work with business to clean up Every piece of garbage & graffiti there is.
Next would be to put these Kids in High school to work. All the Legal Crap and costs of hiring kids to work at the Fast Food joints, Jobs at the mall, etc.? Businesses are going to be given HUGE tax incentives to do so. And Kids that choose to work at places in industries like Aeronautics, Engineering, and the trades will get mandatory entry into Internship programs that will allow them to get a job Right after exiting High School. If they want to join the Military, they will get advancement to E-3 right out of the gate if they go into the pipeline where people are needed in the military for that trade.
HUD will now offer a Home buyers program that makes it so that if you buy a home and are married, and you separate, the home will be sold, but the money will only be able to be split in half IF the marriage was BEFORE the man bought the house. And if the man was married AFTER he had a home, he gets to keep his home, PERIOD. No amount of contribution that she makes will be counted, and she can never be given the home, or assets of the home, etc. Kind of like a trust.
We have a serious problem with single moms. Parents will now be WHOLLY responsible if their daughters get pregnant. So if a single mom has a daughter that gets Prego, SHE HAS TO PAY ALL THE COSTS of any Welfare she seeks, etc. ALL OF IT.
No more funds to programs that are ESL. The STATE pays for these programs. OUT OF POCKET. PERIOD. Same with those people who are not US Citizens.
All AID to other nations will be Halted till the US debt is ZERO. That includes the United Nations, all that crap. ALL OF IT. If they need AID, we have the Military that will be HAPPY to AID YOU with any kind of LABOR you need. That Includes FARMING. But for every dollar of aid you get, you pay 5 dollars back. Or allow the US government and its businesses to set up shop and use the resources of your country. Sell our goods and services, etc.
Every African Country that Owes us money will pay with its Natural resources. Or anything it has of value to the USA. That means if we want to Mine, set up oil fields, ETC, we get to do that.
Every woman’s program has to have a MALE counterpart with likewise spending. If not, the money comes from WOMEN’s TAXES. End of story. Since Male children come from women, And the N.O.W. said that there is no benefit to having a father around? He will no longer be held responsible for paying for that child. It is the WOMAN’S resource, It came out of HER VAGINA, and GOD DAMN IT, SHE SHOULD ENJOY PAYING FOR IT!
All Family courts will now have to pay Their own way for funding. That means ZERO Federal funding for Family courts, etc.
VAWA will be thrown out. And WOMEN can f~~~ing FUND that s~~~. I am sure MILLIONS of men and women will donate and vote to pay for that s~~~ per state. They can do their OWN VAWA. So if they want to go BROKE, they can keep doing that s~~~. States that do not follow that crap WILL, I GUARANTEE, do better economically than states that Keep pushing the VAWA CRAP.
Also, the Federal Government is going to get out of the Child Support Racket. So no more funding there for all those FAT bitches, helping FAT BITCHES have FAT JOBS. Women can fund that s~~~ through the state if they want it.
Children are a RESOURCE! If you have the kids, YOU PAY FOR ’EM! Let’s give WOMEN the chance to show us they are willing to take care of their kids and pay for all the costs. IT’S ONLY EQUALITY, AFTER ALL! And you know if men can do it, then women should be able to, since there is TONS of Research by WOMEN that points out that WOMEN are NATURAL CAREGIVERS. Now that they have jobs, WOMEN can take care of all the needs of their children, and keep the Custody arrangements they asked for.
All MEN who only get to see their kids on the weekends will get Child support based on VISITATION if courts order it. If it is every other weekend, then that is the amount they have to pay, and that amount is paid at the rates we pay CHILD CARE PROVIDERS! That means if the going rate for child care is $3.59 an hour per child, then that is what they get. FAIR IS FAIR. The amount CANNOT EXCEED social security if the government gets involved.
Divorce and Alimony payments will be paid by the state that issues the marriage license. All Marriages will include the cost of divorce in the marriage. That means if the marriage goes south, all Alimony that is awarded is paid by the state. All marriages MUST Include a CLASS that goes over ALL of the PITFALLS and DANGERS of MARRIAGE. ALL OF THEM. This will also include Attending FAMILY COURT and volunteering to deal with all the men and women that have to go through the system. Just seeing what the family court does will CURB MARRIAGE and, at the same time, tell MEN it’s not only a bad idea, but they are BETTER OFF not getting married, and have women face reality: if they ever WANT to get married, they are going to have to pay for the thing ALL THE WAY. Or change some laws that make it Beneficial for men to do so.
Foster Kid Act. Each child who is in foster care will get a family. How? If you adopt a foster kid, Those with High Incomes will get a 200% tax write-off for any funds spent on that child. That means that Foster kids are NEVER going to be raised by any s~~~lord parents, period. ZERO MONEY GOES TO THE STATE, PERIOD. NONE. NADA. ZILCH! Any Family that wishes to adopt children will get significant breaks on all sorts of things, as will their employers. The Program will be made so that these children understand that the USA CARES about its future and wants them to be the next generation of successful and loved kids. A REAL NO CHILD LEFT BEHIND PROGRAM.
There will now be a 50/50 gender quota on schools that take federal funds. That means that 50% of the staff in education will have to be MALE in order to have funding. NO EXCEPTIONS.
High School sports will now be the BIGGEST f~~~ing thing in the world… In the USA. Also, all those damn things about Parents having to pay for their kids to play sports are going to go out the window. Local Businesses are going to be ENCOURAGED, and now will be able to get FREE advertisements and all sorts of things, to be able to sponsor SCHOOL SPORTS. Americans will be encouraged to go watch their kids Play. However, All recording and equipment will be manned by the School news team. The KIDS will be a part of the whole process, From editing, graphics, etc. Companies that wish to sponsor this WILL have MASSIVE support from the public, and these companies are going to have their men and women go in and tutor these kids on how to do it. The Whole idea is to make these poor kids PROUD of the achievements they make. Also, high school sports will have ALL members of the MILITARY bands there, aiding these schools in MUSIC and EQUIPMENT!
All Federal internship programs will also have a 50/50 gender quota. Women can drop out if they want, but if it’s one man, then one woman must enlist. Period.
The MILITARY will no longer subsidize Marriages. PERIOD. No more baiting men with money to get married. It is not equal to gays, or people who do not believe in marriage. Or Incels, transgender lesbians, etc., whatever. However, if you have KIDS in the Military? You get money for extra dependents. NOT for a WOMAN. Not for being married. But you WILL get extra if you decide to take care of dear old dad. Most men won’t bother to take care of their single moms anyway. F~~~ EM! You went into the military for a reason: to get the hell out of the s~~~y situation your MOM put you in.
All School funding and loans will be paid back by working for the federal/local government. So.. If you want to get into childcare? And get a degree in that? You work at a child care facility to pay back your loan. PERIOD. If the government is giving you money, it better be in a field that needs people. This goes for ALL doctor Programs. And there will be a 50/50 ratio on that as well. So that takes care of women opting out of the medical field to have kids. Again, otherwise NO FEDERAL FUNDING.
DEREGULATION of INSURANCE!
If men have fewer accidents and cost less to insure, then Foreign companies can now sell insurance. The more Competition, the better. It’s time MEN pay less for services they don’t use. And women get to pay THEIR FAIR SHARE.
You get the TIME for the CRIME.
You f~~~ an underage boy, you get a STANDARD SENTENCE. PERIOD. You accuse a man of DV or RAPE and it’s proven FALSE, YOU GET THE JAIL TIME that comes with what you charged him with. PERIOD. It’s MANDATORY. No pass go, no Pussy pass. YOU GET what you get, and it’s even.
NO MORE DEBTORS PRISON. PERIOD. No more f~~~ing over men.
CPS (Child Protective Services) will be abolished. All calls will now go to 911, period, and a real police officer will go to the scene. No special team, no special anything. A POLICE OFFICER. A true police report, and the POLICE/SHERIFF will be responsible for this. ANYONE failing to protect a child will be subject to MANDATORY SENTENCES. THIS INCLUDES TEACHERS! Principals, staff, etc. In all cases of Misconduct, even from the past, those workers, staff, and the politicians that put these people in power WILL BE MONETARILY RESPONSIBLE. PERIOD. This gets rid of women working there that don’t give a s~~~.
Real Avenues for employment for MEN. If there is a program that is not doing what it is supposed to be doing? THE PROGRAM GETS CUT. PERIOD. So all those LAZY Public sector employees and contractors that don’t meet targets (VA Voc Rehab and Employment, HELLO!) are going to be cut, or revamped in order to make the program make sense, or make money, or have to show ACTUAL rates of people hired, employed, etc. If you do a S~~~Y JOB… YOU ARE FIRED!
Pay will be on a per-Success basis. The more f~~~ed up the person is that gets employed, the bigger the bonus pay.
Sexual harassment claims.
This may suck, but Imagine if the EMPLOYER has to pay the legal fees of the person that is accused. No kidding. And the EMPLOYER has to put the employees involved on a police report, and has to have the matter settled in a court. All Costs will be paid by the employer. Also, if the charge is false, Same legal ramifications of mandatory sentencing, and the person making the accusation has to pay back all funds, All costs, and, Basically, social Alimony to the man falsely accused, for the time UNTIL the man gets a NEW JOB, gets a new career, or when the courts say it’s okay. But so far, the charge of Social Character Alimony will be paid as long as the man shall live. Hey, if it’s real, FINE. I am sure a woman who feels so strongly about what she heard some person said, via a 3rd person at another company, REALLY MATTERS.
END to ALL FEMALE-ONLY BIRTH CONTROL. Young men that want a Reversible Vasectomy Procedure should be able to get one for free if women get free birth control. Teen moms go down, and TEENS that should NEVER be having kids in the first place never get in that f~~~ed up deal. Also, a TON of women can no longer say it’s HIS BABY. Also, in order to get welfare, women will have to verify paternity BY LAW. Can’t find the daddy? NOT MY F~~~ING PROBLEM.
Yea Just doing SOME of these things and getting rid of a S~~~ TON of CRAPPY programs that don’t work would take care of ANY MELTDOWN.
You are all alone. If you have been falsely accused of RAPE, DV, PLEASE let all men know about the people who did this. http://register-her.net/web/guest/home
2018-08-23 at 5:08 PM#850382In reply to: They "think" they are just so "lucky" !!
My brother had a very similar situation that you just described. The only difference is that she worked long hours until the kids came. That came to an abrupt stop after the 2nd kid was born. My brother was planning his exit when his youngest turned 18. He filed for divorce and she left the house. She figured she would just go back to work and nothing would really change except they weren’t married anymore. They split the assets including selling the house and they went their separate ways. They each had six figures in cash when the divorce was final.
My brother has since bought another house for cash, while she has burned through all of hers. She found that going back to work wasn’t as easy as it looked. She was not in her upper 20s anymore. She was in her 50s. She is now living with her sister, who will put up with her, while he lives by himself with no intention of letting anybody in the house other than his kids. As we all know, very seldom does it work out like this. Btw, it took my brother a few women to realize to never let another one in the house again. He finally listened to me after plenty of hassle. Better late than never.
2018-08-17 at 11:08 AM#848359In reply to: It's Just So Damn Easy !!
I have a room in my house that is completely empty, no chair, not even a picture on the wall.
I have a room like that. I finally broke down and put the Xbox 360 in there with a big-ass speaker and a cushy office chair. Now it’s the Halo 4 room. I do not play games online though; I play the solo campaign only, so it’s still a solitude room. I’ve never had an Xbox account.
You did that as something useful to you. A woman sees an empty room and can’t stand the bareness. She has to clutter it up with pictures all over the walls, useless knick-knacks, too much furniture, end tables, plants (whether they be real or fake), and curtains. Man, do they love their curtains. I’ve lived in my house for over 6 years now and there isn’t one curtain on any window. I have blinds, and that’s enough for me.
Man, I absolutely love living alone. Like Awakened said, it’s just so damn easy. Women complicate s~~~. Men simplify life. Can’t wait to get home and continue living the good life.
The evil in women’s hearts leaves them no moral bounds as to inhibit them from descending to the lowest levels of darkness to acquire their self entitled desires.