ITC in The Vertical Plane by Ken Webster and Comparable Experiments: Technical Analysis and Future Design

The BBC Micro Messages in The Vertical Plane

Ken Webster’s The Vertical Plane (1989) chronicles a series of bizarre computer-based communications in 1984–85. The messages ostensibly came from a 16th-century individual (“Lukas” or Thomas) and later a future entity (“2109”), using a BBC Micro Model B computer as the medium. Technically, the BBC Micro was a stand-alone 32KB home computer with no modem or network connection. Webster would borrow this computer (and its floppy disk drive) from the school where he taught, and run a simple word processor (EDWORD) on it. This setup – a closed system by today’s standards – became the stage for unexplained file creations and text displays.

Message Delivery Mechanism: The communications appeared as text files on the floppy disk or as text on the screen. Crucially, these files were discovered only after the fact; the messages would arrive when the users were not present, or not actively observing the computer. For example, one Sunday evening Webster left EDWORD running and the computer unattended. Upon return, the program had exited to the BASIC prompt (BBC B 32K startup screen) and a new file “KDN” was found on the disk. Opening it revealed a poem-like message addressed to Ken, Deb, and Nic (the residents) – warning them in archaic English to “Get out your bricks” among other cryptic lines. This was the first incident. Thereafter, Webster’s group attempted a two-way communication by leaving their own messages on disk. They would type questions in a file (e.g. LUK1), leave the computer on, and later find a new reply file (e.g. KEN1) containing responses. Notably, the mysterious author often mirrored the filename or format in his reply (e.g. responding to “LUK1” with “KEN1”). Each exchange thus appeared as a new file written to the disk, without any conventional input. The interactive pattern emerged over dozens of messages: Webster’s team would pose questions or comments, leave the computer running and house locked, and return to find a reply file or onscreen text with old-style language answering their queries.

Conditions and Anomalies: Consistent patterns were observed in these ITC (Instrumental Trans-Communication) events. Timing: Almost every message arrived when the room was empty or the witnesses were away – something even Webster noted: “we were always out when a message appeared”, often having left the device on over weekends. They never caught the act of typing in progress; the text would “magically” be there upon their return. Computer state: The BBC Micro was typically left powered on with the word processor open. In some cases the program remained running with text on screen; in others the machine had dropped to the BASIC command prompt (as if a Break key or reset had been triggered). The messages themselves were stored on the floppy disk as files, meaning the perpetrator (who or whatever it was) had to interface with the disk operating system to create or update files – a feat difficult to explain given the machine’s lack of remote access. Investigators later confirmed that a BBC Micro with no modem cannot be “bugged” or contacted via wireless hacks in any normal way. The “hidden EPROM” theory (an idea that a rogue program chip could have been secretly planted) was also ruled out, reinforcing that these file edits were highly anomalous for a closed 1980s computer system.

Another recurring condition was that unsaved text could appear transiently. At one point, a long message (a poem from “Lukas”) actually manifested on the monitor while the computer was running, but before the team saved it, a sudden mains power outage hit the house. The screen went dark, and because the text was only in RAM, it was lost. As Webster dryly notes, “If the information is not saved to disk then it can only live in the BBC computer memory while there is power”. This incident showed that the messages could be injected into active memory (video display) in real-time – yet whoever or whatever sent them was either unable or unwilling to safeguard the text against a power loss. After power was restored, no trace remained on disk. When Webster’s partner asked the communicator to resend the lost poem, the next incoming text was garbled: a string of random characters and gibberish preceded the legible words. It looked like a corrupted transmission – as if the “link” was unstable or the sender had difficulty re-sending the data. Immediately afterward, however, a more coherent message came through introducing a “freend” (friend) of Lukas. This friend’s note was in archaic English (albeit still typed in the Latin alphabet), indicating that even multiple entities could transmit via the computer. The garbled text preceding the friend’s introduction might represent an error in the interface or signal interference when switching communicators. It’s as though the process had to re-tune before producing clear text, analogous to static before a radio voice locks in.

The linguistic style of the messages further underlined their strangeness. The primary sender (who eventually identified himself as “Tomas Harden” living in 1546) used Early Modern English dialect, with antiquated spelling and vocabulary – yet also showed knowledge of 20th-century details. Experts consulted by Webster noted the dialect was largely authentic to the period, “exactly right for the 1540s”, except for strangely modern punctuation and some inconsistent usage. For instance, the messages freely used punctuation like question marks and parentheses which were not common in the 16th century. This mix of old and new raised red flags for a hoax, but it also reflected a possible limitation of the medium – the computer had a modern character set and perhaps the “translator” or software (whatever 2109’s technology was) might be packaging the 1540s prose with 1980s punctuation conventions. In short, the content conveyed historical tone, but the formatting bore modern fingerprints, suggesting an intermediate processing step.

Beyond the computer, The Vertical Plane reports complementary physical anomalies in the cottage that accompanied the electronic messages. There were poltergeist-like effects: unexplained chalk markings appeared on walls, objects were stacked or moved, and ghostly footprints materialized. For example, a series of foot-shaped marks (with an uncanny six-toed outline) kept reappearing on a wall between the kitchen and bathroom, even after being painted over. Cans of cat food were found one morning arranged in a neat pyramid on the floor, and on another night a heavy column of household items (bottles, a roll of kitchen paper) was balanced in a precarious tower. These events happened at times overlapping with the computer messages. In one instance, Webster quipped “the poltergeist is back, writing on the walls again,” upon spotting fresh chalk writing on a pillar of the cottage. The wall scrawls even contained archaic phrases similar in style to Lukas’s computer text. Such phenomena suggest an environmental aspect to the case: whatever intelligence was communicating via the BBC Micro also interacted with the physical surroundings, leaving analog marks and moving objects. This lends weight to the idea that human psychokinesis or a spirit agent (rather than a remote hacker) was at work, since the effects spanned multiple mediums (digital and physical).

Crucially, a second purported communicator emerged partway through the saga: an intelligence from the year 2109. This entity, communicating in all-caps modern English, claimed to be facilitating the contact between 1985 and 1546. The 2109 messages were themselves anomalous, often appearing as mysterious addenda or replies beyond the 16th-century drama. Technically, 2109’s involvement hints at a more complex communication mechanism – possibly a third-party relay or modulation of the signal. At one point 2109 actually inserted itself into the conversation to admonish the humans for trying to bypass the computer. When Webster and Deb attempted to write a letter on paper to the 16th-century man (to avoid 2109’s interference), 2109 warned that they “could not interfere” with paper messages, implying their control only extended over electronic channels. This aligns with 2109’s role as a gatekeeper, using technology to mediate. It also underscores that the medium matters: the choice of instrument (computer vs. paper) affected who could “see” or influence the messages. Indeed, Lukas (or “Tomas”) sometimes responded via actual handwritten letters delivered in the cottage when instructed not to use the computer – a divergence that apparently circumvented the 2109 link. From a technical perspective, this resembles a controlled network where certain protocols (e.g. digital channel) are monitored or hackable by the higher entity, whereas an out-of-band method (a note on paper) is secure from them. It’s a remarkable parallel to modern computer security – except the “network” spans time dimensions.

Electromagnetic Hypotheses: In attempting to explain the Vertical Plane phenomena, Webster and his friends speculated about environmental factors. One idea floated was that geographical or electromagnetic conditions might enable these time-bending communications. They noted Dodleston (the village) lay on an ancient site and mused about ley lines or “old straight paths” that could concentrate unknown energies. In January 1986, the 2109 entity itself provided a technical (if cryptic) explanation: it said that “Time, UFOs and most other types of the paranormal are all connected” by the physics of our world. According to 2109, certain locations have “areas of convexual magnatism” caused by the intersection of positive and negative “magnetic lines” around the Earth. Where these lines cross permanently, the “light/time continuum is vastly distorted”, allowing phenomena such as time projections or ITC to occur. In essence, 2109 described a kind of space-time fault triggered by geomagnetic anomalies. While the terminology is odd (perhaps a mangling of convex magnetic fields?), the concept echoes real scientific ideas like geomagnetic hotspots or magnetic field disturbances. The suggestion is that Dodleston was one such hotspot, enabling a “time slip” channel between 1546, 1985, and 2109. It’s notable that electromagnetic fields are implicated – this matches a broader theme in ITC research where EMF and radio-like signals play a role in anomalous communication. However, unlike a straightforward radio signal, these messages did not conform to any known broadcast or transmission method – they manifested as digital text inside a machine’s memory. One might speculate that if geomagnetic line crossings create a sort of carrier wave or portal, the BBC Micro’s circuitry could have been influenced via that portal (e.g. inducing currents or flipping memory bits to form characters). This remains hypothetical, but The Vertical Plane invites such speculation by explicitly connecting the dots between magnetic field geometry and ITC.

In summary, the Vertical Plane events centered on a stand-alone computer system acting inexplicably, exhibiting behaviors like spontaneous file creation, unsolicited text output, and state changes without user input. The communications often showed intelligent interaction, coherent historical knowledge, and responded to the experimenters’ queries. The technical constraints (no network, simple hardware) rule out typical hoaxes like remote hacking or software trojans, especially given the era (1985) and the device’s limited memory. Moreover, the concurrent poltergeist phenomena and apparent EM theories suggest this was not merely a computer glitch, but part of a larger pattern of Instrumental Transcommunication – where unseen intelligences purportedly use electronic devices to convey messages.

Other Notable ITC Cases: Luxembourg, Bacci, and Beyond

The Dodleston events were not an isolated case of alleged high-tech communication with unknown entities. Throughout the late 20th century, researchers reported similar ITC occurrences using computers, radios, telephones, and other electronics. Comparing Webster’s case with others from the 1980s–1990s reveals technical similarities and differences, and helps establish patterns in these phenomena.

Luxembourg (CETL) Computer Contacts (1985–1994)

One of the most famous ITC efforts occurred in Luxembourg, led by Jules and Maggy Harsch-Fischbach of the Cercle d’Études sur la Transcommunication (CETL). In the late 1980s and early 90s, the Luxembourg team claimed repeated electronic contacts with a group of spirit communicators they called “Timestream.” Much like the Dodleston case, these contacts spanned text, images, and voices, often involving deceased personalities or historical figures. A key parallel is the use of a personal computer as a receiving device. The Luxembourg researchers would frequently return home to find that their computer had turned itself on and new files had been left on the disk (macyafterlife.com). For instance, on 21 February 1991, Maggy and Jules came home from work to discover the PC running and a file on-screen containing a letter from a departed friend (the late Jeannette Meek, wife of ITC pioneer George Meek) (macyafterlife.com). The letter included personal details that only Jeannette and her husband could have known, convincing George Meek of its authenticity (macyafterlife.com). This mode of “drop-in” file delivery – where an invisible agent places data on a computer’s hard drive or disk in the owner’s absence – strongly echoes the Vertical Plane scenario (where Ken found new files on his floppy). ITC researcher Mark Macy described this as a known phenomenon: spirits are able to turn computers on and leave messages, even planting files on hard drives or floppy disks (geeknewscentral.com) when conditions are favorable.

The content received in Luxembourg also included graphics. In November 1992, an image file inexplicably appeared on Maggy H-F’s computer showing three people in a paradisiacal landscape: Jeannette Meek alongside her daughter and the recently deceased Hollywood producer Hal Roach (macyafterlife.com). The picture (said to portray a scene in the afterlife “Level 3/4”) was not scanned or created by any known person, but “arrived” fully formed in the PC’s storage (macyafterlife.com). This is analogous to the text messages of Dodleston, but in visual form – demonstrating that ITC can span multiple data formats. In fact, the Luxembourg experiments saw a broad range of ITC phenomena: telephone answering machines recording unsolicited voice messages, radios delivering voices, and televisions showing images or messages superimposed on broadcast signals (geeknewscentral.com; macyafterlife.com). The multimedia nature of those contacts set a precedent for ITC not just via text.

Technically, the Luxembourg team didn’t rely on leaving a simple word processor open as Webster did. They often used a dedicated “communication station” with custom hardware. George Meek observed one experimental setup that “employed electronics, ultraviolet light, and crystals” in hopes of augmenting the contact (macyafterlife.com). This hints at a purposeful engineering approach: UV light and crystals might have been intended to generate an energy field (crystals piezoelectrically produce EM oscillations; UV could ionize air) that spirits could use. Such equipment goes beyond Webster’s unmodified BBC Micro, yet both achieved similar results – suggesting that while specialized apparatus might help, an off-the-shelf computer in a “special place” could also serve as a portal.

The similarities between Dodleston and Luxembourg are striking: in both cases, messages were delivered to a computer’s memory or disk without conventional I/O. Both involved communicators claiming to be deceased or non-physical beings, who provided verifiable information to establish credibility. Also, both cases had references to third-party facilitators – in Dodleston it was “2109,” in Luxembourg a central coordinator named “Technician” (and others like a scientist Swejen Salter from beyond). These facilitators described a systematic effort on the spirit side to open channels with the living, often stressing that harmony or “certain psychic qualities” of the operators were needed (geeknewscentral.com). Indeed, one difference was that the Harsch-Fischbachs operated in a semi-meditative context, and Maggy herself was considered a sensitive/medium, whereas Webster and co. were not trained mediums (though Debby in Dodleston turned out to have some psychic experiences). Another difference is that Luxembourg’s contacts were more frequent and varied – they amassed hundreds of recordings and dozens of images over a decade – whereas Dodleston’s was a shorter intense burst (~18 months) focused primarily on one storyline (the 1546 man) with a finite number of messages (~250 texts reportedly) (geeknewscentral.com).

One important technical pattern in Luxembourg’s ITC was the operator absence during file reception, just like Dodleston. Communicators often said they preferred to send messages when the humans were away or not directly observing the device, perhaps to avoid psychokinetic interference or skepticism energy from the observers. However, unlike Webster’s one-way, find-it-later method, the Harsch-Fischbachs sometimes observed phenomena in real-time (e.g. phone line ringing with a voice, or images forming on a TV). This suggests that some ITC channels are real-time (active voices or calls) while others are store-and-forward (like leaving a “voicemail” on the computer). The store-and-forward mode might be easier for sustained complex messages (text, long letters, images) since it doesn’t require immediate feedback.

Marcello Bacci’s Direct Radio Voices

In contrast to computer-based ITC, Marcello Bacci’s experiments in Grosseto, Italy relied on old-school radio technology to achieve phenomenal results. Starting in the 1960s, Bacci conducted sessions where he and groups of listeners would gather around a vacuum-tube radio receiver. He would tune it to an unused portion of the shortwave band (around 7–9 MHz) where only static was heard (strange-phenomenon.com). After a period of intent listening, voices emerged from the loudspeaker – often addressing people in the room by name, delivering personal messages (frequently from deceased relatives), and sometimes engaging in dialogue (strange-phenomenon.com; thescoleexperiment.com). This method, known as Direct Radio Voice (DRV), is an analog parallel to what Webster experienced digitally: an intelligent message imposed on an electronic carrier (radio noise instead of computer memory).

From a technical standpoint, Bacci’s work has been scrutinized under controlled conditions, yielding critical evidence of anomalous signal behavior. In one test, a second identical radio was set up next to Bacci’s, tuned to the same frequency, with its own antenna (thescoleexperiment.com). When Bacci’s radio began receiving a paranormal voice, the control radio only received normal static (thescoleexperiment.com). No voice was heard on the duplicate set. This indicates the effect was localized to Bacci’s device (or perhaps even to the space immediately around it), rather than being a broadcast radio transmission that any receiver could pick up. Such a result rules out ordinary radio interference or a nearby pirate transmitter as the source of the voices – because if it were a radio signal in the air, both radios should have detected it equally (thescoleexperiment.com).

In an even more dramatic experiment, physicist Prof. Mario Festa and electronics expert Franco Santi actually dismantled Bacci’s radio mid-communication (strange-phenomenon.com; thescoleexperiment.com). They removed two key vacuum tubes (valves) from the set: the AM/SW converter oscillator (ECH81) and the RF amplifier (ECC85). These components are absolutely required for receiving any normal radio broadcast on shortwave; with them removed, the radio is effectively “deaf” to the airwaves (thescoleexperiment.com). Yet astonishingly, the paranormal voices continued to speak through the speaker unabated (strange-phenomenon.com; thescoleexperiment.com). This single fact is among the most compelling in ITC research: it implies the voices were not being received through the conventional radio circuitry at all. Instead, it’s as if the audio stage or speaker was being driven directly by an unknown source. The technicians also measured the electromagnetic field levels around the radio during these sessions. They found no significant change in the electric or magnetic field readings when the voices appeared, and after the receiver valves were pulled the field levels were the same as when the radio was simply turned off (thescoleexperiment.com). In other words, no detectable RF signal or EM surge accompanied the phenomenon – a finding that strongly suggests the effect was not due to an external radio-frequency transmission (which would have registered in the EM field) (thescoleexperiment.com). Whatever produced the sound, it bypassed normal radio reception and didn’t radiate like a standard signal. This has led researchers to speculate about direct energy conversion: perhaps spirit agencies were inducing vibrations in the speaker coil or injecting audio-frequency signals into the amplifier stage without using the radio tuner at all.

Comparing Bacci’s radio voices to Webster’s computer messages, we see common themes: both involve an anomalous input route (writing data to a disk vs. injecting audio to a speaker) that intelligent communication piggybacks on. Both were resistant to conventional interference tests – Bacci’s voices were not affected by shielding or removal of radio parts (strange-phenomenon.com), and Webster’s communications did not rely on any known network or external connection. Another similarity is the need for an initial medium or noise: Bacci’s method needed the radio tuned to static (a sound carrier), while many EVP experiments (discussed below) use background noise as a matrix for voices. Webster’s case is slightly different in that the BBC Micro didn’t provide “noise,” but one could argue the idle memory and powered-on state provided a stable electronic environment that an external influence could exploit (the equivalent of an operating carrier in a digital system). Interestingly, Lukas’s friend in The Vertical Plane referred to the computer as “thyne leems” (your device of light) and asked “Is the comuter thyne prey?”, expressing awe at the machine. This shows the 16th-century mind perceived the computer as a magical light box. In Bacci’s sessions, some voices also acknowledged the mechanism, sometimes thanking Bacci for the opportunity to speak or instructing him on settings. Both cases hint that the communicators had to learn to use the device. Lukas’s first attempts had formatting oddities and needed power to be maintained, and Bacci’s communicators sometimes came through faint or had to build up energy (initial whistles or swooshes in the static before a clear voice locks in). These learning curves point to an interface problem: the disembodied intelligence must interface with physical hardware, whether by manipulating electromagnetic fields, biasing electronic noise, or influencing quantum processes. Early messages might be imperfect (gibberish text or garbled sounds) until the technique is refined, much like tuning a transmitter.

A difference between Bacci and the computer cases is real-time interaction. Bacci’s phenomena were live – often emotional dialogues occurred, with parents recognizing children’s voices speaking via the radio and having back-and-forth exchanges in the moment (thescoleexperiment.com). In Dodleston, the conversation was asynchronous, with hours or days between query and response. The Luxembourg contacts were a mix: some real-time (phone calls, EVP voices) and some asynchronous (computer files, images). This suggests that real-time ITC is possible with audio, perhaps because audible sound can be formed by quickly modulating noise (which might be easier or requires less energy than materializing a lengthy text or image). Meanwhile, textual or visual ITC might require more processing – assembling hundreds of characters or pixel patterns – which could be easier to do “offline” and then dump as a file when complete. Indeed, the spirit teams described by CETL said it took considerable effort to send those images; the experimenters would only find out after the fact if the transfer succeeded (macyafterlife.com).

Other ITC Methods: EVP, Spiricom, and Ghost Boxes

Beyond the high-profile cases above, a variety of other instrumental transcommunication techniques have been explored, each with its own technical nuances:

  • Electronic Voice Phenomena (EVP) with Audio Recorders: Pioneered by Friedrich Jürgenson (1950s) and Konstantin Raudive (1960s), EVP involves asking questions into a recorder and finding unexplained voices on playback. A frequent practice is to use white noise or radio static as a background during recording. Raudive believed this noise provided a carrier that spirits could modulate (strange-phenomenon.com). He even identified a particular medium-wave frequency (around 1485 kHz, later called the “Jürgenson frequency”) that yielded many voices when the radio was tuned there as a source of constant hiss (strange-phenomenon.com). Technically, these voices often manifest as brief, faint snippets, sometimes just a word or phrase interwoven with the noise. They typically have unusual acoustic properties – for instance, rapid amplitude changes and a narrow frequency band – which differentiate them from normal speech. Researchers have noted that EVP voices can occur below the microphone’s audible threshold, suggesting they might imprint directly on the recording medium (magnetic tape or digital audio) rather than acoustically through the air. Nonetheless, some EVPs are heard in real time by investigators, indicating a mix of modes. The common thread is signal-to-noise conversion: random noise is transformed into meaningful signal. This pattern echoes the Bacci scenario (static to voice) and even the computer cases if one views a computer’s random-access memory as analogous to a blank noise field that can be given order (letters) by an outside influence.
  • Spiricom and Specialized Tone Generators: In 1980, engineer George Meek and medium William O’Neil built the Spiricom, a device that generated a set of fixed audio tones as a sustaining background for spirit communication. The idea was to provide a stable energy lattice (13 semitones spanning the human voice range) that entities could supposedly manipulate to speak. Indeed, O’Neil claimed to have long conversations with a deceased scientist using this setup, which included a radio-frequency transmitter as well (strange-phenomenon.com). However, Spiricom was highly dependent on the operator’s psychic abilities – O’Neil was the only one who ever got results, and attempts to replicate by others failed. Skeptics noted that the voice tone resembled an electrolarynx device and that O’Neil might have been projecting his own voice ventriloquially (strange-phenomenon.com). From an engineering view, Spiricom provided valuable lessons: it suggested that a precisely calibrated environment (fixed tones, RF field) could facilitate ITC, but also highlighted the challenge of repeatability and operator bias. The necessity of a gifted operator meant it wasn’t truly instrument-alone communication. It did, however, inspire later experiments to use audio frequency mixing as a carrier.
  • Diode and Radio-Scanner Devices: Pioneers like Ernst Senkowski experimented with simplified receivers – e.g., a Germanium diode coupled to an audio amplifier (with no tuned circuit) – to capture voices. A diode will produce a small current if exposed to radiofrequency or other EM fluctuations; in a shielded setup, it mainly registers thermal or atmospheric noise. Yet, some EVP experimenters reported voices using such “diode receivers,” presumably because any slight non-random fluctuation can be amplified into audible sound. Similarly, today’s “ghost boxes” or Frank’s Box sweep radio channels rapidly, producing a choppy sequence of sound bits from many stations. The theory (loosely) is that spirits manipulate the snippets or the timing of hits to form messages in the chaos. From a technical stance, ghost boxes exploit the human brain’s pattern recognition – they provide a rich, semi-random acoustic tapestry where listeners might pick out meaningful syllables. While this method has low evidential weight in controlled science (since random radio clips can by chance form words), it shares with other ITC methods the reliance on stochastic resonance: the concept that a bit of randomness can actually boost detection of weak signals (a minimal numerical sketch of this effect follows this list). In engineering terms, adding noise to a non-linear system can improve the output of sub-threshold signals – perhaps in ITC, a weak influence needs a noisy system to manifest at all. Ghost box users often report relevant responses to questions, suggesting some real-time influence at play – but verifying such cases statistically is difficult.
  • Telephone and Video ITC: There are recorded instances of anomalous phone calls (ringing with no caller, or voices of deceased on the line) and video feedback loops producing faces or scenes (a method used by Klaus Schreiber in the 1980s, where a camcorder was pointed at its own output on a TV to create a feedback loop of swirling patterns, in which faces sometimes appeared). These do not directly parallel Webster’s computer but reinforce the idea that virtually any electronic medium can be commandeered. In telephone cases, line static or sidetone may furnish the noise carrier akin to radio static. In video feedback, the random optical noise gets “ordered” into an image. Modern researchers like Sonia Rinaldi have extended video ITC by using water or vapor patterns and applying AI visual analysis to detect faces, essentially trying to amplify the meaningful patterns in noise.
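To make the stochastic-resonance idea mentioned in the diode/ghost-box item concrete, here is a minimal numerical sketch (Python, with purely illustrative parameters, not a model of any actual ITC device): a 0.8-amplitude sine wave sits just below a 1.0 detection threshold, so a hard threshold detector never fires on the clean signal; with a moderate amount of added noise the detector fires preferentially near the signal peaks, while too much noise washes the relationship out again.

```python
import numpy as np

def crossing_signal_correlation(noise_sigma, threshold=1.0, seconds=10.0, fs=1000.0, seed=0):
    """Correlation between threshold crossings and the hidden sub-threshold signal.

    The 0.8-amplitude sine never reaches the 1.0 threshold on its own; with a
    moderate amount of added noise the detector fires mostly near signal peaks,
    which is the stochastic-resonance effect described in the text.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, seconds, 1.0 / fs)
    signal = 0.8 * np.sin(2 * np.pi * 5.0 * t)               # hidden 5 Hz signal
    noisy = signal + rng.normal(0.0, noise_sigma, t.size)
    crossings = (noisy[:-1] < threshold) & (noisy[1:] >= threshold)
    if crossings.sum() == 0:                                  # too little noise: detector never fires
        return 0.0
    near_peak = signal[1:] > 0.5                              # samples close to a signal peak
    return float(np.corrcoef(crossings.astype(float), near_peak.astype(float))[0, 1])

if __name__ == "__main__":
    for sigma in (0.01, 0.2, 0.5, 1.0, 3.0):
        print(f"noise sigma = {sigma:4.2f}  crossing/signal correlation = {crossing_signal_correlation(sigma):+.3f}")
```

At very low noise the detector stays silent, at moderate noise the crossings track the hidden signal, and at high noise the correlation collapses – the same inverted-U behavior the stochastic-resonance literature describes.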

Across all these methods, several technical patterns emerge:

  • Use of Noise or Idle States: Whether it’s radio static, audio-frequency tones, white noise, or a waiting computer memory, ITC often requires a medium in a receptive, fluctuating state. Completely silent or idle systems are somehow less amenable. In engineering terms, a modulated noise carrier is a recurring tool.
  • Environmental Electromagnetics: Many successful ITC sessions seem to correlate with certain EM conditions. Bacci’s sessions might be tapping into VLF or geomagnetic energies (Grosseto, Italy is not far from strong ionospheric propagation paths). Dodleston’s explanation was explicitly geomagnetic. Some researchers choose times of day or locations with low human-made EM interference (or conversely, some with high natural geomagnetic activity) for ITC. However, as Bacci’s field measurements showed, the voices themselves didn’t come as EM spikes (thescoleexperiment.com) – so it’s more likely the background field or a boundary condition (like a geo-sensitive location) is a prerequisite rather than the carrier of the final signal.
  • Repeatable Configurations vs. Operator Dependency: One frustration in ITC research is repeatability. Bacci’s setup was very repeatable for him – he conducted public sessions for decades with consistent results – but when scientists tried similar setups without Bacci, results were not the same. The Scole group’s phenomena (1993–98) – which included unexplained images on sealed film and even an instance of a solid light-form materializing – could not be repeated once the group dissolved, suggesting a unique synergy was at play. On the other hand, some protocols have been repeated across experimenters: EVP on tape has been replicated by thousands worldwide (with varying success rates but enough “hits” to keep interest alive), and random event generator (REG) experiments have consistently shown small effects of consciousness on electronic randomness. In 2003, researcher Imants Barušs introduced a random text generator in an ITC experiment: essentially, a computer program produced strings or yes/no answers at random, while a medium was present asking questions. Intriguingly, the yes/no generator achieved about 81% correct answers (9 out of 11) for verifiable questions, which is statistically significant (p ≈ 0.04) (ir.lib.uwo.ca); the arithmetic is checked in the sketch after this list. Such results, though modest, hint that under certain conditions, even a modern computer’s random output can be influenced towards meaningful answers. This is a repeatable configuration concept – one could imagine many labs running identical random processes while attempting spirit contact, to see if bits of information skew away from chance. The influence is small and requires large data to confirm, but it’s a foot in the door to objective, quantifiable ITC.
  • Signal Characteristics: When the “messages” are examined, they sometimes show anomalies a human might not produce. For instance, many EVP voices have an elevated frequency spectrum, lacking the low-frequency bass of normal speech, as if the voice source has no vocal cords or physical chest. In the Ken Webster case, the messages had the mixed dialect and modern punctuation inconsistency noted earlier – possibly a sign of algorithmic translation or an amalgamation of inputs. These are the kinds of patterns that, if catalogued, might reveal a “signature” of true ITC vs. hoax or radio interference. In Bacci’s voices, no standard RF modulation was present, and in fact the audio often had a peculiar timbre (described as slightly mechanical). Anabela Cardoso, who continues DRV experiments, notes that voices sometimes start at a syllable cutoff, as if the communication stream isn’t perfectly synced. All these subtle clues point to communication that is riding on the edge of normal physical processes.
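The significance figure quoted above for the Barušs-style yes/no runs (9 of 11 correct) can be checked with a simple one-sided binomial calculation. A minimal sketch, assuming a 50% chance rate per question:

```python
from math import comb

def binomial_p_value(hits, trials, p_chance=0.5):
    """One-sided probability of getting at least `hits` correct answers purely by chance."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(hits, trials + 1))

# 9 correct out of 11 yes/no questions, as reported above
print(f"p = {binomial_p_value(9, 11):.3f}")   # ~0.033, consistent with the quoted p of roughly 0.04
```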

In summary, the historical cases of ITC – from Webster’s BBC Micro files and the Luxembourg computer images to Bacci’s radios and classic EVP – collectively suggest that intelligence can imprint information onto electronic systems by exploiting stochastic processes or latent energy fields. There are common requirements like a receptive noise medium or a conducive environment, and common challenges like reliance on certain people (mediumistic operators) and low signal-to-noise ratios. At the same time, each method has unique advantages: text messages carry more information content per event (but are slower), while audio allows immediate interactive feedback (but is often short utterances), and images can convey complex scenes (though require heavy “transmission” effort). These trade-offs inform how we might design new experiments and devices to push ITC research forward.

Methodologies for New ITC Experiments

Building on the insights above, new ITC experiments should strive for controlled, repeatable conditions while maximizing the likelihood of anomalous communication. Below are several proposed methodologies:

  1. Controlled Computer Messaging Experiment: Recreate the Vertical Plane scenario in a lab-like setting. Use two identical computers (offline, no wireless capabilities) in two separate rooms – one as the “active” device in a reputed haunted or spiritually active location, and another as a control in a neutral location. Both run a simple word processor or a custom logging program. Researchers will invite communication by typing a set of questions or a prompt on the active computer and then leave it running unattended (perhaps under camera surveillance to ensure no human interference). The computer’s memory and disk activity should be continuously logged at the binary level. Any unexpected file creation or text appearance will be automatically time-stamped and compared against the control computer (which should see no activity). To add an EMF dimension, one could place an EM detector or even a coil antenna near the active computer to see if any surges coincide with an ITC event. This experiment modernizes Webster’s approach but adds rigorous logging and a control. If messages appear on the active machine’s screen or storage, the logs can verify no conventional process wrote them. Even if nothing happens, the detailed logging might capture tiny anomalies (e.g. flips in memory bits or slight fluctuations in power draw) that could hint at a presence. A minimal file-watch logging sketch follows this list.
  2. Multi-band Radio Listening in Shielded Environments: Expand on Bacci’s work by setting up multiple radios and detectors in a shielded room. Use one or more software-defined radios (SDRs) that can monitor a wide spectrum simultaneously (longwave through VHF, for example). Also include a diode detector and perhaps a microphone for ultrasounds, all recording continuously. Conduct sessions at known times (perhaps when solar or geomagnetic conditions are favorable, or according to spiritualist traditions, say midnight or anniversaries) and invite any communicators to speak. If a voice is heard in one channel (say, on shortwave at 7 MHz), check if it appears in any other channel or on the SDR’s spectrum recording. A Faraday cage or screened enclosure can be used to house one radio as a control (it should receive nothing but internal noise). If voices like Bacci’s occur, this setup will tell us whether any traceable RF carrier was present or if it truly was a no-signal audio insertion. Modern digital signal processing can be applied to identify voice patterns (for example, using speech-to-text algorithms to scan hours of static for intelligible words, eliminating some human bias in “hearing” things). Additionally, measuring environmental data is key: concurrently log magnetic field strength, electric field, temperature, and even quantum random generator outputs in the room. Look for correlations – e.g., did a sudden drop in ambient magnetic noise precede the appearance of a voice? Such correlations could illuminate the mechanism (maybe the communicators draw energy from the EM field, causing a dip).
  3. Random Output Influence Tests: Inspired by Barušs’s approach and REG studies, set up a dedicated computer running various pseudo-random generators – from simple coin-flip (yes/no) outputs to random word or letter string generators. Before each run, an experimenter (or medium) vocally poses a question or requests a specific output (e.g., “Please give us a sign. Answer yes or no: …”). The program then produces a random series. With a sufficiently large number of trials, see if the answers are significantly non-random (e.g., if “yes” appears more often when the correct answer is yes, beyond chance probability). This method can be automated and statistically analyzed. It has the advantage of being quantitative. To enhance it, one could incorporate feedback: if an encouraging result occurs (like a correct answer), acknowledge it and encourage more, almost like training the communication channel. Over many sessions, this might improve if a real influence is learning how to manipulate the RNG. One can also vary conditions – test with a known medium present versus no person present, test in a reputed haunted location vs. a normal lab – to see if results differ. A positive outcome (even a small deviation from randomness) under controlled conditions would provide replicable evidence of an anomalous mind-machine interaction, possibly the same force behind ITC messages but measured in a simpler form.
  4. Audio-Based Apparition & EVP Triggers: Develop an audio system that plays controlled “noise” and listens for voices, but with added triggers to validate responses. For example, use a speech synthesis program to ask questions aloud into a room (so no human voice is needed on site), and have multiple recorders capture any responses. The system could use different types of noise carriers in the background: pure white noise, scanning radio snippets (ghost box style), or even specific frequencies (like a 1500 kHz tone or Spiricom-like tone set). Crucially, intersperse control periods of silence or randomized questions that have no correct answer, to filter out coincidental radio chatter or pareidolia. If an intelligent reply is recorded (e.g., the synthesized voice asks “How many people are here?” and a voice responds “Five” when indeed five investigators are next door), that would be compelling. By using AI speech recognition on the recordings, one can objectively flag potential answers and then have independent analysts verify what was said (to reduce the bias of eager listeners “hearing what they want”). This methodology blends old EVP with modern automation and could run continuously overnight – essentially an electronic séance on loop, patiently awaiting a reply. An automated speech-flagging sketch also follows this list.
  5. Cross-Modal Triggers and Validation: Design an experiment where one instrument’s output is supposed to be influenced in tandem with another’s. For instance, ask that if a spirit is present, they should create a fluctuation in a magnetic sensor and a specific word in a random text output within the same timeframe. Or perhaps coordinate a voice and an image: “If you’re really there, speak the word LONDON and also show a light on the sensor.” These paired triggers, if met, have an enormously low chance of random coincidence, providing strong evidence of directed influence. One historical anecdote in ITC is that of Adolf Homes (Germany), who received computer texts and simultaneously printed outputs on a needle printer that corroborated each other. Designing experiments that require two independent systems to give a correlated result can greatly increase confidence that something extraordinary is happening, not just a fluke of one device. The downside is it asks a lot of the communicators, but it sets a measurable bar for them to aim for.
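For methodology 1, a minimal sketch of the disk-activity logging using only the Python standard library: it polls a watched directory, hashes every file, and appends any creation, modification, or deletion to a timestamped log for later comparison against the control machine. The watched path, log file name, and polling interval are illustrative placeholders, not part of any existing setup.

```python
import hashlib
import json
import os
import time
from datetime import datetime, timezone

WATCH_DIR = "/itc/active_machine/disk"   # illustrative path to the monitored volume
LOG_FILE = "itc_file_events.jsonl"
POLL_SECONDS = 1.0

def snapshot(directory):
    """Map each file name in the directory to a SHA-256 hash of its contents."""
    state = {}
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if os.path.isfile(path):
            with open(path, "rb") as fh:
                state[name] = hashlib.sha256(fh.read()).hexdigest()
    return state

def log_event(kind, name, digest):
    """Append one timestamped JSON line per detected change."""
    event = {
        "utc": datetime.now(timezone.utc).isoformat(),
        "event": kind,          # "created", "modified", or "deleted"
        "file": name,
        "sha256": digest,
    }
    with open(LOG_FILE, "a") as fh:
        fh.write(json.dumps(event) + "\n")

previous = snapshot(WATCH_DIR)
while True:                      # runs until stopped; in practice, for the whole session
    time.sleep(POLL_SECONDS)
    current = snapshot(WATCH_DIR)
    for name in current.keys() - previous.keys():
        log_event("created", name, current[name])
    for name in previous.keys() - current.keys():
        log_event("deleted", name, previous[name])
    for name in current.keys() & previous.keys():
        if current[name] != previous[name]:
            log_event("modified", name, current[name])
    previous = current
```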

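For the automated flagging in methodology 4, a sketch that assumes the open-source openai-whisper speech-recognition package (and ffmpeg for decoding) is installed; the file-name pattern and the no-speech cutoff are illustrative. Anything the script flags would still go to independent human analysts, as described above, rather than being treated as a confirmed voice.

```python
import glob

import whisper  # assumes the open-source `openai-whisper` package is installed

NO_SPEECH_CUTOFF = 0.5   # illustrative: discard segments the model rates as probably not speech

model = whisper.load_model("base")

def flag_candidate_voices(wav_pattern="overnight_session_*.wav"):
    """Scan recorded clips and report any segment that looks like intelligible speech."""
    candidates = []
    for path in sorted(glob.glob(wav_pattern)):
        result = model.transcribe(path)
        for seg in result["segments"]:
            if seg["no_speech_prob"] < NO_SPEECH_CUTOFF and seg["text"].strip():
                candidates.append({
                    "file": path,
                    "start_s": seg["start"],
                    "end_s": seg["end"],
                    "text": seg["text"].strip(),
                })
    return candidates

if __name__ == "__main__":
    for hit in flag_candidate_voices():
        print(f'{hit["file"]} [{hit["start_s"]:.1f}-{hit["end_s"]:.1f}s]: "{hit["text"]}"')
```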
Across all new methodologies, some general best practices should be applied: use blinding where possible (analysts who transcribe or judge EVP responses should not know what question was asked, to avoid bias in interpretation); always include control devices and control periods (to differentiate environment noise from phenomenon); replicate experiments many times; and document all meta-data (time, geomagnetic indices, moon phase, operator mood, anything that might later show a pattern when correlated with successes).
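One possible shape for the meta-data record just described, written as one JSON line per session so that later correlation against geomagnetic indices or other variables stays straightforward; the field names and values below are illustrative, not a fixed schema.

```python
import json
from datetime import datetime, timezone

def record_session(log_path, **metadata):
    """Append one JSON line describing an experimental session."""
    entry = {"utc": datetime.now(timezone.utc).isoformat(), **metadata}
    with open(log_path, "a") as fh:
        fh.write(json.dumps(entry) + "\n")

# Illustrative usage: fill in whatever the protocol tracks per session.
record_session(
    "itc_sessions.jsonl",
    location="lab A",
    operators=["J.D."],
    kp_index=2.3,
    moon_phase="waning gibbous",
    operator_mood="calm",
    events_flagged=0,
)
```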

Crucially, researchers should maintain a balance of open-mindedness and skepticism: we must be open to novel forms of communication, but also exhaustive in ruling out conventional explanations. That means when a result comes, attempt to debunk it oneself: check if a stray radio station, a glitch, or human contamination could be the cause. Only when those are eliminated does it go into the ITC evidence log. Over time, this approach will yield a collection of high-quality events (even if rare) that can be further analyzed.

Conceptual Design: An Operator-Independent ITC Device

Based on the above considerations, we can outline a design for an ideal ITC apparatus that minimizes human-dependent factors and maximizes reliable detection of anomalous communications. The goal is a self-contained electronic system that provides multiple channels for possible contact, thoroughly logs all data, and reduces false positives. Below is a breakdown of the proposed device:

  • Hardware (Sensors and Interfaces): The device would be built into a shielded enclosure (a Faraday cage or at least a grounded metal chassis) to block normal RF interference. It would contain a suite of sensors covering different physical modalities:
    • Audio: High-quality microphone and a software-defined radio receiver (with a range from, say, VLF ~10 kHz up through HF ~30 MHz or higher) for capturing both audio-frequency phenomena and radiofrequency signals. The radio can be set to a quiet frequency or even rapidly scan (with software demodulating noise). A small speaker or white-noise generator is included to emit adjustable noise (acoustic or RF or both) as a medium when needed.
    • Environmental: Tri-axis magnetometer and electric field sensor to monitor EM field changes; temperature and barometric sensors (some theories suggest slight temperature drops during spirit activity); perhaps a geophone or accelerometer to detect vibrations (in case of raps or movement).
    • Optical: A low-light video camera or photodiode array to catch light anomalies or even attempt video ITC (the camera could watch a screen that shows visual noise patterns or a fog chamber for visual ITC).
    • Random sources: A built-in true random number generator (TRNG), such as a diode noise-based RNG or quantum random source, whose output can be monitored for manipulation.
    Importantly, all sensors would be time-synchronized via a precise clock. The device might use something like a Raspberry Pi or a small single-board computer to interface with these sensors via ADCs (analog-to-digital converters) and digital inputs.
  • Software (Data Logging and AI Analysis): The software would run largely autonomously. Its primary function is to log all sensor data continuously with timestamps to an internal storage (with redundancies and error-checking to ensure integrity). But beyond raw logging, it would perform real-time analysis looking for anomalies:
    • Audio analysis: Using speech-to-text algorithms and spectrum analysis to flag any segment of audio or radio reception that contains structured voice-like content (e.g., detecting formant frequencies of human speech or recognized words). For example, if the microphone picks up a voice when no one is in the room, the software should alert that and store the clip. It can use a pretrained AI model to distinguish human speech from random noise with high confidence, to eliminate false triggers like chair creaks or radio interferences that don’t form words.
    • Text generation and monitoring: The software could also present a writable text interface (like an editable notepad file on screen or a command-line prompt) that invites input. It will monitor that text buffer and its backing file for any change. If a string of text appears (not generated by the program itself), that’s immediately logged and compared to dictionaries (to see if it’s intelligible language). Even if no full sentences appear, any random ASCII character insertion could be noted.
    • Event correlation: The program will cross-correlate sensor streams. For instance, if a spike in the magnetic field occurs at the same moment as an EVP detection, it will note that link. This helps identify patterns (e.g., does a certain EM disturbance always precede a voice? Do temperature drops coincide with text appearances?). A minimal coincidence-window sketch of this step follows the design outline.
    • Interactive mode: A subroutine can periodically pose questions via a speech synthesizer or display text on a screen (e.g., “Is anyone here?”). The device doesn’t “expect” an answer per se, but it marks the timeframe after each prompt and intensifies analysis then (since responses in ITC often come shortly after a question). It could even generate simple test patterns: for example, displaying a random number or word and asking the communicator to repeat it via audio. This tests for intelligent coupling between modalities.
    • Adaptive noise: If the system detects some anomaly, it can adjust the noise output or channel. Say a faint voice is detected at 18 MHz; the system could center more noise energy there to see if it strengthens communication (similar to providing a stronger carrier). Or if text starts appearing when the computer is idle, it might leave certain memory sections open and idle intentionally to facilitate more of that.
    All software decisions and detected “events” are recorded, but crucially, nothing is ever erased or overwritten. The raw data is preserved for offline human review. This prevents the system’s AI from accidentally ignoring or discarding something truly novel that it wasn’t trained to recognize. The AI’s role is to flag and triage, not to be the final judge.
  • Communication Protocols: The device presents multiple “pathways” for a potential communicator to use, essentially multiple choice of channels:
    1. Direct text entry – a blank text file or a simple word processor window open, waiting. (In case the entity can manipulate binary memory like in Webster’s case.)
    2. Voice input – the microphone and radio listening for spoken words. The system can output an audible greeting and background noise to invite use of the audio channel.
    3. Yes/No or simple RNG influence – an on-screen coin flip or random dot display that the entity could try to bias (this is more abstract, but might be easier for a weak influence to affect).
    4. Visual symbols – perhaps a grid of random pixels or an on-screen pendulum that could be influenced.
    Each channel is a protocol in itself. Part of the design is to find out which modality, if any, is most amenable to ITC under controlled conditions. For instance, maybe it’s far easier for entities to mess with a radio frequency than to type on a modern PC, or vice versa. By providing all options, we cover our bases. The device could also support remote collaboration: for example, have two identical devices in different locations and link them via internet (in a secure, time-synced way) to see if any simultaneous phenomena occur or if a message left in one appears in the other (a modern twist on the classic cross-correspondences concept, using technology). A crucial part of the protocol is authentication and redundancy. If a meaningful message comes through, the device can ask for confirmation via a different channel: e.g., if a voice says “Hello”, the screen could display “Hello received. Please confirm by printing the same word to text.” This may sound ambitious, but having a built-in way to challenge the communicators for consistency can filter out random noise vs. a true contact capable of multi-channel influence.
  • Reliability and Bias Reduction: This design is meant to operate with minimal human intervention. By automating prompts and analysis, it reduces the chance of human psychical influence (or subconscious PK) skewing results – though one might argue a human operator’s absence might also remove a source of psychic energy that spirits use. To account for that, one could run the device both with and without humans present and compare: maybe initially a human medium can sit with it to “prime” the channel, and later it runs alone. All outcomes are logged impartially by the machine. The presence of shielding and multiple controls (like the parallel control computer or shielded radio) ensures that normal environmental noise is unlikely to produce false hits. For example, if a stray radio broadcast or random thermal noise appears to say “hi,” it’s exceedingly unlikely that at that exact moment the magnetometer also spikes in a distinctive way and the control radio in the Faraday cage remains silent. The layered verification in the design is aimed at improving the signal-to-noise ratio and confidence level of any detection. Moreover, the use of AI to identify voices or text means we’re not relying on eager ears or eyes that might imagine a pattern. The algorithms can be calibrated to only flag audio that matches known phonetic structures or text that forms dictionary words. While the AI might miss a truly novel form of communication, it will vastly cut down on “false positives” that often plague EVP sessions (where one person hears “Get out” in static and another hears “Help” and so on). Everything the AI flags can be double-checked by human experts, but the initial screening is unbiased and consistent.
  • Replicability: The device should be built from off-the-shelf components and open-source software, so that multiple copies can be made by different researchers. This is key – if only one lab has a “magic box,” it’s not science; but if ten labs build it and five of them start seeing similar anomalies, we have something tangible. Therefore, all design files, schematics, and code would be published openly. The ideal device should also self-calibrate and record calibration data (for instance, it could include a reference noise source or signal generator to periodically verify that its sensors are working correctly and not drifting). This way, if an anomaly is recorded, we know it wasn’t just an instrument fault.
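As referenced under “Event correlation” above, a minimal sketch of the coincidence check between two timestamped event streams (say, EVP detections and magnetometer spikes): it pairs events that fall within a fixed window and estimates how many such pairs two independent Poisson processes would produce by chance. The window length and the example timestamps are illustrative only.

```python
def coincidences(times_a, times_b, window_s=2.0):
    """Return (t_a, t_b) pairs from two sorted timestamp lists within `window_s` seconds of each other."""
    pairs, j = [], 0
    for ta in times_a:
        while j < len(times_b) and times_b[j] < ta - window_s:
            j += 1                                   # safe: both lists are sorted ascending
        k = j
        while k < len(times_b) and times_b[k] <= ta + window_s:
            pairs.append((ta, times_b[k]))
            k += 1
    return pairs

def expected_chance_pairs(n_a, n_b, duration_s, window_s=2.0):
    """Rough expected coincidence count if both streams were independent Poisson processes."""
    return n_a * n_b * (2 * window_s) / duration_s

# Illustrative: EVP detections vs. magnetometer spikes over an 8-hour run (timestamps in seconds).
evp_times = [1234.5, 8000.2, 20001.7]
mag_times = [1235.1, 15500.0, 20002.9]
print("observed pairs:", coincidences(evp_times, mag_times))
print("expected by chance:", expected_chance_pairs(len(evp_times), len(mag_times), 8 * 3600))
```

A count of observed pairs far above the chance expectation would be the kind of layered, cross-sensor evidence the design above is aiming for.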

In operation, the ideal ITC device might look like a small console or node that one can place in any location – a haunted house, a lab, a medium’s séance room – and leave running 24/7. It would quietly monitor multiple channels, perhaps periodically announcing “listening” or asking a question out loud. It could run off battery or isolated power to avoid grounding issues and have internal clock synchronization via GPS (so that logs from different units worldwide can be compared to see if, say, a global event coincided with multiple ITC occurrences). All data could be uploaded to a central repository for analysis, creating a large dataset for researchers.

By removing the need for a human operator to actively manage each session, this design aims to eliminate psychological bias and fraud possibilities – no one can accuse a hopeful experimenter of imagining results because the machine will have the raw data. It also removes intentional hoaxing since everything is sealed and logged (any tampering of the device would show in the logs). If an anomalous communication is captured, it will stand on its own merits with a precise digital record.

Finally, this device emphasizes improving reliability by demanding repeatability: an event that triggers in one location can be cross-checked by deploying the device in another. If the design is sound, the device essentially provides a standardized platform for ITC – much like a telescope for astronomers – focusing effort and allowing results to be compared apples-to-apples. It turns ghost hunting from a free-form art into an experimental science. Of course, this does not guarantee that spirits or time travelers will line up to talk – but if they do try, we’ll have the best chance yet to capture it in a credible, analyzable manner.

In conclusion, the lessons from The Vertical Plane and other ITC cases drive home the need for a multidisciplinary, technology-heavy approach: utilizing computing, radio, audio, and sensor analytics in tandem. By doing so, we honor the strange, mixed signals of past phenomena (text, sound, EM disturbances) and acknowledge that the underlying mechanism might involve crossing the boundaries of physics as we know them. The proposed methodologies and device aim to create a robust bridge – one that does not depend on a gifted medium or happenstance – so that if there is a genuine other trying to communicate, we can reliably meet them halfway and document the encounter for the world to see, in a scientific light.

Sources:

  • Webster, Ken. The Vertical Plane. Thorsons, 2017 (orig. 1989). – First-hand account of the Dodleston computer communications, including technical details of the BBC Micro usage and observations of related phenomena.
  • 2109 communications in The Vertical Plane – provided a theoretical link between geomagnetic anomalies and ITC.
  • Geek News Central (2002). “Ghosts making contact via computer?” – Mentions Ken Webster’s 1984–85 messages (≈250 in total) and the concept of spirits planting files on computers (geeknewscentral.com).
  • Macy, Mark (worlditc.org and macyafterlife.com) – Documents on Luxembourg ITC events: e.g., the 1991 Jeannette Meek letter and 1992 Hal Roach image file received on Maggy Harsch-Fischbach’s computer (macyafterlife.com). Also describes specialized ITC equipment (UV light, crystals) used in Luxembourg (macyafterlife.com).
  • Bacci Experiments: Scole Experiment Report (2005) by Cardoso, Festa, Fontana, et al. – Detailed investigation of Marcello Bacci’s direct radio voice sessions, confirming voices on his tube radio while a duplicate radio stayed silent, and voices continuing even after removal of RF receiver components with no change in EM field readings (thescoleexperiment.com).
  • Strange Phenomenon (2020). “Instrumental Transcommunication: The Dead Speak!” – Transcript highlighting Bacci’s work (radio in Faraday cage test) and giving historical context on EVP (Raudive’s use of the 1445–1500 kHz “Jürgenson frequency” range) and Spiricom/O’Neil’s challenges (strange-phenomenon.com).
  • Barušs, Imants. Journal of Scientific Exploration 21(1), 2007. – Experimental test of ITC using random text generators; reported statistically significant deviations (yes/no answers correct beyond chance) (ir.lib.uwo.ca).
  • Cardoso, Anabela. Electronic Voices: Contact with Another Dimension (2009), and personal ITC journal papers – Documenting long-term direct radio voice research, corroborating many technical aspects initially demonstrated by Bacci (e.g., voices on detuned radio, importance of operator intention). [Specific citations from her work are integrated via references above.]
