Around the same time I bought that core plane, I bought a bag of bits from someone in eastern Europe: a little ziplock bag with about 512 loose ferrite cores of this same size. Yep, that's 64 bytes in a baggie! It is pretty cool to see loose bits like that.
But if you think these bits are tiny, they got a lot smaller when production moved to machine weaving, maybe half or a quarter of this size.
Of course, "tiny" is relative, isn't it? At least you can see these bits, not like your newfangled semiconductor memory.
I actually have core memory that I built myself on my desk right now! It’s only one bit, but it consists of three cores: the two additional ones aren’t memory, they’re there to cancel out the big inductive pulse that occurs when “accessing” the core, which is separate from the actual memory kickback pulse. (Real core memory does this with clever wiring across the entire plane of cores, no extra cores needed.)
It’s great fun and actually very simple once you understand the (already relatively simple) concept. My “magnetic core bit” is literally just a few wires going through the cores (https://live.staticflickr.com/7874/33365624228_61caed6e71_k_... ), though I later added a couple of transistors to form a sense amp. I got the cores by just looking for tiny ferrite cores on eBay; it’s highly unlikely that they were meant as memory cores.
For driving them, I just set my function generator to its maximum output, 10 V into a 50 Ω output impedance. Anything less than that and you barely get a kickback, so it’s really not efficient in any way.
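The behavior described above can be sketched as a toy Python model. This is just an idealized square-hysteresis-loop abstraction, not how real hardware is driven; the `Core` class and the threshold value are made up for illustration:

```python
# Toy model of a single magnetic core with an idealized square
# hysteresis loop: the core only flips when the drive current
# exceeds a threshold. A flip produces a large flux change, which
# is the "kickback" pulse the sense wire picks up.

FLIP_THRESHOLD = 1.0  # arbitrary units; real cores need a few hundred mA

class Core:
    def __init__(self):
        self.state = 0  # magnetization direction encodes the bit

    def drive(self, current):
        """Apply a drive current; returns True if the core flipped
        (a flip induces the kickback pulse on the sense wire)."""
        target = 1 if current > 0 else 0
        if abs(current) >= FLIP_THRESHOLD and self.state != target:
            self.state = target
            return True   # big flux change -> sense pulse
        return False      # below threshold, or already in that state

core = Core()
core.drive(+1.0)                   # write a 1
assert core.drive(+0.5) is False   # weak drive: no flip, no pulse
assert core.drive(-1.0) is True    # a 1 was stored: reading it flips it
```

Driving it in the opposite direction and watching for the pulse is exactly the destructive read the scope trick above relies on: a pulse means a 1 was stored, no pulse means it was already a 0.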
I really enjoy core memory; my PDP-8s have 24K words and 32K words. There is a great procedure for tuning the drivers in the PDP-8 maintenance manual, which results in a set of driver boards matched to a core-plane board.
I built a small 4-bit (2 x 2) array using #2 iron nuts and L293D H-bridges as the drivers. The hysteresis band of the nuts was really narrow, so sadly it was pretty unreliable.
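The 2 x 2 arrangement works by coincident-current selection, which can be shown with a tiny Python sketch. The names and current values here are invented for illustration; this models the addressing scheme, not the poster's actual driver hardware:

```python
# Sketch of coincident-current selection in a 2x2 plane: each X and
# each Y line carries half the flip current, so only the core at the
# crossing of the two energized lines sees enough total current to
# switch. Every other core on those lines is "half-selected" and
# stays put -- provided the cores' hysteresis band is wide enough.

HALF = 0.5
THRESHOLD = 1.0

plane = [[0, 0], [0, 0]]  # 4 bits of "memory"

def write(x, y, bit):
    # The two half-currents sum to THRESHOLD only at (x, y).
    for i in range(2):
        for j in range(2):
            current = (HALF if i == x else 0) + (HALF if j == y else 0)
            if current >= THRESHOLD:
                plane[i][j] = bit

write(1, 0, 1)
assert plane == [[0, 0], [1, 0]]  # only the selected core changed
```

A narrow hysteresis band, as with the iron nuts, means half-selected cores sometimes flip anyway, which is exactly the unreliability described.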
Perhaps the most interesting fallout from core memory was the invention of FRAM, which looks like a serial EEPROM (in an 8-pin PDIP) but is in fact a small ferroelectric memory. Unlimited writes and all that.
The 6700 was a university mainframe, the only one at the Uni; everything ran on it, from payroll to research. That memory purchase doubled the size of the machine's memory. Memory cycle time was 1 µs. That's the read time: core reads are destructive, so you have to write the data you read back. Writes were faster, and a read-modify-write cycle could be done in the same time as a read.
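The destructive-read cycle described above can be sketched in a few lines of Python. This is a behavioral model only; the address and functions are hypothetical:

```python
# Sketch of the destructive-read cycle: reading a core clears it,
# so the controller must write the value back. A read-modify-write
# reuses that same restore phase to store the new value instead,
# which is why it costs no more than a plain read.

memory = {0o100: 1}  # one address, holding a 1

def destructive_read(addr):
    bit = memory[addr]
    memory[addr] = 0        # reading resets the core to 0
    return bit

def full_read(addr):
    bit = destructive_read(addr)
    memory[addr] = bit      # restore phase: write the value back
    return bit

def read_modify_write(addr, fn):
    bit = destructive_read(addr)
    memory[addr] = fn(bit)  # restore phase writes the NEW value
    return bit

assert full_read(0o100) == 1 and memory[0o100] == 1
assert read_modify_write(0o100, lambda b: b ^ 1) == 1
assert memory[0o100] == 0   # the restore phase stored the toggled bit
```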
In addition to this core RAM there was core rope memory, which was used as the ROM for the Apollo Guidance Computer, amongst others. The cores were hand-woven to program in the binary: a core could be wired to produce a 0 or a 1, and any change required the rope to be woven again.
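The weaving scheme can be modeled in a short Python sketch. This is a simplification of how real rope memory was organized (the `weave`/`read` helpers are invented for illustration), but it captures the key idea that the program is fixed at construction time by which sense lines thread through each core:

```python
# Toy model of core rope ROM: each word is defined by which sense
# lines are threaded THROUGH the core for that address. When the
# addressed core is pulsed, a threaded line picks up a pulse (1)
# and a bypassing line does not (0). Changing a bit means re-weaving.

def weave(words, width=8):
    """Return the rope: for each address, the set of sense-line
    indices threaded through that address's core."""
    return [frozenset(i for i in range(width) if (w >> i) & 1)
            for w in words]

def read(rope, addr):
    # Pulsing the addressed core energizes exactly the threaded lines.
    return sum(1 << i for i in rope[addr])

rope = weave([0b10110100, 0b00000001])
assert read(rope, 0) == 0b10110100
assert read(rope, 1) == 0b00000001
```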
You can get to the patent from here. The way I understand it, depending on how/where the mylar sheet is punched (figure 8 is the unpunched sheet, figures 9/10 show a punched one), you interrupt some of the connections, and that particular line (the primary) passes either outside or through the transformer core, so the secondary is energized or not (a 1 or 0 output). By changing the mylar sheets you can "reprogram" the microcode ROM. I'm not an electrical engineer, so my terminology might not be 100% correct :)
That's a good explanation of the transformer read-only storage (TROS) used by the Model 20 and Model 40. Microcode storage gets even more complicated, as IBM used different microcode storage techniques on different 360 models, for a combination of technical and political reasons. The fundamental problem was how to implement tens of kilobytes of ROM in a way that was affordable and could be updated in the field.
The Model 30 used CCROS (Card Capacitor Read-Only Store). Like TROS, this used a mylar sheet with holes punched in it. However, in this case, the holes were capacitively sensed; an air compressor forced the sense lines against the card. The card was the same size as a punch card, so you could punch new microcode with a keypunch.
The Model 50 and Model 65 used BCROS (Balanced Capacitor Read Only Storage). This was kind of like CCROS, but faster. It used 8 inch by 18 inch mylar sheets with an etched copper pattern like a PCB. Each sheet held 35,200 bits and the system used 16 sheets.
The Model 25 stored microcode in the regular core memory, a rather boring solution. For faster performance, the high-end System/360 machines had hardwired control rather than microcode.
Funny, I watched the embedded video yesterday. It gives a very good explanation of how the core worked, including a lot of theory and implementation details. They wire up an actual core, write a bit, and then read it back by detecting core flips with an oscilloscope.
It's a bit long, but well worth the time to watch.
Manual assembly of a 64×64 core plane for SAGE (1953) took 40 hours. On occasion, an assembler would burst into tears if a core plane failed final testing and the work was wasted. IBM rapidly introduced automation and brought the assembly time down to minutes.
In 1965, as demand for core memory increased, IBM moved some production to Japan and Taiwan, where labor costs were so low that manual stringing of cores was cheaper than automated assembly. Unfortunately for IBM, competitors also moved production to Asia, negating the advantage IBM had from automation.
It's surely super fussy, but at least it's a repeating pattern and not as fussy as wiring rope memory. That's a form of read-only memory used in the space program where the bits are wired in by hand to represent the stored program bit by bit.
They could fix bad cores in a core plane. They cut the X and Y wires to remove the bad core, then threaded new X and Y wires through the old cores and the new core. They tested planes after stringing the X and Y wires, before adding the sense and inhibit wires. If they found a problem after the sense and inhibit wires were threaded through, I don't know if they needed to remove them or if they could add a splice.
Yes, but since you mentioned the repeating pattern (or absence thereof), I think we thought you were talking about mistakes in the bit pattern, and not material failure? And in this case, I certainly hope they just read out all the bits and compared them against expectations before launch.