Why are PDP-7-style microprogrammed instructions out of vogue?



At least some of DEC's computers, especially those in the 18-bit and 12-bit families, had these opr instructions, which contained many bitfields, each encoding something like a "subinstruction". Things like:



  • clear the accumulator

  • increment the accumulator

  • rotate the accumulator one place leftward

  • complement the accumulator

  • skip if the accumulator is zero

The nature of these simple operations is such that it's convenient to encode each one in its own bit or bitfield of the instruction word, and to have the computer execute the enabled ones in a statically scheduled manner. My understanding is that this is because they are often used together [1] and have simple encodings.



A later computer like the Z80 or ARM7 needs to fetch, decode and execute a separate instruction to perform each of these operations, which might not be as space or time efficient.



From what I can tell, using DEC-style microcoded instructions to perform any number of simple operations on a single register has fallen out of vogue, or is at least not nearly as common on modern instruction set architectures. Why is this?




[1]: Not only to load small integers into the accumulator, as in cla cll cml rtl inc to set the accumulator to 3 on the PDP-8, but also for examining or manipulating bitfields, probably long division, etc.
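
To make the idea concrete, here is a minimal Python sketch of such an "operate"-style instruction: every set bit enables one micro-operation, and the machine applies the enabled operations in one fixed, statically scheduled order. The bit assignments, the 12-bit word size and the event ordering below are simplified stand-ins rather than the exact PDP-8 encoding, and the final example is a combination in the spirit of the footnote.

    # Toy model of a DEC-style "operate" (OPR) instruction: one bit per micro-op,
    # all executed in a fixed order. Bit positions and ordering are illustrative,
    # not the real PDP-8 assignments.

    CLA = 0o0200   # clear accumulator      (hypothetical bit values)
    CLL = 0o0100   # clear link
    CMA = 0o0040   # complement accumulator
    CML = 0o0020   # complement link
    IAC = 0o0004   # increment accumulator
    RAL = 0o0002   # rotate accumulator+link left one place

    MASK12 = 0o7777  # 12-bit accumulator

    def execute_opr(word, ac, link):
        """Apply every micro-operation whose bit is set, in a fixed schedule."""
        if word & CLA: ac = 0                      # step 1: clears
        if word & CLL: link = 0
        if word & CMA: ac = ~ac & MASK12           # step 2: complements
        if word & CML: link ^= 1
        if word & IAC: ac = (ac + 1) & MASK12      # step 3: increment
        if word & RAL:                             # step 4: rotate through link
            ac, link = ((ac << 1) | link) & MASK12, (ac >> 11) & 1
        return ac, link

    # One instruction word does the work of several: clear AC and link,
    # set the link, add one, rotate it in - the accumulator ends up holding 3.
    ac, link = execute_opr(CLA | CLL | CML | IAC | RAL, ac=0o1234, link=0)
    print(oct(ac), link)   # -> 0o3 0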










  • What you're describing are VLIW architectures - except the question you have is quite unclear - adding 'Why this' to a description isn't exactly a question.

    – Raffzahn
    Apr 12 at 14:19











  • @Raffzahn I think I've identified a trend; I am asking if it's there, and if so, what's motivated it. My understanding of VLIW is that the operations are dyadic, or have variable transitivities, but on the PDP-7 et al., the operations were all strictly monadic.

    – Wilson
    Apr 12 at 14:26






  • This is a little off topic. The DEC PDP-6 had 16 variations on the Boolean operations. It used four bits out of the opcode field to specify a truth table for the corresponding Boolean operation. Thus it was able to implement 16 operations with about the same logic that it would have taken to implement just one.

    – Walter Mitty
    Apr 12 at 14:36












  • @Wilson VLIW is not intrinsically tied to any kind of operation. The basic idea is that there is no (general) decoding; instead, each function unit that can be initiated separately gets its own mark in the instruction field. Thus the decoder stage can be removed - or at least greatly simplified.

    – Raffzahn
    Apr 12 at 14:48






  • Yes, these opcodes are in the PDP-10 as well. Open the opcode list and take a close look at opcodes 400-477. If you convert the opcodes from octal to binary, you will find four bits that provide a truth table for the operation in question. SETZ has all four of these bits set to zero, and SETO has all four set to one. AND has three zeroes and a one.

    – Walter Mitty
    Apr 13 at 13:36
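
The truth-table trick described in the last two comments is easy to demonstrate. Below is a short Python sketch that computes any of the 16 two-input Boolean operations by indexing a 4-bit truth table with each pair of operand bits; the mapping of table bits to input combinations is an assumption for illustration, and the real PDP-6/PDP-10 bit numbering may differ.

    # Sixteen Boolean operations from one piece of logic: a 4-bit opcode field
    # is treated as the truth table of the operation. The bit-to-row mapping
    # here is illustrative; the actual PDP-6/PDP-10 ordering may differ.

    WORD = 36  # PDP-6/PDP-10 word length

    def boolean_op(table4, a, b):
        """Apply the two-input Boolean function whose truth table is table4
        (0..15), bitwise across two 36-bit words."""
        result = 0
        for i in range(WORD):
            row = (((a >> i) & 1) << 1) | ((b >> i) & 1)   # row index (a_bit, b_bit)
            result |= ((table4 >> row) & 1) << i
        return result

    SETZ = 0b0000   # always 0
    AND  = 0b1000   # 1 only when both inputs are 1 ("three zeroes and a one")
    IOR  = 0b1110   # inclusive OR
    SETO = 0b1111   # always 1

    a, b = 0o707070707070, 0o123456701234
    assert boolean_op(SETZ, a, b) == 0
    assert boolean_op(SETO, a, b) == (1 << WORD) - 1
    assert boolean_op(AND,  a, b) == a & b
    assert boolean_op(IOR,  a, b) == a | b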
















instruction-set microcode






asked Apr 12 at 13:51









Wilson

2 Answers

[...] had these opr instructions, which contained many bitfields which encoded something like "subinstructions"[...]




What you describe is basically a (V)LIW instruction format - at least that's what it might be called today. That's what computers started out with: separate bits for each function to be applied to the value addressed.



DEC is somewhat of a bad example here, as its accumulator instructions are a special case, already a hybrid between a clean, fully LIW design and dedicated encoding. The LIW aspect is used only for this accumulator subset.



Zuse's machines, like the Z22, might make a better example with their ability to have each and every instruction carry multiple operations.




A later computer like the Z80 or ARM7 needs to fetch, decode and execute a separate instruction to perform each of these operations,




Yes - and no. For one, not all possible combinations could be used together, resulting in illegal instructions. In fact, depending on the machine's construction, most of these combinations were illegal. And that's why dedicated instructions took over. Let's assume there are, say, 8 different operational units in the data path. Having one bit for each in the instruction word makes decoding easy, as each bit is simply wired up to the enable of a single function, resulting in a fast and simple machine structure.



Of these 256 combinations (one of which would be a nop), many would not make sense - think shifting left and shifting right, or adding and subtracting, at the same time. By encoding only the, say, 20 useful combinations into a 5-bit field, 3 bits (almost half) could be freed - at the cost of an additional decoding stage.
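
A rough sketch of this trade-off, with invented unit names and table contents: one enable bit per functional unit needs no decoding but wastes most of the 256 patterns, while a dense 5-bit code needs a small decode table (the extra stage) to recreate the eight enable lines.

    # Encoding trade-off: 8 one-per-unit enable bits vs. a dense 5-bit opcode
    # plus a decode table. Unit names and table entries are made up.

    UNITS = ["load", "add", "sub", "shl", "shr", "cmpl", "skip", "store"]

    # Dense encoding: only the useful combinations get a code at all.
    DECODE_ROM = {
        0b00000: 0b00000000,   # nop
        0b00001: 0b00000001,   # load
        0b00010: 0b00000011,   # load + add
        0b00011: 0b00001010,   # add + shift left (a "combined" instruction)
        0b00100: 0b10000001,   # load + store
        # ... up to ~20 useful combinations fit comfortably in 5 bits
    }

    def enables_from_wide(word8):
        """LIW-style: the 8 instruction bits *are* the enable lines; no decoding."""
        return word8

    def enables_from_dense(opcode5):
        """Dense encoding: an extra decode step (a dict here, a small ROM or
        gate network in hardware) recreates the enable lines."""
        return DECODE_ROM[opcode5]

    def active_units(enables):
        return [name for bit, name in enumerate(UNITS) if (enables >> bit) & 1]

    print(active_units(enables_from_wide(0b00001010)))   # ['add', 'shl']
    print(active_units(enables_from_dense(0b00011)))     # ['add', 'shl'], in 5 bits instead of 8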



Now, back in the old days, when machines were word-oriented (e.g. 36 bits in one word), there was plenty of space - even leaving unused bits. No need to add a decoding stage. Even worse, doing so would slow down execution. Well, only a bit, but it would.



The situation changed when machines became byte-oriented and variable-length instruction formats were used. Here, cramming the 8 unit lines down into a single encoded 5-bit field let the operation squeeze into one byte while leaving room for more (like a register number), without the need to fetch two bytes. Heck, it even leaves 12x8 instruction points free for other encodings/irregular instructions without needing more.




which might not be as space or time efficient.




That's partially true for the time efficiency, but not for space - space-wise it's an extreme saving, enabling more compact code. The inner workings are (or can be) still mostly the same, just less visible. Instead of setting a shift bit and an add bit, there's now an Add-And-Shift instruction.



Then again, by now encoding it into a single byte instead of a full 36-bit word, the CPU can fetch the instructions at the same speed (byte bus vs. word bus) or even 4 times the speed (word-sized bus) as before. So with memory always being the slowest part, tighter encoding not only saves space, but also speeds up execution - despite the additional decoding stage.




From what I can tell, [this] has fallen out of vogue, or are at least not nearly as common on modern instruction set architectures.




Not nearly as common on the surface is maybe the point here. For one, explicit VLIW instruction sets are still a thing (think Itanium), but more importantly, they are always an option for the internal workings of modern CPUs, where 'traditional' code first gets decoded into sub-operations, and these later get either combined into LIW-like instructions again or scheduled in parallel over different function units.



In fact, the mentioned ARM makes another good argument for it vanishing. ARM traditionally had the ability to execute every instruction conditionally (much like Zuse did first). Cool when thinking in terms of sequential execution, but a gigantic hurdle when it comes to modern CPUs with the ability to reorder instructions according to available data and function units. It makes rescheduling not just a hard task, but almost impossible. Even worse, ARM featured DEC-like condition handling, where each and every load changed the flags.



Bottom line: Just because something isn't (always) visible to the user-side programmer, doesn't mean it isn't there.






answered Apr 12 at 14:45

Raffzahn

  • The Transmeta CPUs were a somewhat recent example of CPUs that used a (proprietary) VLIW instruction set internally, and another completely different one (namely x86) externally. In the Itanium, the VLIW bundles have explicit parallelism semantics (Intel calls this Explicit Parallel Instruction Computing, EPIC), i.e. one VLIW bundle is two 64-bit words holding three 41-bit instructions and a 5-bit "template" that tells the CPU what kinds of instructions the three slots contain and what the data dependencies are.

    – Jörg W Mittag
    Apr 13 at 7:06











  • Can you share (by way of a link to the reference, or by adding to your answer) an example of how the Z22 instruction format allowed more operations to be specified in a single word? It must be completely unlike the earlier Z4 if that's the case.

    – Wilson
    Apr 13 at 12:03






  • I tried to google it but my German is really quite bad by now.

    – Wilson
    Apr 13 at 12:04











  • @Wilson Ouch. OK, to start with, the Z4 is an original Zuse (the man himself) design, while the Z22 was conceived by Theodor Fromme (call it design lead) with much help from Heinz Zemanek and Rudolf Bodo, who both designed the Mailüfterl and made the schematics for the Z22. The idea was to design the tube-based Z22 in a way that it could be transistorized later, which happened with the Z23 - quite remarkable planning for that time. ... more to follow

    – Raffzahn
    Apr 13 at 13:52







  • @Wilson you asked for it: How were Zuse Z22 Instructions Encoded? ... wasted another perfectly good day - even included some German for you to test your knowledge :))

    – Raffzahn
    Apr 14 at 0:10
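
To illustrate the bundle format Jörg W Mittag's comment mentions, here is a small Python sketch that unpacks a 128-bit Itanium bundle into its 5-bit template and three 41-bit instruction slots. It assumes the usual layout with the template in the lowest bits; the architecture manual is the authoritative reference for the field definitions.

    # Unpacking an IA-64 (Itanium) bundle: 128 bits = 5-bit template plus three
    # 41-bit instruction slots. Assumed layout: template in bits 0-4, slot 0 in
    # bits 5-45, slot 1 in bits 46-86, slot 2 in bits 87-127.

    SLOT_BITS = 41
    SLOT_MASK = (1 << SLOT_BITS) - 1

    def unpack_bundle(lo64, hi64):
        """Split a 128-bit bundle (given as two 64-bit words) into template and slots."""
        bundle = (hi64 << 64) | lo64
        template = bundle & 0b11111
        slots = [(bundle >> (5 + i * SLOT_BITS)) & SLOT_MASK for i in range(3)]
        return template, slots

    # The 5-bit template tells the CPU which execution-unit types (M, I, F, B, ...)
    # the three slots contain and where the parallelism "stops" are - the
    # explicit part of EPIC.
    template, slots = unpack_bundle(0x0123456789abcdef, 0xfedcba9876543210)
    print(bin(template), [hex(s) for s in slots])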


















The PDP-7 was a one-address machine. All instructions occupied 18 bits. The operations that manipulated the accumulator didn't reference memory, and therefore didn't need an address. But the address bits were in the instruction anyway, because all instructions were encoded in an 18-bit word. So why not use these otherwise unused bits to get more use out of the instruction word?



Once you get to opcodes with a variable number of operand addresses, the need to economize in this way goes away.
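
As a rough picture of what this answer describes (field widths as commonly given for the PDP-7: a 4-bit opcode, an indirect bit and a 13-bit address; the opcode value and flag assignments below are illustrative, not the real PDP-7 encoding):

    # Sketch of an 18-bit single-address instruction word, roughly PDP-7-shaped:
    #   bits 17..14: opcode   bit 13: indirect   bits 12..0: memory address
    # Memory-reference instructions use the address field; the "operate" opcode
    # needs no address, so its 13 address bits become individual micro-op flags.

    def fields(word18):
        opcode   = (word18 >> 14) & 0o17
        indirect = (word18 >> 13) & 1
        address  = word18 & 0o17777        # 13 bits -> 8K words directly addressable
        return opcode, indirect, address

    OPR = 0o17                             # assumed opcode for the operate group

    def decode(word18):
        opcode, indirect, address = fields(word18)
        if opcode == OPR:
            # No memory operand: reuse the 13 low bits as micro-operation enables.
            return ("operate", [bit for bit in range(13) if (address >> bit) & 1])
        return ("memory reference", opcode, indirect, address)

    print(decode(0o041234))   # a memory-reference instruction, address 0o1234
    print(decode(0o740205))   # an operate instruction with micro-op bits 0, 2 and 7 set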






  • To add to this, the PDP-7 is from an era when it was common for the width of the address bus to be less than the width of the data bus. In this case, you could fit a full 13-bit address into an 18-bit instruction word, which meant that you could pack an entire instruction (including the operand address) into a single word. Compare this to a CPU like the 6502 with 8-bit words and 16-bit addresses: if you can't fit an address into an instruction word then naturally they must come in extra bytes that follow the opcode byte. (continued)

    – Ken Gober
    Apr 13 at 14:56






  • The flip side of being able to fit the address into the instruction word was that you wasted a lot of bits for instructions that did not need an operand address or jump address. So the PDP-7 style sub-instructions were essentially a way to use unused bits in the instruction word to encode additional instructions, allowing many more instructions to be added without the cost of widening the word size, the only caveat being that the extra instructions had to be ones that didn't need to include an address.

    – Ken Gober
    Apr 13 at 15:00






  • @KenGober, I think you and I are saying the same thing, in different words. Thanks for adding a little clarity.

    – Walter Mitty
    Apr 13 at 19:36











Your Answer








StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "648"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);

else
createEditor();

);

function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);



);













draft saved

draft discarded


















StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fretrocomputing.stackexchange.com%2fquestions%2f9666%2fwhy-are-pdp-7-style-microprogrammed-instructions-out-of-vogue%23new-answer', 'question_page');

);

Post as a guest















Required, but never shown

























2 Answers
2






active

oldest

votes








2 Answers
2






active

oldest

votes









active

oldest

votes






active

oldest

votes









11















[...] had these opr instructions, which contained many bitfields which encoded something like "subinstructions"[...]




What you describe is basically a (V)LIW instruction format - at least that's what it might be called today. That's what computers started out with. Separate bits for each function to be applied to the value addressed.



The DEC is somewhat of a bad example here, as its accumulator instructions are a special kind, already a bastard between clean all over LIW and dedicated encoding. The LIW aspect is used only for this accumulator subset.



Zuse's machines, like the Z22, might make a better example with their ability to have each and every instruction carry multiple operations.




A later computer like the Z80 or ARM7 needs to fetch, decode and execute a separate instruction to perform each of these operations,




Yes - and no. For one, not all possible combinations could be used together, resulting in illegal instructions. In fact, depending on the machine's construction, most of these combinations were illegal. And that's why dedicated instructions took over. Let's assume, there are like 8 different operational units in the data path. Having one bit for each in the instruction word makes easy decoding, as each would just be wired up with the enable for a single function, resulting in a fast and simple machine structure.



Of these 256 combinations (of which one would be a nop), many would not make sense - think shifting left and shifting right, or adding and subtracting at the same time. By encoding only the 20 useful combinations into a 5 bit field, 3 bits (almost half) could be freed - at the cost of an additional decoding stage.



Now, back in the old times, when machines were word-orientated (e.g. 36 bits in one word), there was much space - even resulting in unused bits. No need to add a decoding stage. Even worse, doing so would slow down the execution. Well, only a bit, but it would.



The situation changed when machines became byte-orientated and variable length instruction formats were used. Here cramping down the 8 unit lines into a single encoded 5-bit field enabled it to squeeze into a byte while leaving room for more (like a register number), without the need to fetch two bytes. Heck, it even leaves 12x8 instruction points for other encodings/irregular instructions without needing more.




which might not be as space or time efficient.




That's partially true for the time efficiency, but not space - space-wise it's an extreme saving enabling more compact code. The inner workings are (can be) still (mostly) the same, but less visible. Instead of setting a shift and an add bit, there's now a Add-And-Shift instruction.



Then again, by now encoding it into a single byte instead of a full 36 bit word, the CPU can fetch the instructions at the same speed (byte bus vs. word bus) or even 4 times the speed (word sized bus) than before. So with memory always being the slowest part, tighter encoding does not only save space, but also speeds up execution - despite the additional decoding stage.




From what I can tell, [this] has fallen out of vogue, or are at least not nearly as common on modern instruction set architectures.




Not nearly as common on the surface is maybe the point here. For one, explicit VLIW instructions are still a thing (think Itanium), but more importantly, they are always an option for internal workings of modern CPUs. Where 'traditional' code gets first decoded into sub-operations, and these later get either combined to LIW instructions again, or scheduled in parallel over different function units.



In fact, the mentioned ARM makes another good point for it to vanish. ARM had traditionally the ability to have every instruction being executed conditionally (much like Zuse did first). Cool when thinking in sequential execution, but a gigantic hurdle when it comes to modern CPUs with the ability to reorder instructions according to available data and function units. It makes rescheduling not just a hard task, but almost impossible. Even worse, ARM featured DEC-like condition handling, where each and every load did change the flags.



Bottom line: Just because something isn't (always) visible to the user-side programmer, doesn't mean it isn't there.






share|improve this answer




















  • 1





    The Transmeta CPUs were a somewhat recent example of CPUs that used a (proprietary) VLIW instruction set internally, and another completely different one (namely x86) externally. In the Itanium, the VLIW bundles have explicit parallelism semantics (Intel calls this Explicit Parallel Instruction Computing (EPIC)), i.e. one VLIW bundle is 2 64 bit words with 3 41 bit instructions and a 5 bit "template" that tells the CPU what kinds of instructions the three instructions are and what the data dependencies are.

    – Jörg W Mittag
    Apr 13 at 7:06











  • Can you share (by way of a link to the reference or by adding to you answer) an example of how the Z22 instruction format allowed more operations to be specified in a single word? It must be completely unlike the earlier Z4 if that's the case.

    – Wilson
    Apr 13 at 12:03






  • 1





    I tried to google it but my German is really quite bad by now.

    – Wilson
    Apr 13 at 12:04











  • @Wilson Ouch. Ok, to start with, Z4 is an original Zuse (the man himself) design, while the Z22 was imagined by Theodor Fromme (call it design lead) with much help from Heinz Zemanek and Rudolf Bodo both designed the Mailüfterl and made the schematics for the Z22. The idea was to design the tube based Z22 in a way that it may be transistorized later. Which happend with the Z23. Which is a quite remarkable planing at that time. ... more to follow

    – Raffzahn
    Apr 13 at 13:52







  • 2





    @Wilson you asked for it: how-were Zuse Z22 Instructions Encoded? ... wasted another perfect good day - even included some German for you to test your knowledge :))

    – Raffzahn
    Apr 14 at 0:10















11















[...] had these opr instructions, which contained many bitfields which encoded something like "subinstructions"[...]




What you describe is basically a (V)LIW instruction format - at least that's what it might be called today. That's what computers started out with. Separate bits for each function to be applied to the value addressed.



The DEC is somewhat of a bad example here, as its accumulator instructions are a special kind, already a bastard between clean all over LIW and dedicated encoding. The LIW aspect is used only for this accumulator subset.



Zuse's machines, like the Z22, might make a better example with their ability to have each and every instruction carry multiple operations.




A later computer like the Z80 or ARM7 needs to fetch, decode and execute a separate instruction to perform each of these operations,




Yes - and no. For one, not all possible combinations could be used together, resulting in illegal instructions. In fact, depending on the machine's construction, most of these combinations were illegal. And that's why dedicated instructions took over. Let's assume, there are like 8 different operational units in the data path. Having one bit for each in the instruction word makes easy decoding, as each would just be wired up with the enable for a single function, resulting in a fast and simple machine structure.



Of these 256 combinations (of which one would be a nop), many would not make sense - think shifting left and shifting right, or adding and subtracting at the same time. By encoding only the 20 useful combinations into a 5 bit field, 3 bits (almost half) could be freed - at the cost of an additional decoding stage.



Now, back in the old times, when machines were word-orientated (e.g. 36 bits in one word), there was much space - even resulting in unused bits. No need to add a decoding stage. Even worse, doing so would slow down the execution. Well, only a bit, but it would.



The situation changed when machines became byte-orientated and variable length instruction formats were used. Here cramping down the 8 unit lines into a single encoded 5-bit field enabled it to squeeze into a byte while leaving room for more (like a register number), without the need to fetch two bytes. Heck, it even leaves 12x8 instruction points for other encodings/irregular instructions without needing more.




which might not be as space or time efficient.




That's partially true for the time efficiency, but not space - space-wise it's an extreme saving enabling more compact code. The inner workings are (can be) still (mostly) the same, but less visible. Instead of setting a shift and an add bit, there's now a Add-And-Shift instruction.



Then again, by now encoding it into a single byte instead of a full 36 bit word, the CPU can fetch the instructions at the same speed (byte bus vs. word bus) or even 4 times the speed (word sized bus) than before. So with memory always being the slowest part, tighter encoding does not only save space, but also speeds up execution - despite the additional decoding stage.




From what I can tell, [this] has fallen out of vogue, or are at least not nearly as common on modern instruction set architectures.




Not nearly as common on the surface is maybe the point here. For one, explicit VLIW instructions are still a thing (think Itanium), but more importantly, they are always an option for internal workings of modern CPUs. Where 'traditional' code gets first decoded into sub-operations, and these later get either combined to LIW instructions again, or scheduled in parallel over different function units.



In fact, the mentioned ARM makes another good point for it to vanish. ARM had traditionally the ability to have every instruction being executed conditionally (much like Zuse did first). Cool when thinking in sequential execution, but a gigantic hurdle when it comes to modern CPUs with the ability to reorder instructions according to available data and function units. It makes rescheduling not just a hard task, but almost impossible. Even worse, ARM featured DEC-like condition handling, where each and every load did change the flags.



Bottom line: Just because something isn't (always) visible to the user-side programmer, doesn't mean it isn't there.






share|improve this answer




















  • 1





    The Transmeta CPUs were a somewhat recent example of CPUs that used a (proprietary) VLIW instruction set internally, and another completely different one (namely x86) externally. In the Itanium, the VLIW bundles have explicit parallelism semantics (Intel calls this Explicit Parallel Instruction Computing (EPIC)), i.e. one VLIW bundle is 2 64 bit words with 3 41 bit instructions and a 5 bit "template" that tells the CPU what kinds of instructions the three instructions are and what the data dependencies are.

    – Jörg W Mittag
    Apr 13 at 7:06











  • Can you share (by way of a link to the reference or by adding to you answer) an example of how the Z22 instruction format allowed more operations to be specified in a single word? It must be completely unlike the earlier Z4 if that's the case.

    – Wilson
    Apr 13 at 12:03






  • 1





    I tried to google it but my German is really quite bad by now.

    – Wilson
    Apr 13 at 12:04











  • @Wilson Ouch. Ok, to start with, Z4 is an original Zuse (the man himself) design, while the Z22 was imagined by Theodor Fromme (call it design lead) with much help from Heinz Zemanek and Rudolf Bodo both designed the Mailüfterl and made the schematics for the Z22. The idea was to design the tube based Z22 in a way that it may be transistorized later. Which happend with the Z23. Which is a quite remarkable planing at that time. ... more to follow

    – Raffzahn
    Apr 13 at 13:52







  • 2





    @Wilson you asked for it: how-were Zuse Z22 Instructions Encoded? ... wasted another perfect good day - even included some German for you to test your knowledge :))

    – Raffzahn
    Apr 14 at 0:10













11












11








11








[...] had these opr instructions, which contained many bitfields which encoded something like "subinstructions"[...]




What you describe is basically a (V)LIW instruction format - at least that's what it might be called today. That's what computers started out with. Separate bits for each function to be applied to the value addressed.



The DEC is somewhat of a bad example here, as its accumulator instructions are a special kind, already a bastard between clean all over LIW and dedicated encoding. The LIW aspect is used only for this accumulator subset.



Zuse's machines, like the Z22, might make a better example with their ability to have each and every instruction carry multiple operations.




A later computer like the Z80 or ARM7 needs to fetch, decode and execute a separate instruction to perform each of these operations,




Yes - and no. For one, not all possible combinations could be used together, resulting in illegal instructions. In fact, depending on the machine's construction, most of these combinations were illegal. And that's why dedicated instructions took over. Let's assume, there are like 8 different operational units in the data path. Having one bit for each in the instruction word makes easy decoding, as each would just be wired up with the enable for a single function, resulting in a fast and simple machine structure.



Of these 256 combinations (of which one would be a nop), many would not make sense - think shifting left and shifting right, or adding and subtracting at the same time. By encoding only the 20 useful combinations into a 5 bit field, 3 bits (almost half) could be freed - at the cost of an additional decoding stage.



Now, back in the old times, when machines were word-orientated (e.g. 36 bits in one word), there was much space - even resulting in unused bits. No need to add a decoding stage. Even worse, doing so would slow down the execution. Well, only a bit, but it would.



The situation changed when machines became byte-orientated and variable length instruction formats were used. Here cramping down the 8 unit lines into a single encoded 5-bit field enabled it to squeeze into a byte while leaving room for more (like a register number), without the need to fetch two bytes. Heck, it even leaves 12x8 instruction points for other encodings/irregular instructions without needing more.




which might not be as space or time efficient.




That's partially true for the time efficiency, but not space - space-wise it's an extreme saving enabling more compact code. The inner workings are (can be) still (mostly) the same, but less visible. Instead of setting a shift and an add bit, there's now a Add-And-Shift instruction.



Then again, by now encoding it into a single byte instead of a full 36 bit word, the CPU can fetch the instructions at the same speed (byte bus vs. word bus) or even 4 times the speed (word sized bus) than before. So with memory always being the slowest part, tighter encoding does not only save space, but also speeds up execution - despite the additional decoding stage.




From what I can tell, [this] has fallen out of vogue, or are at least not nearly as common on modern instruction set architectures.




Not nearly as common on the surface is maybe the point here. For one, explicit VLIW instructions are still a thing (think Itanium), but more importantly, they are always an option for internal workings of modern CPUs. Where 'traditional' code gets first decoded into sub-operations, and these later get either combined to LIW instructions again, or scheduled in parallel over different function units.



In fact, the mentioned ARM makes another good point for it to vanish. ARM had traditionally the ability to have every instruction being executed conditionally (much like Zuse did first). Cool when thinking in sequential execution, but a gigantic hurdle when it comes to modern CPUs with the ability to reorder instructions according to available data and function units. It makes rescheduling not just a hard task, but almost impossible. Even worse, ARM featured DEC-like condition handling, where each and every load did change the flags.



Bottom line: Just because something isn't (always) visible to the user-side programmer, doesn't mean it isn't there.






share|improve this answer
















[...] had these opr instructions, which contained many bitfields which encoded something like "subinstructions"[...]




What you describe is basically a (V)LIW instruction format - at least that's what it might be called today. That's what computers started out with. Separate bits for each function to be applied to the value addressed.



The DEC is somewhat of a bad example here, as its accumulator instructions are a special kind, already a bastard between clean all over LIW and dedicated encoding. The LIW aspect is used only for this accumulator subset.



Zuse's machines, like the Z22, might make a better example with their ability to have each and every instruction carry multiple operations.




A later computer like the Z80 or ARM7 needs to fetch, decode and execute a separate instruction to perform each of these operations,




Yes - and no. For one, not all possible combinations could be used together, resulting in illegal instructions. In fact, depending on the machine's construction, most of these combinations were illegal. And that's why dedicated instructions took over. Let's assume, there are like 8 different operational units in the data path. Having one bit for each in the instruction word makes easy decoding, as each would just be wired up with the enable for a single function, resulting in a fast and simple machine structure.



Of these 256 combinations (of which one would be a nop), many would not make sense - think shifting left and shifting right, or adding and subtracting at the same time. By encoding only the 20 useful combinations into a 5 bit field, 3 bits (almost half) could be freed - at the cost of an additional decoding stage.



Now, back in the old times, when machines were word-orientated (e.g. 36 bits in one word), there was much space - even resulting in unused bits. No need to add a decoding stage. Even worse, doing so would slow down the execution. Well, only a bit, but it would.



The situation changed when machines became byte-orientated and variable length instruction formats were used. Here cramping down the 8 unit lines into a single encoded 5-bit field enabled it to squeeze into a byte while leaving room for more (like a register number), without the need to fetch two bytes. Heck, it even leaves 12x8 instruction points for other encodings/irregular instructions without needing more.




which might not be as space or time efficient.




That's partially true for the time efficiency, but not space - space-wise it's an extreme saving enabling more compact code. The inner workings are (can be) still (mostly) the same, but less visible. Instead of setting a shift and an add bit, there's now a Add-And-Shift instruction.



Then again, by now encoding it into a single byte instead of a full 36 bit word, the CPU can fetch the instructions at the same speed (byte bus vs. word bus) or even 4 times the speed (word sized bus) than before. So with memory always being the slowest part, tighter encoding does not only save space, but also speeds up execution - despite the additional decoding stage.




From what I can tell, [this] has fallen out of vogue, or are at least not nearly as common on modern instruction set architectures.




Not nearly as common on the surface is maybe the point here. For one, explicit VLIW instructions are still a thing (think Itanium), but more importantly, they are always an option for internal workings of modern CPUs. Where 'traditional' code gets first decoded into sub-operations, and these later get either combined to LIW instructions again, or scheduled in parallel over different function units.



In fact, the mentioned ARM makes another good point for it to vanish. ARM had traditionally the ability to have every instruction being executed conditionally (much like Zuse did first). Cool when thinking in sequential execution, but a gigantic hurdle when it comes to modern CPUs with the ability to reorder instructions according to available data and function units. It makes rescheduling not just a hard task, but almost impossible. Even worse, ARM featured DEC-like condition handling, where each and every load did change the flags.



Bottom line: Just because something isn't (always) visible to the user-side programmer, doesn't mean it isn't there.







share|improve this answer














share|improve this answer



share|improve this answer








edited yesterday

























answered Apr 12 at 14:45









RaffzahnRaffzahn

56.5k6137228




56.5k6137228







  • 1





    The Transmeta CPUs were a somewhat recent example of CPUs that used a (proprietary) VLIW instruction set internally, and another completely different one (namely x86) externally. In the Itanium, the VLIW bundles have explicit parallelism semantics (Intel calls this Explicit Parallel Instruction Computing (EPIC)), i.e. one VLIW bundle is 2 64 bit words with 3 41 bit instructions and a 5 bit "template" that tells the CPU what kinds of instructions the three instructions are and what the data dependencies are.

    – Jörg W Mittag
    Apr 13 at 7:06











  • Can you share (by way of a link to the reference or by adding to you answer) an example of how the Z22 instruction format allowed more operations to be specified in a single word? It must be completely unlike the earlier Z4 if that's the case.

    – Wilson
    Apr 13 at 12:03






  • 1





    I tried to google it but my German is really quite bad by now.

    – Wilson
    Apr 13 at 12:04











  • @Wilson Ouch. Ok, to start with, Z4 is an original Zuse (the man himself) design, while the Z22 was imagined by Theodor Fromme (call it design lead) with much help from Heinz Zemanek and Rudolf Bodo both designed the Mailüfterl and made the schematics for the Z22. The idea was to design the tube based Z22 in a way that it may be transistorized later. Which happend with the Z23. Which is a quite remarkable planing at that time. ... more to follow

    – Raffzahn
    Apr 13 at 13:52







  • 2





    @Wilson you asked for it: how-were Zuse Z22 Instructions Encoded? ... wasted another perfect good day - even included some German for you to test your knowledge :))

    – Raffzahn
    Apr 14 at 0:10












  • 1





    The Transmeta CPUs were a somewhat recent example of CPUs that used a (proprietary) VLIW instruction set internally, and another completely different one (namely x86) externally. In the Itanium, the VLIW bundles have explicit parallelism semantics (Intel calls this Explicit Parallel Instruction Computing (EPIC)), i.e. one VLIW bundle is 2 64 bit words with 3 41 bit instructions and a 5 bit "template" that tells the CPU what kinds of instructions the three instructions are and what the data dependencies are.

    – Jörg W Mittag
    Apr 13 at 7:06











  • Can you share (by way of a link to the reference or by adding to you answer) an example of how the Z22 instruction format allowed more operations to be specified in a single word? It must be completely unlike the earlier Z4 if that's the case.

    – Wilson
    Apr 13 at 12:03






  • 1





    I tried to google it but my German is really quite bad by now.

    – Wilson
    Apr 13 at 12:04











  • @Wilson Ouch. Ok, to start with, Z4 is an original Zuse (the man himself) design, while the Z22 was imagined by Theodor Fromme (call it design lead) with much help from Heinz Zemanek and Rudolf Bodo both designed the Mailüfterl and made the schematics for the Z22. The idea was to design the tube based Z22 in a way that it may be transistorized later. Which happend with the Z23. Which is a quite remarkable planing at that time. ... more to follow

    – Raffzahn
    Apr 13 at 13:52







  • 2





    @Wilson you asked for it: how-were Zuse Z22 Instructions Encoded? ... wasted another perfect good day - even included some German for you to test your knowledge :))

    – Raffzahn
    Apr 14 at 0:10







1




1





The Transmeta CPUs were a somewhat recent example of CPUs that used a (proprietary) VLIW instruction set internally, and another completely different one (namely x86) externally. In the Itanium, the VLIW bundles have explicit parallelism semantics (Intel calls this Explicit Parallel Instruction Computing (EPIC)), i.e. one VLIW bundle is 2 64 bit words with 3 41 bit instructions and a 5 bit "template" that tells the CPU what kinds of instructions the three instructions are and what the data dependencies are.

– Jörg W Mittag
Apr 13 at 7:06





The Transmeta CPUs were a somewhat recent example of CPUs that used a (proprietary) VLIW instruction set internally, and another completely different one (namely x86) externally. In the Itanium, the VLIW bundles have explicit parallelism semantics (Intel calls this Explicit Parallel Instruction Computing (EPIC)), i.e. one VLIW bundle is 2 64 bit words with 3 41 bit instructions and a 5 bit "template" that tells the CPU what kinds of instructions the three instructions are and what the data dependencies are.

– Jörg W Mittag
Apr 13 at 7:06













Can you share (by way of a link to the reference or by adding to you answer) an example of how the Z22 instruction format allowed more operations to be specified in a single word? It must be completely unlike the earlier Z4 if that's the case.

– Wilson
Apr 13 at 12:03





Can you share (by way of a link to the reference or by adding to you answer) an example of how the Z22 instruction format allowed more operations to be specified in a single word? It must be completely unlike the earlier Z4 if that's the case.

– Wilson
Apr 13 at 12:03




1




1





I tried to google it but my German is really quite bad by now.

– Wilson
Apr 13 at 12:04





I tried to google it but my German is really quite bad by now.

– Wilson
Apr 13 at 12:04













@Wilson Ouch. Ok, to start with, Z4 is an original Zuse (the man himself) design, while the Z22 was imagined by Theodor Fromme (call it design lead) with much help from Heinz Zemanek and Rudolf Bodo both designed the Mailüfterl and made the schematics for the Z22. The idea was to design the tube based Z22 in a way that it may be transistorized later. Which happend with the Z23. Which is a quite remarkable planing at that time. ... more to follow

– Raffzahn
Apr 13 at 13:52






@Wilson Ouch. Ok, to start with, Z4 is an original Zuse (the man himself) design, while the Z22 was imagined by Theodor Fromme (call it design lead) with much help from Heinz Zemanek and Rudolf Bodo both designed the Mailüfterl and made the schematics for the Z22. The idea was to design the tube based Z22 in a way that it may be transistorized later. Which happend with the Z23. Which is a quite remarkable planing at that time. ... more to follow

– Raffzahn
Apr 13 at 13:52





2




2





@Wilson you asked for it: how-were Zuse Z22 Instructions Encoded? ... wasted another perfect good day - even included some German for you to test your knowledge :))

– Raffzahn
Apr 14 at 0:10





@Wilson you asked for it: how-were Zuse Z22 Instructions Encoded? ... wasted another perfect good day - even included some German for you to test your knowledge :))

– Raffzahn
Apr 14 at 0:10











14














The PDP-7 was a one address machine. All instructions occupied 18 bits. The operations that manipulated the accumulator didn't reference memory, and therefore didn't need an address. But the address bits were in the instruction anyway, because all instructions were encoded in an 18 bit word. So why not use these unused bits to get more use out of the instruction bits?



Once you get to opcodes with a variable number of operand addresses, the need to economize in this way goes away.






share|improve this answer


















  • 2





    To add to this, the PDP-7 is from an era when it was common for the width of the address bus to be less than the width of the data bus. In this case, you could fit a full 13-bit address into an 18-bit instruction word, which meant that you could pack an entire instruction (including the operand address) into a single word. Compare this to a CPU like the 6502 with 8-bit words and 16-bit addresses: if you can't fit an address into an instruction word then naturally they must come in extra bytes that follow the opcode byte. (continued)

    – Ken Gober
    Apr 13 at 14:56






    The flip side of being able to fit the address into the instruction word was that you wasted a lot of bits for instructions that did not need an operand address or jump address. So the PDP-7 style sub-instructions were essentially a way to use unused bits in the instruction word to encode additional instructions, allowing many more instructions to be added without the cost of widening the word size, the only caveat being that the extra instructions had to be ones that didn't need to include an address.

    – Ken Gober
    Apr 13 at 15:00
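
As a back-of-the-envelope illustration of the trade-off described in the two comments above, the following Python sketch compares the raw code size of memory-reference instructions on an 18-bit word machine (address packed into the instruction word) against a 6502-style byte stream (opcode byte plus two address bytes for absolute addressing). The figures are only meant to make the contrast visible, not to model either machine precisely.

    # Rough comparison of code size for "load accumulator from address":
    # a word-addressed machine with the address packed into the instruction
    # versus a byte-addressed 6502-style encoding (opcode + 2 address bytes).

    WORD_MACHINE_BITS_PER_INSTR = 18   # one word holds opcode + 13-bit address
    M6502_BYTES_ABSOLUTE = 3           # e.g. LDA $1234 assembles to 3 bytes

    def program_size_bits(n_memory_ref_instructions):
        word_machine = n_memory_ref_instructions * WORD_MACHINE_BITS_PER_INSTR
        byte_machine = n_memory_ref_instructions * M6502_BYTES_ABSOLUTE * 8
        return word_machine, byte_machine

    for n in (1, 10, 100):
        w, b = program_size_bits(n)
        print(f"{n:>3} memory-reference instructions: {w} bits (18-bit words) vs {b} bits (6502 absolute)")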






    @KenGober, I think you and I are saying the same thing, in different words. Thanks for adding a little clarity.

    – Walter Mitty
    Apr 13 at 19:36















answered Apr 12 at 14:41









Walter Mitty







