Why did early computer designers eschew integers?
Several early computer designs regarded a 'word' not as an integer, with the bits weighted 2^0, 2^1, 2^2, ..., but as a fixed-point fraction, with the bits weighted 2^-1, 2^-2, 2^-3, ...
(For simplicity, I'm ignoring the sign bit in this question and talking only in terms of positive numbers.)
Some examples of this convention are EDVAC, EDSAC, and the IAS machine.
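To make the two readings concrete, here's a tiny sketch in modern Python (nothing machine-specific; the bit pattern and word length are made up purely for illustration):

```python
# One 8-bit pattern read under both conventions.
word = 0b11000000     # raw bits: the two most significant bits are set
N = 8

as_integer  = word              # weights 2^7 down to 2^0   -> 128 + 64 = 192
as_fraction = word / 2**N       # weights 2^-1 down to 2^-8 -> 0.5 + 0.25 = 0.75

print(as_integer, as_fraction)  # 192 0.75
```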
Why was this? To me, having dealt since the 1970s with machines that have "integers" at base, this seems a strange way to look at it.
Does it affect the machine operation in any way? Addition and subtraction are the same regardless of what you think the bits mean, but I suppose that for multiplication of two N-bit words giving an N-bit result, the choice of which N bits to keep depends on your interpretation. (Integer: you want the right-hand word; fixed-point fraction: you want the left-hand word.)
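And here is the multiplication case, again only as a sketch of the bookkeeping (plain Python, with N = 8 and the operands chosen arbitrarily):

```python
# An N-bit by N-bit multiply gives a 2N-bit product; which N-bit half you keep
# depends on how you read the words.
N = 8
a, b = 0b11000000, 0b00100000    # 192 and 32 as integers; 0.75 and 0.125 as fractions

product = a * b                  # full 2N-bit product: 6144 = 0x1800

low_word  = product & ((1 << N) - 1)   # integer view: keep the right-hand (low) word
high_word = product >> N               # fraction view: keep the left-hand (high) word

print(low_word)            # 0       -- 192 * 32 overflows 8 bits, so the low word alone loses it
print(high_word / 2**N)    # 0.09375 -- exactly 0.75 * 0.125
```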
Tagged: history
asked 2 hours ago by another-dave
Very early on, computers were likely not regarded as general-purpose machines. So if the main task a computer was designed for involved calculations with fractional numbers, prioritizing those over integers would make sense. It seems likely that computers designed for business work would be more tuned to integers, because money (in the USA) can be treated as pennies, and very little of it needs to be fractional.
– RichF
1 hour ago
1 Answer
I'd think that it was mostly down to the preferences of John von Neumann at the time. He was a strong advocate of fixed-point representations, and early computers were designed with long words to accommodate a large range of numbers that way. You certainly don't need 30-40 bits to cover the most useful integers, but you did need that many if you wanted plenty of digits before and after the decimal point.
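As a rough back-of-the-envelope check (a quick sketch rather than anything from a manual; the IAS machine's 40-bit word is used as the example length):

```python
# How much precision a 40-bit fixed-point fraction buys you.
import math

bits = 40
print(2 ** -bits)             # ~9.1e-13: smallest step a 40-bit fraction can resolve
print(bits * math.log10(2))   # ~12.04: roughly 12 significant decimal digits
```

Scale your problem so the values stay in range and you get about a dozen good digits, which helps explain why a long fixed-point word, with the scaling managed by the programmer, could look adequate.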
By the 1970s, though, the costs of integration were such that much smaller word sizes made sense. Minicomputers were commonly 16-bit architectures, and micros 8-bit or sometimes even 4-bit. At that point you needed all the integer range you could get, and floating point had largely replaced fixed point for when you needed fractional values.
Nowadays we'd think nothing of using 64-bit integers, of course, but it's a heck of a lot easier to integrate the number of logic gates required for that than it would have been back when they all had to be made out of fragile and expensive vacuum tubes.
answered 1 hour ago by Matthew Barber (new contributor)