Is CLMUL constant time?
Does the carry-less multiplication instruction run in constant time? Said differently, is the time it takes to execute independent of its arguments?
assembly x86 micro-optimization galois-field
2
I don't think the execution time varies with the values of the operands on any CPU that implements the PCLMULQDQ instruction, but the actual latency of the instruction can vary widely depending on a number of factors, including what operands are used (i.e. whether a register operand depends on a previous instruction, or where a memory operand sits in the cache hierarchy), just like with any other instruction.
– Ross Ridge
Nov 20 at 21:24
2
Given that CLMUL was made for cryptography, I would personally be very surprised if it was not constant time.
– fuz
Nov 21 at 3:13
edited Nov 20 at 21:30
Peter Cordes
118k16180307
asked Nov 20 at 21:07
yberman
1387
1 Answer
According to https://agner.org/optimize/, PCLMULQDQ has fixed latency on any given CPU. (http://www.uops.info/table.html doesn't list a latency for it, but has good stuff for most instructions.)
There's no reason to expect it to be data-dependent: typically only division / sqrt has data-dependent performance in modern high-performance CPUs. Regular multiply doesn't; instead designers make it fast for the general case, with lots of hardware parallelism inside the execution unit.
Out-of-order instruction scheduling is a lot easier when uops have fixed latency, and so is building fully-pipelined execution units for them. The scheduler (reservation station) can avoid having 2 operations finish at the same time on the same port and create a write-back conflict, or worse, finish in the same execution unit and cause stalls within it. This is why fixed latency is very common.
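For concreteness, here's a minimal software model of the 64x64 => 128 carry-less multiply that one PCLMULQDQ invocation performs (the `clmul64` name and the Python rendering are mine; this sketches the semantics, not the hardware). Note that it does a fixed 64 iterations of shift/XOR work regardless of the operand values, which is exactly the kind of structure a constant-latency XOR tree computes in parallel:

```python
def clmul64(a: int, b: int) -> int:
    """Carry-less (GF(2)[x]) multiply of two 64-bit values, 128-bit result.

    The loop always runs exactly 64 iterations and does the same work
    each time, whatever the bit patterns in a or b -- the fixed-work
    structure a hardware XOR tree would have.
    """
    result = 0
    for i in range(64):
        # Branch-free select: multiply by the 0/1 bit instead of branching.
        result ^= (a << i) * ((b >> i) & 1)
    return result

# (x + 1) * (x + 1) = x^2 + 1 over GF(2): 0b11 * 0b11 = 0b101
print(clmul64(0b11, 0b11))  # 5
```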
(A microcoded multi-uop pclmulqdq with branching could potentially have variable latency, or more plausibly latency that depends on the immediate operand: maybe an extra shuffle uop or two when the immediate is non-zero. So the fixed-latency argument for a single uop doesn't necessarily apply to a microcoded instruction, but pclmulqdq is still simple enough that you wouldn't expect it to actually branch internally the way rep movsb has to.)
As @fuz points out, PCLMUL was made for crypto, so data-dependent performance would make it vulnerable to timing attacks. That's a very strong reason to make PCLMUL constant-time. (Or at worst, dependent on the immediate, but not the register/memory sources: e.g. an immediate other than 0 could cost extra shift uops to get the high halves of the sources fed to a 64x64 => 128 carry-less multiply unit.)
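To illustrate the immediate's role, here's a hypothetical model of the documented operand selection (mirroring the `_mm_clmulepi64_si128` intrinsic's imm8 encoding: bit 0 picks the low or high qword of the first source, bit 4 picks for the second). The function names are illustrative:

```python
MASK64 = (1 << 64) - 1

def clmul64(a: int, b: int) -> int:
    """Fixed-work 64x64 -> 128 carry-less multiply."""
    r = 0
    for i in range(64):
        r ^= (a << i) * ((b >> i) & 1)
    return r

def pclmulqdq_model(src1: int, src2: int, imm8: int) -> int:
    """Model of PCLMULQDQ's operand selection on 128-bit integers.

    imm8 bit 0 selects the low (0) or high (1) qword of src1;
    imm8 bit 4 selects the qword of src2. Only this selection depends
    on the immediate -- the multiply itself sees two 64-bit inputs
    either way, so at most the decode/uop count could plausibly vary
    with the immediate, not the timing of the data path.
    """
    a = (src1 >> 64) if (imm8 & 0x01) else (src1 & MASK64)
    b = (src2 >> 64) if (imm8 & 0x10) else (src2 & MASK64)
    return clmul64(a, b)
```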
Numbers from Agner Fog's tables
On Intel since Broadwell, pclmulqdq is 1 uop. On Skylake, it has 7-cycle latency and 1 per clock throughput. (So you need to keep 7 independent PCLMUL operations in flight to saturate the execution unit on port 5.) Broadwell has 5-cycle latency. With a memory source operand, it's 1 extra uop.
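In sketch form (a Python model of the dataflow only; the function names are mine, and a real implementation would use `_mm_clmulepi64_si128` in C), the multiple-accumulator trick that keeps 7 operations in flight relies on XOR being associative and commutative:

```python
def clmul64(a: int, b: int) -> int:
    """Fixed-work 64x64 -> 128 carry-less multiply."""
    r = 0
    for i in range(64):
        r ^= (a << i) * ((b >> i) & 1)
    return r

def xor_dot_serial(pairs):
    """XOR of carry-less products with one accumulator: each XOR into
    `acc` depends on the previous one, so on hardware the chain's
    throughput is limited by latency, not by the multiply unit."""
    acc = 0
    for a, b in pairs:
        acc ^= clmul64(a, b)
    return acc

def xor_dot_unrolled(pairs, chains=7):
    """Same result with `chains` independent accumulators (valid
    because XOR is associative and commutative), letting an OoO CPU
    keep that many multiplies in flight. 7 chains covers Skylake's
    7-cycle latency at one multiply per clock."""
    acc = [0] * chains
    for i, (a, b) in enumerate(pairs):
        acc[i % chains] ^= clmul64(a, b)
    total = 0
    for v in acc:
        total ^= v
    return total
```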
On Haswell, it's 3 uops (2p0 p5) with 7 cycle latency and one per 2 clock throughput.
On Sandybridge/IvyBridge it's 18 uops, 14c latency, one per 8 clock throughput.
On Westmere (2nd-gen Nehalem) it's 12c latency, one per 8c throughput. (Unknown number of uops; neither Agner Fog nor uops.info has it, but we can safely assume it's microcoded.) This was the first generation to support the instruction, one of the very few differences between Nehalem and Westmere.
On Ryzen it's 4 uops, 4c latency, one per 2 clock throughput. http://instlatx64.atw.hu/ shows it at 4.5-cycle latency. I'm not sure what accounts for the difference between their testing and Agner's.
On Piledriver it's 5 uops, 12c latency, one per 7 clock throughput.
On Jaguar it's 1 uop, 3c latency, one per 1 clock throughput!
On Silvermont it's 8 uops, 10c latency/throughput. Goldmont = 3 uops, 6c lat / 3c tput.
See also "What considerations go into predicting latency for operations on modern superscalar processors and how can I calculate them by hand?" and Agner Fog's optimization guide to understand how latency and throughput (and front-end bottlenecks) matter for performance on out-of-order CPUs, depending on the surrounding code.
Thanks! This is a wonderful answer.
– yberman
Nov 20 at 21:52
1
instlatx64 shows fractional latencies on Sandybridge, IvyBridge, and Ryzen. All the others match though. Also I was not able to find pclmuludq in http://www.uops.info/table.html.
– Hadi Brais
Nov 20 at 22:14
1
According to this Intel document, the instruction is new in Westmere. Wikipedia also states the same.
– Hadi Brais
Nov 20 at 22:52
1
@HadiBrais: the only plausible mechanism would be decoding to different uops for non-zero immediate (extra shuffles). I added a bit to the answer about why fixed uop latencies are a huge deal. A microcoded instruction as a whole can plausibly be variable by branching (unlikely here) or decoding differently depending on immediates, though.
– Peter Cordes
Nov 20 at 23:05
2
OK, I figured out why PCLMULQDQ is missing from uops.info. My benchmark script uses the cpuid command under Linux to find out which instructions are supported, and apparently, cpuid lists the instruction as PCLMULDQ (i.e., without the first "Q"). I will add data for PCLMULQDQ to uops.info with the next update of the site.
– Andreas Abel
Nov 21 at 0:04
edited Nov 21 at 7:51
answered Nov 20 at 21:44
Peter Cordes
118k16180307