Welcome to OGeek Q&A Community for programmer and developer-Open, Learning and Share

c - What's missing/sub-optimal in this memcpy implementation?

I've become interested in writing a memcpy() as an educational exercise. I won't write a whole treatise of what I did and didn't think about, but here's some guy's implementation:

#include <immintrin.h>  // AVX/SSE intrinsics used below
#include <stddef.h>
#include <stdint.h>

__forceinline   // Since Size is usually known,
                // most useless code will be optimized out
                // if the function is inlined.

void* myMemcpy(char* Dst, const char* Src, size_t Size)
{
        void* start = Dst;
        for ( ; Size >= sizeof(__m256i); Size -= sizeof(__m256i) )
        {
                __m256i ymm = _mm256_loadu_si256(((const __m256i* &)Src)++);
                _mm256_storeu_si256(((__m256i* &)Dst)++, ymm);
        }

#define CPY_1B *((uint8_t * &)Dst)++ = *((const uint8_t * &)Src)++
#define CPY_2B *((uint16_t* &)Dst)++ = *((const uint16_t* &)Src)++
#define CPY_4B *((uint32_t* &)Dst)++ = *((const uint32_t* &)Src)++
#if defined _M_X64 || defined _M_IA64 || defined __amd64
#define CPY_8B *((uint64_t* &)Dst)++ = *((const uint64_t* &)Src)++
#else
#define CPY_8B _mm_storel_epi64((__m128i *)Dst, _mm_loadu_si128((const __m128i *)Src)), ++(const uint64_t* &)Src, ++(uint64_t* &)Dst
#endif
#define CPY16B _mm_storeu_si128((__m128i *)Dst, _mm_loadu_si128((const __m128i *)Src)), ++(const __m128i* &)Src, ++(__m128i* &)Dst

    switch (Size) {
    case 0x00:                                                      break;
    case 0x01:      CPY_1B;                                         break;
    case 0x02:              CPY_2B;                                 break;
    case 0x03:      CPY_1B; CPY_2B;                                 break;
    case 0x04:                      CPY_4B;                         break;
    case 0x05:      CPY_1B;         CPY_4B;                         break;
    case 0x06:              CPY_2B; CPY_4B;                         break;
    case 0x07:      CPY_1B; CPY_2B; CPY_4B;                         break;
    case 0x08:                              CPY_8B;                 break;
    case 0x09:      CPY_1B;                 CPY_8B;                 break;
    case 0x0A:              CPY_2B;         CPY_8B;                 break;
    case 0x0B:      CPY_1B; CPY_2B;         CPY_8B;                 break;
    case 0x0C:                      CPY_4B; CPY_8B;                 break;
    case 0x0D:      CPY_1B;         CPY_4B; CPY_8B;                 break;
    case 0x0E:              CPY_2B; CPY_4B; CPY_8B;                 break;
    case 0x0F:      CPY_1B; CPY_2B; CPY_4B; CPY_8B;                 break;
    case 0x10:                                      CPY16B;         break;
    case 0x11:      CPY_1B;                         CPY16B;         break;
    case 0x12:              CPY_2B;                 CPY16B;         break;
    case 0x13:      CPY_1B; CPY_2B;                 CPY16B;         break;
    case 0x14:                      CPY_4B;         CPY16B;         break;
    case 0x15:      CPY_1B;         CPY_4B;         CPY16B;         break;
    case 0x16:              CPY_2B; CPY_4B;         CPY16B;         break;
    case 0x17:      CPY_1B; CPY_2B; CPY_4B;         CPY16B;         break;
    case 0x18:                              CPY_8B; CPY16B;         break;
    case 0x19:      CPY_1B;                 CPY_8B; CPY16B;         break;
    case 0x1A:              CPY_2B;         CPY_8B; CPY16B;         break;
    case 0x1B:      CPY_1B; CPY_2B;         CPY_8B; CPY16B;         break;
    case 0x1C:                      CPY_4B; CPY_8B; CPY16B;         break;
    case 0x1D:      CPY_1B;         CPY_4B; CPY_8B; CPY16B;         break;
    case 0x1E:              CPY_2B; CPY_4B; CPY_8B; CPY16B;         break;
    case 0x1F:      CPY_1B; CPY_2B; CPY_4B; CPY_8B; CPY16B;         break;
    }
#undef CPY_1B
#undef CPY_2B
#undef CPY_4B
#undef CPY_8B
#undef CPY16B
        return start;
}

The original comment was not in English; translated, it says that since Size is usually known at the call site, inlining lets the compiler optimize out most of the unneeded code.

I would like to improve on this implementation if possible - but maybe there isn't much to improve. I see it uses SSE/AVX for the larger chunks of memory, and then, instead of a loop over the last < 32 bytes, does the equivalent of manual unrolling, with some tweaking. So, here are my questions:

  • Why unroll the loop for the last several bytes, but not partially unroll the first (and now single) loop?
  • What about alignment issues? Aren't they important? Should I handle the first several bytes up to some alignment quantum differently, then perform the 256-bit ops on aligned sequences of bytes? And if so, how do I determine the appropriate alignment quantum?
  • What's the most important missing feature in this implementation (if any)?

Features/Principles mentioned in the answers so far

  • You should __restrict__ your parameters. (@chux)
  • The memory bandwidth is a limiting factor; measure your implementation against it. (@Zboson)
  • For small arrays, you can expect to approach the memory bandwidth; for larger arrays - not as much. (@Zboson)
  • Multiple threads may be (or outright are) necessary to saturate the memory bandwidth. (@Zboson)
  • It is probably wise to optimize differently for large and small copy sizes. (@Zboson)
  • (Alignment is important? Not explicitly addressed!)
  • The compiler should be made more explicitly aware of "obvious facts" it can use for optimization (such as the fact that Size < 32 after the first loop). (@chux)
  • There are arguments for unrolling your SSE/AVX calls (@BenJackson, here), and arguments against doing so (@PaulR)
  • Non-temporal transfers (with which you tell the CPU you don't need it to cache the target location) should be useful for copying larger buffers. (@Zboson)
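Two of the points above - restrict-qualified parameters, and making the "obvious fact" Size < 32 explicit after the wide loop - can be sketched in C. This is a hypothetical sketch (`my_memcpy_tail` is my name for it), and `__builtin_unreachable()` is a GCC/Clang extension:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: restrict-qualified parameters, plus an explicit
   hint that fewer than 32 bytes remain once the wide loop is done.
   __builtin_unreachable() is a GCC/Clang extension. */
void *my_memcpy_tail(void *restrict dst, const void *restrict src, size_t size)
{
    uint8_t *d = dst;
    const uint8_t *s = src;

    /* Stand-in for the 256-bit vector loop: consume 32-byte chunks. */
    while (size >= 32) {
        memcpy(d, s, 32);
        d += 32; s += 32; size -= 32;
    }

    /* Make the fact Size < 32 explicit to the optimizer,
       so the tail handling can be specialized. */
    if (size >= 32)
        __builtin_unreachable();

    while (size--)
        *d++ = *s++;
    return dst;
}
```

With the hint in place, a compiler is free to drop any tail-handling branches that could only be reached with size >= 32.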

1 Reply


I have been studying measuring memory bandwidth for Intel processors with various operations and one of them is memcpy. I have done this on Core2, Ivy Bridge, and Haswell. I did most of my tests using C/C++ with intrinsics (see the code below - but I'm currently rewriting my tests in assembly).

To write your own efficient memcpy function it's important to know what the absolute best bandwidth possible is. This bandwidth is a function of the size of the arrays which will be copied and therefore an efficient memcpy function needs to optimize differently for small and big (and maybe in between). To keep things simple I have optimized for small arrays of 8192 bytes and large arrays of 1 GB.
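As a sketch of that small/large split, a top-level dispatcher might look like the following. The thresholds are illustrative guesses (not measured values), and the three stand-in helpers would be replaced by the tuned implementations discussed in this answer:

```c
#include <stddef.h>
#include <string.h>

/* Illustrative thresholds only (my guesses, not measured values):
   the right cut-offs depend on the cache sizes of the target CPU;
   half the L3 size is the usual starting point for the streaming cut. */
#define SMALL_LIMIT  ((size_t)8192)        /* tuned-for-small path */
#define STREAM_LIMIT ((size_t)4 << 20)     /* ~half of an 8 MiB L3 */

/* Stand-ins: in a real implementation these would be the tuned
   small-copy, unrolled, and non-temporal versions from this answer. */
static void *copy_small(void *d, const void *s, size_t n)     { return memcpy(d, s, n); }
static void *copy_unrolled(void *d, const void *s, size_t n)  { return memcpy(d, s, n); }
static void *copy_streaming(void *d, const void *s, size_t n) { return memcpy(d, s, n); }

void *my_memcpy(void *d, const void *s, size_t n)
{
    if (n <= SMALL_LIMIT)
        return copy_small(d, s, n);      /* latency-sensitive path */
    if (n < STREAM_LIMIT)
        return copy_unrolled(d, s, n);   /* cache-resident path    */
    return copy_streaming(d, s, n);      /* bypass-the-cache path  */
}
```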

For small arrays the maximum read and write bandwidth for each core is:

Core2-Ivy Bridge             32 bytes/cycle
Haswell                      64 bytes/cycle

This is the benchmark you should aim for with small arrays. For my tests I assume the arrays are aligned to 64 bytes and that the array size is a multiple of 8*sizeof(float)*unroll_factor. Here are my current memcpy results for a size of 8192 bytes (Ubuntu 14.04, GCC 4.9, EGLIBC 2.19):

                             GB/s     efficiency
    Core2 (P9600 @ 2.66 GHz)
        builtin:              35.2    41.3%
        eglibc:               39.2    46.0%
        asmlib:               76.0    89.3%
        copy_unroll1:         39.1    46.0%
        copy_unroll8:         73.6    86.5%
    Ivy Bridge (E5-1620 @ 3.6 GHz)
        builtin:             102.2    88.7%
        eglibc:              107.0    92.9%
        asmlib:              107.6    93.4%
        copy_unroll1:        106.9    92.8%
        copy_unroll8:        111.3    96.6%
    Haswell (i5-4250U @ 1.3 GHz)
        builtin:              68.4    82.2%
        eglibc:               39.7    47.7%
        asmlib:               73.2    87.6%
        copy_unroll1:         39.6    47.6%
        copy_unroll8:         81.9    98.4%

The asmlib is Agner Fog's asmlib. The copy_unroll1 and copy_unroll8 functions are defined below.

From this table we can see that the GCC builtin memcpy does not work well on Core2 and that memcpy in EGLIBC does not work well on Core2 or Haswell. I did check out a head version of GLIBC recently and the performance was much better on Haswell. In all cases unrolling gets the best result.

void copy_unroll1(const float *x, float *y, const int n) {
    for(int i=0; i<n/JUMP; i++) {
        VECNF().LOAD(&x[JUMP*(i+0)]).STORE(&y[JUMP*(i+0)]);
    }
}

void copy_unroll8(const float *x, float *y, const int n) {
    for(int i=0; i<n/JUMP; i+=8) {
        VECNF().LOAD(&x[JUMP*(i+0)]).STORE(&y[JUMP*(i+0)]);
        VECNF().LOAD(&x[JUMP*(i+1)]).STORE(&y[JUMP*(i+1)]);
        VECNF().LOAD(&x[JUMP*(i+2)]).STORE(&y[JUMP*(i+2)]);
        VECNF().LOAD(&x[JUMP*(i+3)]).STORE(&y[JUMP*(i+3)]);
        VECNF().LOAD(&x[JUMP*(i+4)]).STORE(&y[JUMP*(i+4)]);
        VECNF().LOAD(&x[JUMP*(i+5)]).STORE(&y[JUMP*(i+5)]);
        VECNF().LOAD(&x[JUMP*(i+6)]).STORE(&y[JUMP*(i+6)]);
        VECNF().LOAD(&x[JUMP*(i+7)]).STORE(&y[JUMP*(i+7)]);
    }
}

Where VECNF().LOAD is _mm_load_ps() for SSE or _mm256_load_ps() for AVX, VECNF().STORE is _mm_store_ps() for SSE or _mm256_store_ps() for AVX, and JUMP is 4 for SSE or 8 for AVX.
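For concreteness, here is roughly what copy_unroll1 expands to in the SSE case (JUMP = 4) with the macros replaced by raw intrinsics; copy_unroll1_sse is my name for this expansion, and both pointers are assumed 16-byte aligned, as in the benchmark setup:

```c
#include <xmmintrin.h>   /* SSE intrinsics */

/* Rough expansion of copy_unroll1 for the SSE case (JUMP = 4);
   copy_unroll1_sse is my name for it. Both pointers are assumed
   16-byte aligned, as in the benchmark setup. */
void copy_unroll1_sse(const float *x, float *y, const int n)
{
    for (int i = 0; i < n / 4; i++) {
        __m128 v = _mm_load_ps(&x[4 * i]);  /* aligned 128-bit load  */
        _mm_store_ps(&y[4 * i], v);         /* aligned 128-bit store */
    }
}
```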

For the large size the best result is obtained by using non-temporal store instructions and by using multiple threads. Contrary to what many people may believe, a single thread does NOT usually saturate the memory bandwidth.

void copy_stream(const float *x, float *y, const int n) {
    #pragma omp parallel for        
    for(int i=0; i<n/JUMP; i++) {
        VECNF v = VECNF().load_a(&x[JUMP*i]);
        stream(&y[JUMP*i], v);
    }
}

Where stream is _mm_stream_ps() for SSE or _mm256_stream_ps() for AVX.
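For reference, the SSE flavour of the streaming loop, single-threaded for brevity (copy_stream_sse is my name for it), looks roughly like this; the sfence at the end conventionally follows non-temporal stores so they are ordered before subsequent ordinary accesses:

```c
#include <xmmintrin.h>   /* SSE intrinsics */

/* SSE flavour of the streaming loop, single-threaded for brevity.
   y must be 16-byte aligned for _mm_stream_ps. */
void copy_stream_sse(const float *x, float *y, const int n)
{
    for (int i = 0; i < n / 4; i++) {
        __m128 v = _mm_load_ps(&x[4 * i]);
        _mm_stream_ps(&y[4 * i], v);   /* non-temporal (cache-bypassing) store */
    }
    _mm_sfence();   /* order the streaming stores before later accesses */
}
```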

Here are the memcpy results on my E5-1620@3.6 GHz with four threads for 1 GB with a maximum main memory bandwidth of 51.2 GB/s.

                         GB/s     efficiency
    eglibc:              23.6     46%
    asmlib:              36.7     72%
    copy_stream:         36.7     72%

Once again EGLIBC performs poorly. This is because it does not use non-temporal stores.

I modified the eglibc and asmlib memcpy functions to run in parallel like this:

void COPY(const float * __restrict x, float * __restrict y, const int n) {
    #pragma omp parallel
    {
        size_t my_start, my_size;
        int id = omp_get_thread_num();
        int num = omp_get_num_threads();
        my_start = (id*n)/num;
        my_size = ((id+1)*n)/num - my_start;
        memcpy(y+my_start, x+my_start, sizeof(float)*my_size);
    }
}

A general memcpy function needs to account for arrays which are not aligned to 64 bytes (or even to 32 or to 16 bytes) and where the size is not a multiple of 32 bytes or of the unroll factor. Additionally, a decision has to be made as to when to use non-temporal stores. The general rule of thumb is to use non-temporal stores only for sizes larger than half the largest cache level (usually L3). But these are "second order" details which I think should be dealt with after optimizing for the ideal cases of large and small. There's not much point in worrying about correcting for misalignment or non-ideal size multiples if the ideal case performs poorly as well.

Update

Based on comments by Stephen Canon I have learned that on Ivy Bridge and Haswell it's more efficient to use rep movsb than non-temporal stores (the movnt family of instructions). Intel calls this enhanced rep movsb (ERMSB). It is described in the Intel optimization manual in the section 3.7.6 Enhanced REP MOVSB and STOSB operation (ERMSB).
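For what it's worth, an ERMSB-style copy is tiny to express with GCC/Clang inline assembly on x86-64; this is a sketch (memcpy_ermsb is a hypothetical name), not portable code:

```c
#include <stddef.h>

/* Sketch of an ERMSB-style copy (x86-64, GCC/Clang inline asm only).
   On CPUs reporting ERMSB, rep movsb can beat non-temporal stores
   for large copies, per the Intel optimization manual. */
static void *memcpy_ermsb(void *dst, const void *src, size_t n)
{
    void *ret = dst;
    /* rep movsb copies RCX bytes from [RSI] to [RDI], advancing both. */
    __asm__ volatile("rep movsb"
                     : "+D"(dst), "+S"(src), "+c"(n)
                     : /* no pure inputs */
                     : "memory");
    return ret;
}
```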

Additionally, in Agner Fog's Optimizing Subroutines in Assembly manual in section 17.9 Moving blocks of data (All processors) he writes:

"There are several ways of moving large blocks of data. The most common methods are:

  1. REP MOVS instruction.
  2. If data are aligned: Read and write in a loop with the largest available register size.
  3. If size is constant: inline move instructions.
  4. If data are misaligned: First move as many bytes as required to make the destination aligned. Then read unaligned and write aligned in a loop with the largest available register size.
  5. If data are misaligned: Read aligned, shift to compensate for misalignment and write aligned.
  6. If the data size is too big for caching, use non-temporal writes to bypass the cache. Shift to compensate for misalignment, if necessary."

A general memcpy should consider each of these points. Additionally, with Ivy Bridge and Haswell it seems that point 1 is better than point 6 for large arrays. Different techniques are necessary for Intel and AMD and for each iteration of technology. I think it's clear that writing your own general efficient memcpy function can be quite complicated. But in the special cases I have looked at, I have already managed to do better than the GCC builtin memcpy or the one in EGLIBC, so the assumption that you can't do better than the standard libraries is incorrect.
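Point 4, for instance, can be sketched as follows (memcpy_align_dst is a hypothetical name): a short prologue copies just enough bytes to bring the destination to a 32-byte boundary, after which the main loop writes aligned; the 32-byte memcpy below stands in for an unaligned vector load plus an aligned vector store:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Sketch of Agner's point 4: align the destination first, then
   read unaligned and write aligned in the main loop. */
void *memcpy_align_dst(void *dst, const void *src, size_t n)
{
    uint8_t *d = dst;
    const uint8_t *s = src;

    /* Bytes needed to reach the next 32-byte boundary of the destination. */
    size_t head = (size_t)(-(uintptr_t)d) & 31u;
    if (head > n)
        head = n;
    for (size_t i = 0; i < head; i++)
        d[i] = s[i];
    d += head; s += head; n -= head;

    /* Main loop: destination writes are now 32-byte aligned; in the
       real version this is an unaligned load + aligned vector store. */
    while (n >= 32) {
        memcpy(d, s, 32);
        d += 32; s += 32; n -= 32;
    }
    while (n--)            /* remaining tail, < 32 bytes */
        *d++ = *s++;
    return dst;
}
```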

