[OpenAFS] afs memcache tuning... lockups in afs_cv_wait

Mike Polek mike@pictage.com
Thu, 14 Feb 2008 17:49:31 -0800


Mike Garrison wrote:
> 
> On Feb 13, 2008, at 9:17 PM, Mike Polek wrote:
>> Mike Garrison wrote:
> I'd suggest tweaking -rxpck to be higher than the default; it may
> actually help with the issue you're running into. We use 2000 for it.
> It slips my mind at this point why we picked that number.

> The only thing that really sticks out to me is the low number of rx
> packets, so I'd try increasing -rxpck and seeing if that helps.
> Unfortunately, I don't have much experience with memcache, but I have
> a strong feeling that you shouldn't be seeing such a high number of
> alloc failures for sending.
> 
> -- 
> Mike Garrison
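
(For anyone following along: -rxpck and -memcache are afsd startup
options, so they go wherever your client passes its afsd arguments.
The line below is only a sketch; the variable name and the other
numbers are placeholders, not necessarily what we actually run.)

    # hypothetical client startup config; exact file and values vary by distribution
    AFSD_OPTIONS="-memcache -chunksize 18 -rxpck 2000 -daemons 6"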


Well... the alloc failures went away, and the noBuffers count is at 0.
But at -rxpck 2000 the behavior didn't change at first; things were
still piling up in afs_cv_wait and afs_osi_SleepSig:

   502 afs_cv_wait
   510 afs_osi_SleepSig
   512 afs_osi_SleepSig
   514 afs_osi_SleepSig
   516 afs_osi_SleepSig
   518 afs_osi_SleepSig
   520 afs_osi_SleepSig
  2708 afs_cv_wait
  2709 afs_cv_wait
  [...]
  2904 afs_cv_wait
  2905 afs_cv_wait
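
(A listing like the one above -- pid plus kernel wait channel -- can be
pulled with something along these lines; just a sketch, not necessarily
the exact command used here:)

    ps -eo pid,wchan:25,comm | grep afs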


So... I cranked -rxpck up to 4000, and the behavior did change.
I could see it allocating thousands of packets by watching the rxstats,
so it definitely appears that was the resource I was looking for.
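
(One way to watch those rx stats on a client is rxdebug against the
cache manager; a sketch only -- substitute your own hostname, and 7001
is the usual cache manager port:)

    # query the cache manager's rx packet statistics
    rxdebug <client-host> 7001 -rxstats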

After pushing -rxpck up to 10000, the bottleneck has moved. I'm not 100%
certain whether it's now the back-end AFS servers, the NIC saturating,
or the CPUs hitting their limit. I'm getting over 500 Mb/s of bandwidth,
which is plenty... far more than I thought possible. This blows the lid
off my performance concerns, and the important thing is that things
degrade gracefully, as expected.
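
(To tell the NIC from the CPUs as the new limit, watching both under
load with the standard sysstat tools is probably the simplest check --
again, just a sketch:)

    sar -n DEV 1       # per-interface throughput
    mpstat -P ALL 1    # per-CPU utilization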

Thanks for all the help! Once I get things fully tuned up,
I'll post my findings, in case anyone else can use the info.

Mike