Taken literally, an object pool is simply a pool that stores many reusable objects. In Go it is provided by the Pool type in the sync package. An object pool improves memory reuse, reduces the number of allocations, and can even lower CPU consumption, which makes it one of the indispensable optimization techniques for high-concurrency projects.
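Before digging into the source, here is a minimal usage sketch (bufPool and the 4 KB initial capacity are arbitrary names/values chosen for illustration): Get takes an object out of the pool, falling back to New when the pool is empty, and Put hands it back for reuse.

package main

import (
    "fmt"
    "sync"
)

// bufPool hands out reusable byte buffers instead of allocating a fresh one per use.
var bufPool = sync.Pool{
    New: func() interface{} {
        // Called by Get only when the pool has nothing cached.
        return make([]byte, 0, 4096)
    },
}

func main() {
    buf := bufPool.Get().([]byte) // borrow a buffer (or create one via New)
    buf = append(buf, "hello pool"...)
    fmt.Println(string(buf))
    bufPool.Put(buf[:0]) // reset the length and return the buffer for reuse
}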
The author's explanation in the Go source reads as follows:
// A Pool is a set of temporary objects that may be individually saved and
// retrieved.
//
// Any item stored in the Pool may be removed automatically at any time without
// notification. If the Pool holds the only reference when this happens, the
// item might be deallocated.
//
// A Pool is safe for use by multiple goroutines simultaneously.
//
// Pool's purpose is to cache allocated but unused items for later reuse,
// relieving pressure on the garbage collector. That is, it makes it easy to
// build efficient, thread-safe free lists. However, it is not suitable for all
// free lists.
In the Go source, the author also gives the following advice on how sync.Pool should be used:
// An appropriate use of a Pool is to manage a group of temporary items
// silently shared among and potentially reused by concurrent independent
// clients of a package. Pool provides a way to amortize allocation overhead
// across many clients.
//
// An example of good use of a Pool is in the fmt package, which maintains a
// dynamically-sized store of temporary output buffers. The store scales under
// load (when many goroutines are actively printing) and shrinks when
// quiescent.
//
// On the other hand, a free list maintained as part of a short-lived object is
// not a suitable use for a Pool, since the overhead does not amortize well in
// that scenario. It is more efficient to have such objects implement their own
// free list.
//
// A Pool must not be copied after first use.
fmt.Sprintf() is exactly such a case: the fmt package caches its pp printer structs in the ppFree pool.
var ppFree = sync.Pool{
    New: func() interface{} { return new(pp) },
}

// newPrinter allocates a new pp struct or grabs a cached one.
func newPrinter() *pp {
    p := ppFree.Get().(*pp)
    p.panicking = false
    p.erroring = false
    p.wrapErrs = false
    p.fmt.init(&p.buf)
    return p
}

// free saves used pp structs in ppFree; avoids an allocation per invocation.
func (p *pp) free() {
    // Proper usage of a sync.Pool requires each entry to have approximately
    // the same memory cost. To obtain this property when the stored type
    // contains a variably-sized buffer, we add a hard limit on the maximum buffer
    // to place back in the pool.
    //
    // See https://golang.org/issue/23199
    if cap(p.buf) > 64<<10 {
        return
    }
    p.buf = p.buf[:0]
    p.arg = nil
    p.value = reflect.Value{}
    p.wrappedErr = nil
    ppFree.Put(p)
}

// Sprintf formats according to a format specifier and returns the resulting string.
func Sprintf(format string, a ...interface{}) string {
    p := newPrinter()
    p.doPrintf(format, a)
    s := string(p.buf)
    p.free()
    return s
}
A typical usage pattern looks like this: a producer borrows buffers from the pool and sends them over a channel, and a consumer returns them after use.

var bytePool = sync.Pool{
    New: func() interface{} {
        buf := make([]byte, 0, 4096)
        return buf
    },
}

var ch = make(chan []byte, 1000)

func main() {
    go func() {
        for msg := range ch {
            fmt.Println("recv msg")
            msg = msg[:0]
            bytePool.Put(msg)
        }
    }()
    // ...
    // topic, position, info and data are assumed to be []byte fields defined elsewhere.
    for i := 0; i <= 100000; i++ {
        lineBuf := bytePool.Get().([]byte)
        lineBuf = append(lineBuf, topic...)
        lineBuf = append(lineBuf, position...)
        lineBuf = append(lineBuf, info...)
        lineBuf = append(lineBuf, data...)
        ch <- lineBuf
    }
    time.Sleep(5 * time.Minute)
}
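One detail worth borrowing from the fmt code above for this example: before returning a buffer, drop it rather than pooling it once it has grown unusually large, so a single huge message does not stay pinned in the pool (see golang.org/issue/23199). Below is a sketch of such a helper, building on the bytePool defined above; putBuf and the 64 KB cap are illustrative choices that mirror (*pp).free.

// putBuf returns buf to bytePool unless it has grown too large to be worth keeping.
func putBuf(buf []byte) {
    if cap(buf) > 64<<10 {
        return // let the GC reclaim oversized buffers instead of caching them
    }
    bytePool.Put(buf[:0]) // reset the length so the next Get starts from an empty buffer
}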
Now let's look at how sync.Pool itself is implemented. In Go 1.12 the core data structures are:

type Pool struct {
    noCopy noCopy

    local     unsafe.Pointer // local fixed-size per-P pool, actual type is [P]poolLocal
    localSize uintptr        // size of the local array

    // New optionally specifies a function to generate
    // a value when Get would otherwise return nil.
    // It may not be changed concurrently with calls to Get.
    New func() interface{}
}

type poolLocal struct {
    poolLocalInternal

    // Prevents false sharing on widespread platforms with
    // 128 mod (cache line size) = 0 .
    pad [128 - unsafe.Sizeof(poolLocalInternal{})%128]byte
}

// Local per-P Pool appendix.
type poolLocalInternal struct {
    private interface{}   // Can be used only by the respective P.
    shared  []interface{} // Can be used by any P.
    Mutex                 // Protects shared.
}
The annotated code is as follows:
func (p *Pool) pinSlow() *poolLocal {
    // Retry under the mutex.
    // Can not lock the mutex while pinned.
    runtime_procUnpin()
    allPoolsMu.Lock()
    defer allPoolsMu.Unlock()
    // Pin again and get the current P (it may have changed).
    pid := runtime_procPin()
    // poolCleanup won't be called while we are pinned.
    s := p.localSize
    l := p.local
    // If a poolLocal already exists for this P, return it directly.
    if uintptr(pid) < s {
        return indexLocal(l, pid)
    }
    // If this Pool has no local array yet, register it in the global pool set.
    if p.local == nil {
        allPools = append(allPools, p)
    }
    // If GOMAXPROCS changes between GCs, we re-allocate the array and lose the old one.
    // That is, if the number of Ps changed, Go rebuilds a local array matching the new
    // P count and drops the old one.
    // Get the current number of Ps.
    size := runtime.GOMAXPROCS(0)
    // Allocate one poolLocal per P and return the one for the current pid.
    local := make([]poolLocal, size)
    atomic.StorePointer(&p.local, unsafe.Pointer(&local[0])) // store-release
    atomic.StoreUintptr(&p.localSize, uintptr(size))         // store-release
    return &local[pid]
}
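pinSlow (and pin/getSlow below) locate a P's slot with indexLocal. In recent Go versions this is essentially pointer arithmetic over the per-P array; the sketch below paraphrases it and is not necessarily byte-for-byte identical to any particular release.

// indexLocal returns the i-th poolLocal in the array that l points to:
// the base address plus i times the size of one (padded) poolLocal.
func indexLocal(l unsafe.Pointer, i int) *poolLocal {
    lp := unsafe.Pointer(uintptr(l) + uintptr(i)*unsafe.Sizeof(poolLocal{}))
    return (*poolLocal)(lp)
}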
The implementation of runtime_procPin:
//go:nosplit
func procPin() int {
    // Get the current goroutine.
    _g_ := getg()
    // Get the M that is executing this goroutine.
    mp := _g_.m

    mp.locks++
    // Return the id of the P bound to this M.
    return int(mp.p.ptr().id)
}

//go:linkname sync_runtime_procPin sync.runtime_procPin
//go:nosplit
func sync_runtime_procPin() int {
    return procPin()
}
From the two snippets above it is clear that sync.Pool shards its cache by P: every P has its own pool, the poolLocal. As shown in the figure:
// Get selects an arbitrary item from the Pool, removes it from the
// Pool, and returns it to the caller.
// Get may choose to ignore the pool and treat it as empty.
// Callers should not assume any relation between values passed to Put and
// the values returned by Get.
//
// If Get would otherwise return nil and p.New is non-nil, Get returns
// the result of calling p.New.
func (p *Pool) Get() interface{} {
    if race.Enabled {
        race.Disable()
    }
    // Get the poolLocal for the current P.
    l := p.pin()
    // Take the private object of the current P, if any.
    x := l.private
    l.private = nil
    runtime_procUnpin()
    // If the private slot had no usable object...
    if x == nil {
        l.Lock()
        // ...take an object from the tail of this P's shared pool.
        last := len(l.shared) - 1
        if last >= 0 {
            x = l.shared[last]
            l.shared = l.shared[:last]
        }
        l.Unlock()
        // If the shared pool has nothing at its tail either...
        if x == nil {
            // ...go steal an object from other Ps' shared pools.
            x = p.getSlow()
        }
    }
    if race.Enabled {
        race.Enable()
        if x != nil {
            race.Acquire(poolRaceAddr(x))
        }
    }
    // If no pool had a usable object, fall back to creating a new one.
    if x == nil && p.New != nil {
        x = p.New()
    }
    return x
}

// pin pins the current goroutine to P, disables preemption and returns poolLocal pool for the P.
// Caller must call runtime_procUnpin() when done with the pool.
// pin mainly fetches the poolLocal for the current P; when the global sync.Pool has no
// poolLocal for this P, it triggers a rebuild of Pool.local and drops the old array.
func (p *Pool) pin() *poolLocal {
    pid := runtime_procPin()
    // In pinSlow we store to localSize and then to local, here we load in opposite order.
    // Since we've disabled preemption, GC cannot happen in between.
    // Thus here we must observe local at least as large localSize.
    // We can observe a newer/larger local, it is fine (we must observe its zero-initialized-ness).
    s := atomic.LoadUintptr(&p.localSize) // load-acquire
    l := p.local                          // load-consume
    if uintptr(pid) < s {
        return indexLocal(l, pid)
    }
    return p.pinSlow()
}

// getSlow's job is to find a usable object on any P.
func (p *Pool) getSlow() (x interface{}) {
    // See the comment in pin regarding ordering of the loads.
    // Read the current size of the per-P local array.
    size := atomic.LoadUintptr(&p.localSize) // load-acquire
    local := p.local                         // load-consume
    // Try to steal one element from other procs.
    // Get the id of the current P.
    pid := runtime_procPin()
    runtime_procUnpin()
    // Walk every P's poolLocal and try to take an object from the tail of its shared pool.
    for i := 0; i < int(size); i++ {
        l := indexLocal(local, (pid+i+1)%int(size))
        l.Lock()
        last := len(l.shared) - 1
        if last >= 0 {
            x = l.shared[last]
            l.shared = l.shared[:last]
            l.Unlock()
            break
        }
        l.Unlock()
    }
    return x
}
// Put adds x to the pool.
// Put is fairly simple: it finds the right poolLocal, prefers to hand the released
// object to local.private, and if private is already occupied it appends the object
// to the tail of shared.
func (p *Pool) Put(x interface{}) {
    if x == nil {
        return
    }
    if race.Enabled {
        if fastrand()%4 == 0 {
            // Randomly drop x on floor.
            return
        }
        race.ReleaseMerge(poolRaceAddr(x))
        race.Disable()
    }
    l := p.pin()
    if l.private == nil {
        l.private = x
        x = nil
    }
    runtime_procUnpin()
    if x != nil {
        l.Lock()
        l.shared = append(l.shared, x)
        l.Unlock()
    }
    if race.Enabled {
        race.Enable()
    }
}
The pool is reclaimed at GC time through the registered poolCleanup function, so the lifetime of the objects cached by Go's sync.Pool is the interval between two GCs.
func init() {
    runtime_registerPoolCleanup(poolCleanup)
}

func poolCleanup() {
    // This function is called with the world stopped, at the beginning of a garbage collection.
    // It must not allocate and probably should not call any runtime functions.
    // Defensively zero out everything, 2 reasons:
    // 1. To prevent false retention of whole Pools.
    // 2. If GC happens while a goroutine works with l.shared in Put/Get,
    //    it will retain whole Pool. So next cycle memory consumption would be doubled.
    // Walk the global allPools; its elements were appended in pinSlow when a Pool's local array was built.
    for i, p := range allPools {
        allPools[i] = nil
        for i := 0; i < int(p.localSize); i++ {
            l := indexLocal(p.local, i)
            // Clear private.
            l.private = nil
            // Clear shared.
            for j := range l.shared {
                l.shared[j] = nil
            }
            l.shared = nil
        }
        // Clear the pool's local array.
        p.local = nil
        p.localSize = 0
    }
    // Reset the global allPools slice.
    allPools = []*Pool{}
}

var (
    allPoolsMu Mutex
    allPools   []*Pool
)
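A quick way to observe this lifetime is to force garbage collections by hand. The sketch below is illustrative only: on Go 1.12 a single runtime.GC() empties the pool, while on newer versions the victim cache described next keeps objects alive for one extra cycle, so two GCs are used here to cover both cases.

package main

import (
    "fmt"
    "runtime"
    "sync"
)

func main() {
    newCalls := 0
    pool := sync.Pool{New: func() interface{} {
        newCalls++
        return make([]byte, 0, 1024)
    }}

    buf := pool.Get().([]byte) // the pool is empty, so New runs: newCalls == 1
    pool.Put(buf[:0])          // hand the buffer back to the pool

    runtime.GC() // on Go 1.12 this alone clears the pool
    runtime.GC() // on Go 1.13+ the second cycle also clears the victim cache

    pool.Get()                                       // nothing is cached any more, so New runs again
    fmt.Println("New was called", newCalls, "times") // prints 2
}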
In the newer implementation (Go 1.13), Pool gains two new fields, victim and victimSize:
type Pool struct {
    noCopy noCopy

    local     unsafe.Pointer // local fixed-size per-P pool, actual type is [P]poolLocal
    localSize uintptr        // size of the local array

    victim     unsafe.Pointer // local from previous cycle
    victimSize uintptr        // size of victims array

    // New optionally specifies a function to generate
    // a value when Get would otherwise return nil.
    // It may not be changed concurrently with calls to Get.
    New func() interface{}
}

// Local per-P Pool appendix.
type poolLocalInternal struct {
    private interface{} // Can be used only by the respective P.
    shared  poolChain   // Local P can pushHead/popHead; any P can popTail.
}

// poolChain is a dynamically-sized version of poolDequeue.
//
// This is implemented as a doubly-linked list queue of poolDequeues
// where each dequeue is double the size of the previous one. Once a
// dequeue fills up, this allocates a new one and only ever pushes to
// the latest dequeue. Pops happen from the other end of the list and
// once a dequeue is exhausted, it gets removed from the list.
type poolChain struct {
    // head is the poolDequeue to push to. This is only accessed
    // by the producer, so doesn't need to be synchronized.
    head *poolChainElt

    // tail is the poolDequeue to popTail from. This is accessed
    // by consumers, so reads and writes must be atomic.
    tail *poolChainElt
}

type poolChainElt struct {
    poolDequeue

    // next and prev link to the adjacent poolChainElts in this
    // poolChain.
    //
    // next is written atomically by the producer and read
    // atomically by the consumer. It only transitions from nil to
    // non-nil.
    //
    // prev is written atomically by the consumer and read
    // atomically by the producer. It only transitions from
    // non-nil to nil.
    next, prev *poolChainElt
}

// poolDequeue is a lock-free fixed-size single-producer,
// multi-consumer queue. The single producer can both push and pop
// from the head, and consumers can pop from the tail.
//
// It has the added feature that it nils out unused slots to avoid
// unnecessary retention of objects. This is important for sync.Pool,
// but not typically a property considered in the literature.
type poolDequeue struct {
    // headTail packs together a 32-bit head index and a 32-bit
    // tail index. Both are indexes into vals modulo len(vals)-1.
    //
    // tail = index of oldest data in queue
    // head = index of next slot to fill
    //
    // Slots in the range [tail, head) are owned by consumers.
    // A consumer continues to own a slot outside this range until
    // it nils the slot, at which point ownership passes to the
    // producer.
    //
    // The head index is stored in the most-significant bits so
    // that we can atomically add to it and the overflow is
    // harmless.
    headTail uint64

    // vals is a ring buffer of interface{} values stored in this
    // dequeue. The size of this must be a power of 2.
    //
    // vals[i].typ is nil if the slot is empty and non-nil
    // otherwise. A slot is still in use until *both* the tail
    // index has moved beyond it and typ has been set to nil. This
    // is set to nil atomically by the consumer and read
    // atomically by the producer.
    vals []eface
}

type eface struct {
    typ, val unsafe.Pointer
}
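To make the headTail comment above concrete: both 32-bit indexes live in one uint64 so that they can be read and advanced with a single atomic operation. The packing helpers look roughly like this (a paraphrased sketch of the poolDequeue helpers, not necessarily verbatim for your Go version):

const dequeueBits = 32

// unpack splits headTail into its 32-bit head and tail indexes.
func (d *poolDequeue) unpack(ptrs uint64) (head, tail uint32) {
    const mask = 1<<dequeueBits - 1
    head = uint32((ptrs >> dequeueBits) & mask)
    tail = uint32(ptrs & mask)
    return
}

// pack combines a head and a tail index back into a single uint64.
func (d *poolDequeue) pack(head, tail uint32) uint64 {
    const mask = 1<<dequeueBits - 1
    return (uint64(head) << dequeueBits) | uint64(tail&mask)
}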
In Get, the object x is now taken from the head of the local shared instead of from its tail as in 1.12.
func (p *Pool) Get() interface{} {
    if race.Enabled {
        race.Disable()
    }
    l, pid := p.pin()
    x := l.private
    l.private = nil
    if x == nil {
        // Try to pop the head of the local shard. We prefer
        // the head over the tail for temporal locality of
        // reuse.
        x, _ = l.shared.popHead()
        if x == nil {
            x = p.getSlow(pid)
        }
    }
    runtime_procUnpin()
    if race.Enabled {
        race.Enable()
        if x != nil {
            race.Acquire(poolRaceAddr(x))
        }
    }
    if x == nil && p.New != nil {
        x = p.New()
    }
    return x
}
In Put, the object x is now pushed onto the head of shared instead of being appended to its tail as in 1.12.
// Put adds x to the pool.
func (p *Pool) Put(x interface{}) {
    if x == nil {
        return
    }
    if race.Enabled {
        if fastrand()%4 == 0 {
            // Randomly drop x on floor.
            return
        }
        race.ReleaseMerge(poolRaceAddr(x))
        race.Disable()
    }
    l, _ := p.pin()
    if l.private == nil {
        l.private = x
        x = nil
    }
    if x != nil {
        l.shared.pushHead(x)
    }
    runtime_procUnpin()
    if race.Enabled {
        race.Enable()
    }
}
The changes in getSlow are as follows:

- 1.12: it walks every other P's poolLocal and takes an object from the tail of its shared slice, under the poolLocal lock.
- 1.13: it walks every other P's poolLocal and pops an object from the tail of its shared chain with popTail; if that fails, it falls back to the victim cache: first the private slot of the victim poolLocal for the current P, and if that is empty, the tails of the shared chains of all Ps' victim poolLocals.

func (p *Pool) getSlow(pid int) interface{} {
    // See the comment in pin regarding ordering of the loads.
    size := atomic.LoadUintptr(&p.localSize) // load-acquire
    locals := p.local                        // load-consume
    // Try to steal one element from other procs.
    // Walk every poolLocal's shared chain looking for a usable object.
    for i := 0; i < int(size); i++ {
        l := indexLocal(locals, (pid+i+1)%int(size))
        if x, _ := l.shared.popTail(); x != nil {
            return x
        }
    }

    // Try the victim cache. We do this after attempting to steal
    // from all primary caches because we want objects in the
    // victim cache to age out if at all possible.
    size = atomic.LoadUintptr(&p.victimSize)
    if uintptr(pid) >= size {
        return nil
    }
    // No poolLocal had a usable object, so fall back to the victim cache.
    locals = p.victim
    l := indexLocal(locals, pid)
    if x := l.private; x != nil {
        l.private = nil
        return x
    }
    // Walk every victim poolLocal's shared chain looking for a usable object.
    for i := 0; i < int(size); i++ {
        l := indexLocal(locals, (pid+i)%int(size))
        if x, _ := l.shared.popTail(); x != nil {
            return x
        }
    }

    // Mark the victim cache as empty for future gets don't bother
    // with it.
    atomic.StoreUintptr(&p.victimSize, 0)

    return nil
}
The cleanup function now does some extra work around victim. All pools are split into a new set and an old set. On every GC, the victim caches of the old pool set are reclaimed first; then each pool in the new set has its local handed over to victim and its local reclaimed; finally the new set becomes the old set, whose victim caches will be reclaimed at the next GC.
func poolCleanup() {
    // This function is called with the world stopped, at the beginning of a garbage collection.
    // It must not allocate and probably should not call any runtime functions.

    // Because the world is stopped, no pool user can be in a
    // pinned section (in effect, this has all Ps pinned).

    // Drop victim caches from all pools.
    for _, p := range oldPools {
        p.victim = nil
        p.victimSize = 0
    }

    // Move primary cache to victim cache.
    for _, p := range allPools {
        p.victim = p.local
        p.victimSize = p.localSize
        p.local = nil
        p.localSize = 0
    }

    // The pools with non-empty primary caches now have non-empty
    // victim caches and no pools have primary caches.
    oldPools, allPools = allPools, nil
}

var (
    allPoolsMu Mutex

    // allPools is the set of pools that have non-empty primary
    // caches. Protected by either 1) allPoolsMu and pinning or 2)
    // STW.
    allPools []*Pool

    // oldPools is the set of pools that may have non-empty victim
    // caches. Protected by STW.
    oldPools []*Pool
)
The figure below illustrates how the internal data-access flow changed between the two versions (figure: 1.12 vs 1.13 read/write paths). It compares the versions around three kinds of operations; to summarize:

- Get/Put in 1.12: objects are taken from and appended to the tail of shared. Every shared slice is visible to all goroutines, and reslicing a slice is not goroutine-safe, so each access must take the poolLocal mutex, which costs performance.
- Get/Put in 1.13: shared is now a lock-free doubly-linked list of ring buffers (poolChain). When a goroutine accesses its own poolLocal, it takes objects from the head of that chain, and Put likewise pushes onto the head, so no lock is needed and performance improves.
- Cleanup in 1.12: every GC resets and reclaims local entirely, so an object can only be reused within a single GC interval.
- Cleanup in 1.13: each GC reclaims the old pool set's victim caches, assigns the new pool set's local to victim, reclaims local, and then the new set becomes the old set, whose victim caches are reclaimed at the next GC. Objects accumulated before this round of GC are therefore carried over to the next round before being reclaimed; until then, if local has no usable object, Get can still fetch one from victim. The victim design extends the pool's reuse period by one more GC interval.