From patchwork Mon Jul 8 05:39:40 2024
X-Patchwork-Submitter: k4lizen
X-Patchwork-Id: 93487
Date: Mon, 08 Jul 2024 05:39:40 +0000
To: "libc-alpha@sourceware.org"
From: k4lizen
Subject: [PATCH v2] malloc: send freed small chunks to smallbin

Large chunks get added to the unsorted bin since sorting them takes
time; for small chunks the benefit of adding them to the unsorted bin
is non-existent, and actually hurts performance.  Splitting and
malloc_consolidate still add small chunks to the unsorted bin, but we
can hint the compiler that this is a relatively rare occurrence.
Benchmarking shows this to be consistently beneficial.
---
 malloc/malloc.c | 59 +++++++++++++++++++++++++++++++++----------------
 1 file changed, 40 insertions(+), 19 deletions(-)

-- 
2.45.2

diff --git a/malloc/malloc.c b/malloc/malloc.c
index bcb6e5b83c..ad77cd083e 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -4156,9 +4156,9 @@ _int_malloc (mstate av, size_t bytes)
 #endif
         }
 
-      /* place chunk in bin */
-
-      if (in_smallbin_range (size))
+      /* Place chunk in bin.  Only malloc_consolidate() and splitting can put
+         small chunks into the unsorted bin.  */
+      if (__glibc_unlikely (in_smallbin_range (size)))
         {
           victim_index = smallbin_index (size);
           bck = bin_at (av, victim_index);
@@ -4723,23 +4723,45 @@ _int_free_create_chunk (mstate av, mchunkptr p, INTERNAL_SIZE_T size,
       } else
 	clear_inuse_bit_at_offset(nextchunk, 0);
 
-      /*
-	Place the chunk in unsorted chunk list. Chunks are
-	not placed into regular bins until after they have
-	been given one chance to be used in malloc.
-      */
+      mchunkptr bck, fwd;
 
-      mchunkptr bck = unsorted_chunks (av);
-      mchunkptr fwd = bck->fd;
-      if (__glibc_unlikely (fwd->bk != bck))
-	malloc_printerr ("free(): corrupted unsorted chunks");
-      p->fd = fwd;
+      if (!in_smallbin_range (size))
+	{
+	  /*
+	    Place large chunks in unsorted chunk list.  Large chunks are
+	    not placed into regular bins until after they have
+	    been given one chance to be used in malloc.
+
+	    This branch is first in the if-statement to help branch
+	    prediction on consecutive adjacent frees.
+	  */
+
+	  bck = unsorted_chunks (av);
+	  fwd = bck->fd;
+	  if (__glibc_unlikely (fwd->bk != bck))
+	    malloc_printerr ("free(): corrupted unsorted chunks");
+	  p->fd_nextsize = NULL;
+	  p->bk_nextsize = NULL;
+	}
+      else
+	{
+	  /*
+	    Place small chunks directly in their smallbin, so they
+	    don't pollute the unsorted bin.
+	  */
+
+	  int chunk_index = smallbin_index (size);
+	  bck = bin_at (av, chunk_index);
+	  fwd = bck->fd;
+
+	  if (__glibc_unlikely (fwd->bk != bck))
+	    malloc_printerr ("free(): chunks in smallbin corrupted");
+
+	  mark_bin (av, chunk_index);
+	}
+
       p->bk = bck;
-      if (!in_smallbin_range(size))
-	{
-	  p->fd_nextsize = NULL;
-	  p->bk_nextsize = NULL;
-	}
+      p->fd = fwd;
       bck->fd = p;
       fwd->bk = p;
@@ -4748,7 +4770,6 @@ _int_free_create_chunk (mstate av, mchunkptr p, INTERNAL_SIZE_T size,
 
       check_free_chunk(av, p);
     }
-
   else
     {
       /* If the chunk borders the current high end of memory,