@@ -15,7 +15,6 @@

import com.github.ambry.account.Account;
import com.github.ambry.account.AccountService;
import com.github.ambry.account.Container;
import com.github.ambry.clustermap.ClusterMap;
import com.github.ambry.clustermap.ClusterMapUtils;
import com.github.ambry.commons.ByteBufferAsyncWritableChannel;
@@ -578,6 +578,11 @@ void setOperationCompleted() {
*/
public void cleanupChunks() {
releaseDataForAllChunks();
// At this point, if the channelReadBuf is not null it means it did not get fully read
// by the ChunkFiller in fillChunks and needs to be released.
if (channelReadBuf != null) {
channelReadBuf.release();
Contributor:
Nice! This is a cleaner fix, with a unit test, than adding synchronized. Thank you!

}
}
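The rationale for the fix — whoever aborts the operation must release a channelReadBuf the filler never consumed — can be illustrated with a toy reference-count model. All names here (ToyBuf, ToyOp) are hypothetical stand-ins for illustration, not actual Ambry or Netty classes:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class LeakSketch {
    // Minimal stand-in for Netty's reference-counted ByteBuf.
    static class ToyBuf {
        final AtomicInteger refCnt = new AtomicInteger(1);
        void release() { refCnt.decrementAndGet(); }
    }

    // Minimal stand-in for PutOperation: holds a channelReadBuf that the
    // ChunkFiller may never get to if the operation aborts early.
    static class ToyOp {
        ToyBuf channelReadBuf;

        // Mirrors the patched cleanupChunks(): release any buffer the
        // filler did not fully consume, then drop the reference.
        void cleanupChunks() {
            if (channelReadBuf != null) {
                channelReadBuf.release();
                channelReadBuf = null;
            }
        }
    }

    // Simulates an abort: data arrived from the channel, but the operation
    // completes with an error before fillChunks() consumes it. Returns the
    // refcount left outstanding after cleanup (0 means no leak).
    static int simulateAbort() {
        ToyOp op = new ToyOp();
        op.channelReadBuf = new ToyBuf();
        op.cleanupChunks();
        return op.channelReadBuf == null ? 0 : op.channelReadBuf.refCnt.get();
    }

    public static void main(String[] args) {
        System.out.println("outstanding refs after abort: " + simulateAbort()); // prints 0
    }
}
```

In the real code the buffer is a pooled Netty ByteBuf, so a missed release() shows up as a pool-level leak rather than a GC-visible object — which is why the new unit test relies on NettyByteBufLeakHelper to detect it.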

/**
@@ -1642,7 +1647,7 @@ void onFillComplete(boolean updateMetric) {
* @param channelReadBuf the {@link ByteBuf} from which to read data.
* @return the number of bytes transferred in this operation.
*/
synchronized int fillFrom(ByteBuf channelReadBuf) {
int fillFrom(ByteBuf channelReadBuf) {
Collaborator:
We still need this function to be synchronized here; it protects against a race condition.

Contributor Author:
Let's walk through to see if that's the case.

fillFrom is called in only one place, from fillChunks, which itself is only called from ChunkFiller in PutManager. ChunkFiller is a Runnable run by a single thread:

    chunkFillerThread = Utils.newThread("ChunkFillerThread-" + suffix, new ChunkFiller(), true);
    chunkFillerThread.start();

Since fillChunks is only accessed by a single thread, it only needs to be protected from concurrent access by error/cleanup threads. What is needed is for any objects used within the fillChunks routine that may also be concurrently accessed by those threads to be either behind a more narrowly scoped lock or declared volatile. So let's look at that.

In PutManager.poll we have:

    for (PutOperation op : putOperations) {
      try {
        op.poll(requestRegistrationCallback);
      } catch (Exception e) {
        op.setOperationExceptionAndComplete(
            new RouterException("Put poll encountered unexpected error", e, RouterErrorCode.UnexpectedInternalError));
      }
      if (op.isOperationComplete() && putOperations.remove(op)) {
        // In order to ensure that an operation is completed only once, call onComplete() only at the place where the
        // operation actually gets removed from the set of operations. See comment within closePendingOperations().
        onComplete(op);
      }
    }

So:

  1. setOperationExceptionAndComplete may be called with a RouterException.
  2. If isOperationComplete is true, then onComplete may be called, which calls cleanupChunks.

Therefore we need to:

a) make sure anything that happens within setOperationExceptionAndComplete is not concurrent with anything that happens in fillChunks.
b) make sure that either i) nothing concurrent happens in fillChunks after isOperationComplete is true, or ii) whatever does happen is behind a lock.

In PutManager.completePendingOperations we have:

    for (PutOperation op : putOperations) {
      // There is a rare scenario where the operation gets removed from this set and gets completed concurrently by
      // the RequestResponseHandler thread when it is in poll() or handleResponse(). In order to avoid the completion
      // from happening twice, complete it here only if the remove was successful.
      if (putOperations.remove(op)) {
        op.cleanupChunks();
        Exception e = new RouterException("Aborted operation because Router is closed.", RouterErrorCode.RouterClosed);
        routerMetrics.operationDequeuingRate.mark();
        routerMetrics.operationAbortCount.inc();
        routerMetrics.onPutBlobError(e, op.isEncryptionEnabled(), op.isStitchOperation());
        nonBlockingRouter.completeOperation(op.getFuture(), op.getCallback(), null, e);
      }
    }

completePendingOperations only runs as cleanup within the ChunkFiller thread, so it cannot be concurrent with fillChunks.

So let's look at condition a:

  void setOperationExceptionAndComplete(Exception exception) {
    if (exception instanceof RouterException) {
      RouterUtils.replaceOperationException(operationException, (RouterException) exception, this::getPrecedenceLevel);
    } else if (exception instanceof ClosedChannelException) {
      operationException.compareAndSet(null, exception);
    } else {
      operationException.set(exception);
    }
    setOperationCompleted();
  }
  
  ...
  
    void setOperationCompleted() {
    operationCompleted = true;
    clearReadyChunks();
  }
  
  ...
  
    private synchronized void clearReadyChunks() {
    for (PutChunk chunk : putChunks) {
      logger.debug("{}: Chunk {} state: {}", loggingContext, chunk.getChunkIndex(), chunk.getState());
      // Only release the chunk in ready or complete mode. Filler thread will release the chunk in building mode
      // and the encryption thread will release the chunk in encrypting mode.
      if (chunk.isReady() || chunk.isComplete()) {
        chunk.clear();
      }
    }
  }

So for condition a we set the exception, set operationCompleted, and clear chunks that are provably finished. None of this involves concurrent modification with fillChunks.

Let's look at condition b:

fillChunks() {
  // a lot of channelReadBuf and chunk modification!
}

For condition b we can either add synchronized to fillChunks (instead of fillFrom), or we can add a lock around updates to the operationCompleted value (wherever we need to avoid TOCTOU). Making fillChunks synchronized should add the least complexity without too large an overhead, since most of the work in fillChunks happens within an internal loop that depends on the operationCompleted value.
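The chosen option — fillChunks and cleanup behind the same monitor, gated on operationCompleted — can be sketched with a minimal two-thread model. The class and field names here are hypothetical stand-ins, not the actual Ambry code:

```java
public class SyncSketch {
    private volatile boolean operationCompleted = false;
    private int[] buf = new int[4]; // stand-in for the chunk buffers
    private int useAfterFree = 0;   // counts touches of a released buffer

    // Filler thread: all buffer work happens under the object's monitor,
    // and the completed flag is re-checked inside the lock (no TOCTOU).
    synchronized void fillChunks() {
        if (operationCompleted) {
            return; // cleanup already ran; do not touch released buffers
        }
        if (buf == null) {
            useAfterFree++; // the analogue of touching a released ByteBuf
            return;
        }
        buf[0]++;
    }

    // Error/cleanup thread: completes the operation, then releases buffers.
    synchronized void cleanupChunks() {
        operationCompleted = true;
        buf = null; // stand-in for ByteBuf.release()
    }

    // Races a filler thread against a cleanup thread; returns how many
    // times the filler touched a released buffer (0 with the lock in place).
    static int race() {
        SyncSketch op = new SyncSketch();
        Thread filler = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) {
                op.fillChunks();
            }
        });
        Thread cleaner = new Thread(op::cleanupChunks);
        try {
            filler.start();
            cleaner.start();
            filler.join();
            cleaner.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return op.useAfterFree;
    }

    public static void main(String[] args) {
        System.out.println("use-after-release count: " + race()); // prints 0
    }
}
```

Without the shared monitor, the filler could observe operationCompleted == false and then read buf after cleanup had nulled it; with both methods synchronized, the two writes in cleanupChunks are atomic with respect to the filler's checks.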

int toWrite;
ByteBuf slice;
if (buf == null) {
Expand Down
@@ -14,6 +14,7 @@
package com.github.ambry.router;

import com.codahale.metrics.MetricRegistry;
import com.github.ambry.utils.NettyByteBufLeakHelper;
import com.github.ambry.account.Account;
import com.github.ambry.account.Container;
import com.github.ambry.clustermap.DataNodeId;
@@ -22,6 +23,7 @@
import com.github.ambry.clustermap.PartitionId;
import com.github.ambry.clustermap.ReplicaId;
import com.github.ambry.commons.BlobId;
import com.github.ambry.commons.ByteBufReadableStreamChannel;
import com.github.ambry.commons.ByteBufferReadableStreamChannel;
import com.github.ambry.commons.Callback;
import com.github.ambry.commons.LoggingNotificationSystem;
@@ -37,7 +39,6 @@
import com.github.ambry.frontend.Operations;
import com.github.ambry.messageformat.BlobProperties;
import com.github.ambry.messageformat.MessageFormatRecord;
import com.github.ambry.named.NamedBlobRecord;
import com.github.ambry.network.NetworkClient;
import com.github.ambry.network.NetworkClientErrorCode;
import com.github.ambry.network.NetworkClientFactory;
@@ -97,6 +98,8 @@
import java.util.stream.Collectors;
import java.util.stream.LongStream;
import javax.sql.DataSource;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;
import org.json.JSONObject;
import org.junit.AfterClass;
import org.junit.Assert;
@@ -4550,4 +4553,52 @@ static void verifyRepairRequestRecordInDb(MysqlRepairRequestsDb db, BlobId blobI
assertEquals(expectedRecord.getExpirationTimeMs(), record.getExpirationTimeMs());
}
}

/**
* Test for ByteBuf memory leaks in PutOperation when an operation is aborted in the middle of a put.
* This test verifies that PutOperation properly releases its ByteBufs when the operation completes or fails,
* even if the ChunkFiller thread has not yet processed some of the data.
*/
@Test
public void testPutOperationByteBufLeakOnAbort() throws Exception {
NettyByteBufLeakHelper testLeakHelper = new NettyByteBufLeakHelper();
testLeakHelper.beforeTest();

Properties props = getNonBlockingRouterProperties(localDcName);
int chunkSize = 512;
props.setProperty("router.max.put.chunk.size.bytes", Integer.toString(chunkSize));
setRouter(props, mockServerLayout, new LoggingNotificationSystem());

// Configure servers to succeed for first few chunks, then fail
List<ServerErrorCode> serverErrorList = new ArrayList<>();
serverErrorList.add(ServerErrorCode.NoError);
serverErrorList.add(ServerErrorCode.NoError);
for (int i = 0; i < 100; i++) {
serverErrorList.add(ServerErrorCode.PartitionReadOnly);
}
mockServerLayout.getMockServers().forEach(server -> server.setServerErrors(serverErrorList));

// The first two will run normally, but 3+ will get ServerErrorCode.PartitionReadOnly
int blobSize = 100 * chunkSize;
byte[] blobData = new byte[blobSize];
ThreadLocalRandom.current().nextBytes(blobData);
ByteBuf pooledBuf = PooledByteBufAllocator.DEFAULT.buffer(blobSize);
pooledBuf.writeBytes(blobData);
ByteBufReadableStreamChannel channel = new ByteBufReadableStreamChannel(pooledBuf);

BlobProperties blobProperties = new BlobProperties(blobSize, "serviceId", "ownerId", "contentType",
false, Utils.Infinite_Time, Utils.getRandomShort(ThreadLocalRandom.current()),
Utils.getRandomShort(ThreadLocalRandom.current()), false, null, null, null);

try {
router.putBlob(blobProperties, new byte[10], channel, PutBlobOptions.DEFAULT).get();
} catch (ExecutionException e) {
// Expected for operations that hit error responses
}
// If there are leaks, it will be detected in NettyByteBufLeakHelper and fail the test.
// Should be called before router close as closing of the router shouldn't be required to prevent leaks.
testLeakHelper.afterTest();
router.close();
router = null;
}
}