[WIP] add more broadcast logs to scroll-v5.7.11 #1075

Open · wants to merge 5 commits into base: develop
68 changes: 48 additions & 20 deletions core/tx_pool.go
@@ -493,6 +493,7 @@ func (pool *TxPool) SetGasPrice(price *big.Int) {
// pool.priced is sorted by GasFeeCap, so we have to iterate through pool.all instead
drop := pool.all.RemotesBelowTip(price)
for _, tx := range drop {
log.Error("Dropping transactions below price threshold", "tx", tx.Hash(), "price", price)
pool.removeTx(tx.Hash(), false)
}
Comment on lines +496 to 498
🛠️ Refactor suggestion

Consider using a lower log level for price-based transaction drops

The use of log.Error for dropping transactions below the price threshold may lead to excessive error logging. This operation is a normal part of transaction pool management, especially during periods of price volatility.

Suggestions:

  1. Use log.Debug or log.Info instead of log.Error.
  2. Consider implementing an aggregated logging approach for price-based drops to prevent log flooding.

Example:

if aggregatedLogs {
    droppedTxCounter.Inc(1)
} else {
    log.Debug("Dropping transaction below price threshold", "tx", tx.Hash(), "price", price)
}

Periodically log the aggregated count:

log.Info("Dropped transactions summary", "count", droppedTxCounter.Count())

pool.priced.Removed(len(drop))
@@ -783,13 +784,13 @@ func (pool *TxPool) add(tx *types.Transaction, local bool) (replaced bool, err e
// If the transaction is already known, discard it
hash := tx.Hash()
if pool.all.Get(hash) != nil {
log.Trace("Discarding already known transaction", "hash", hash)
log.Error("Discarding already known transaction", "hash", hash)
knownTxMeter.Mark(1)
Comment on lines +787 to 788
⚠️ Potential issue

Consider using a less severe log level for discarding known transactions.

The log level for discarding already known transactions has been changed from log.Trace to log.Error. While it's important to track discarded transactions, using log.Error for what could be a common occurrence might lead to several issues:

  1. Log pollution: This could flood the error logs, making it harder to identify actual critical errors.
  2. Performance impact: Frequent error logging might have a slight performance overhead.
  3. Misleading severity: Discarding a known transaction is not necessarily an error condition.

Consider using log.Debug or log.Info instead, which would provide visibility without the drawbacks of using the error level.

-		log.Error("Discarding already known transaction", "hash", hash)
+		log.Debug("Discarding already known transaction", "hash", hash)
📝 Committable suggestion


Suggested change

-		log.Error("Discarding already known transaction", "hash", hash)
+		log.Debug("Discarding already known transaction", "hash", hash)
 		knownTxMeter.Mark(1)

return false, ErrAlreadyKnown
}

if pool.IsMiner() && rawdb.IsSkippedTransaction(pool.chain.Database(), hash) {
log.Trace("Discarding already known skipped transaction", "hash", hash)
log.Error("Discarding already known skipped transaction", "hash", hash)
knownSkippedTxMeter.Mark(1)
return false, ErrAlreadyKnown
Comment on lines +793 to 795
🛠️ Refactor suggestion

Adjust log level and add context for known skipped transactions

The current use of log.Error for discarding known skipped transactions may lead to unnecessary error log pollution. This operation is normal for miner nodes and doesn't necessarily indicate an error condition.

Recommendations:

  1. Use log.Debug or log.Info instead of log.Error.
  2. Add more context to the log message to explain why these transactions are skipped.

Example:

if pool.IsMiner() && rawdb.IsSkippedTransaction(pool.chain.Database(), hash) {
    log.Debug("Discarding known skipped transaction", "hash", hash, "reason", "previously skipped by miner")
    knownSkippedTxMeter.Mark(1)
    return false, ErrAlreadyKnown
}

}
@@ -800,15 +801,15 @@ func (pool *TxPool) add(tx *types.Transaction, local bool) (replaced bool, err e

// If the transaction fails basic validation, discard it
if err := pool.validateTx(tx, isLocal); err != nil {
log.Trace("Discarding invalid transaction", "hash", hash, "err", err)
log.Error("Discarding invalid transaction", "hash", hash, "err", err)
invalidTxMeter.Mark(1)
return false, err
Comment on lines +804 to 806
🛠️ Refactor suggestion

Reconsider log level for invalid transactions and add rate limiting.

Changing the log level for invalid transactions from log.Trace to log.Error significantly increases visibility, which can be beneficial for detecting issues. However, this approach might have some drawbacks:

  1. Log Pollution: If there are many invalid transactions, this could flood the error logs.
  2. Performance: Frequent error logging might have a slight performance impact.

While invalid transactions are more concerning than known transactions, not all of them may warrant an error log. Consider the following improvements:

  1. Use log.Warn instead of log.Error for most cases.
  2. Implement a rate limiter for this log to prevent flooding.
  3. Consider categorizing errors and only use log.Error for severe cases.
-		log.Error("Discarding invalid transaction", "hash", hash, "err", err)
+		if isSevereError(err) {
+			log.Error("Discarding invalid transaction", "hash", hash, "err", err)
+		} else {
+			log.Warn("Discarding invalid transaction", "hash", hash, "err", err)
+		}
+		logInvalidTxCounter.Inc(1)

Also, consider adding a rate limiter to this logging call and a function isSevereError to categorize error severity.

Committable suggestion was skipped due to low confidence.
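
The rate limiter and isSevereError helper mentioned above are left unspecified in the suggestion. A minimal sketch, assuming golang.org/x/time/rate for the token bucket; the split of pool errors into severe and routine below is an illustrative assumption, not the pool's actual error taxonomy:

package core

import (
	"errors"
	"time"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/log"
	"golang.org/x/time/rate"
)

// Hypothetical limiter: at most ~10 invalid-tx log lines per second.
var invalidTxLogLimiter = rate.NewLimiter(rate.Every(100*time.Millisecond), 10)

// isSevereError is an assumed categorization; the real split would need
// review against the pool's full error set.
func isSevereError(err error) bool {
	return errors.Is(err, ErrInvalidSender) || errors.Is(err, ErrGasLimit)
}

func logInvalidTx(hash common.Hash, err error) {
	if !invalidTxLogLimiter.Allow() {
		return // throttle the log line; meters are still marked at the call site
	}
	if isSevereError(err) {
		log.Error("Discarding invalid transaction", "hash", hash, "err", err)
	} else {
		log.Warn("Discarding invalid transaction", "hash", hash, "err", err)
	}
}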

}
// If the transaction pool is full, discard underpriced transactions
if uint64(pool.all.Slots()+numSlots(tx)) > pool.config.GlobalSlots+pool.config.GlobalQueue {
// If the new transaction is underpriced, don't accept it
if !isLocal && pool.priced.Underpriced(tx) {
log.Trace("Discarding underpriced transaction", "hash", hash, "gasTipCap", tx.GasTipCap(), "gasFeeCap", tx.GasFeeCap())
log.Error("Discarding underpriced transaction", "hash", hash, "gasTipCap", tx.GasTipCap(), "gasFeeCap", tx.GasFeeCap())
underpricedTxMeter.Mark(1)
return false, ErrUnderpriced
Comment on lines +812 to 814
🛠️ Refactor suggestion

Adjust log level for underpriced transactions and consider aggregating logs.

The changes to this log statement have both positive and potentially problematic aspects:

Positives:

  1. Including gasTipCap and gasFeeCap provides valuable context for understanding why transactions are considered underpriced.

Areas for improvement:

  1. Log Level: Using log.Error for underpriced transactions is likely too severe. This is a normal part of transaction pool management, especially during periods of gas price volatility.
  2. Potential for Log Flooding: In busy networks or during gas price spikes, this could generate an excessive number of log entries.

Suggestions:

  1. Change the log level to log.Debug or log.Info.
  2. Consider implementing an aggregated logging approach for underpriced transactions.
-			log.Error("Discarding underpriced transaction", "hash", hash, "gasTipCap", tx.GasTipCap(), "gasFeeCap", tx.GasFeeCap())
+			log.Debug("Discarding underpriced transaction", "hash", hash, "gasTipCap", tx.GasTipCap(), "gasFeeCap", tx.GasFeeCap())
+			if metrics.Enabled {
+				underpricedTxCounter.Inc(1)
+			}

Also, consider adding a periodic log that summarizes the number of underpriced transactions discarded in the last interval, rather than logging each one individually.

Committable suggestion was skipped due to low confidence.
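
One way to realize the aggregation idea: bump an atomic counter at each discard site and drain it from a ticker loop, so busy periods produce one summary line per interval. A sketch with hypothetical names, not part of this PR:

package core

import (
	"sync/atomic"
	"time"

	"github.com/ethereum/go-ethereum/log"
)

// droppedUnderpricedCtr is bumped via atomic.AddUint64 at each discard site
// instead of logging per transaction.
var droppedUnderpricedCtr uint64

// reportUnderpricedLoop emits one summary line per interval; it would be
// started once, e.g. next to the pool's existing housekeeping loop.
func reportUnderpricedLoop(interval time.Duration, quit <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			if n := atomic.SwapUint64(&droppedUnderpricedCtr, 0); n > 0 {
				log.Info("Discarded underpriced transactions", "count", n, "interval", interval)
			}
		case <-quit:
			return
		}
	}
}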

}
@@ -817,6 +818,7 @@ func (pool *TxPool) add(tx *types.Transaction, local bool) (replaced bool, err e
// do too many replacements between reorg-runs, so we cap the number of
// replacements to 25% of the slots
if pool.changesSinceReorg > int(pool.config.GlobalSlots/4) {
log.Error("Discarding transaction due to too many changes since reorg", "hash", hash)
throttleTxMeter.Mark(1)
return false, ErrTxPoolOverflow
Comment on lines +821 to 823
🛠️ Refactor suggestion

Adjust log level and add context for transactions discarded due to reorg frequency.

The change from log.Trace to log.Error for transactions discarded due to frequent reorgs is an improvement in visibility, but there are some considerations:

  1. Log Level: While this situation is more significant than regular transaction discards, log.Error might still be too severe. Consider using log.Warn instead.

  2. Context: The current message doesn't provide enough information about why this is happening. Adding more context would be beneficial for debugging and monitoring.

  3. Threshold: It would be helpful to log the current threshold and the actual count of changes since the last reorg.

-			log.Error("Discarding transaction due to too many changes since reorg", "hash", hash)
+			log.Warn("Discarding transaction due to high reorg frequency", "hash", hash, "changesSinceReorg", pool.changesSinceReorg, "threshold", int(pool.config.GlobalSlots/4))

Consider also adding a metric to track how often this situation occurs, as it might indicate network instability or other issues that require attention.

📝 Committable suggestion


Suggested change

-			log.Error("Discarding transaction due to too many changes since reorg", "hash", hash)
+			log.Warn("Discarding transaction due to high reorg frequency", "hash", hash, "changesSinceReorg", pool.changesSinceReorg, "threshold", int(pool.config.GlobalSlots/4))
 			throttleTxMeter.Mark(1)
 			return false, ErrTxPoolOverflow
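
On the metrics point: the diff already marks throttleTxMeter for these events, so a dedicated meter is only needed if reorg-driven throttling should be distinguishable on dashboards. A sketch following the pattern of the pool's existing meters (the name is illustrative only):

package core

import "github.com/ethereum/go-ethereum/metrics"

// Illustrative meter, declared alongside the pool's other meters; it would be
// marked next to throttleTxMeter.Mark(1) at the discard site.
var reorgThrottleTxMeter = metrics.NewRegisteredMeter("txpool/reorgthrottled", nil)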

}
@@ -828,52 +830,67 @@ func (pool *TxPool) add(tx *types.Transaction, local bool) (replaced bool, err e

// Special case, we still can't make the room for the new remote one.
if !isLocal && !success {
log.Trace("Discarding overflown transaction", "hash", hash)
log.Error("Discarding overflown transaction", "hash", hash)
overflowedTxMeter.Mark(1)
return false, ErrTxPoolOverflow
Comment on lines +833 to 835
🛠️ Refactor suggestion

Refine logging for overflown transactions with appropriate level and additional context.

The change from log.Trace to log.Error for overflown transactions increases visibility, but there are opportunities for improvement:

  1. Log Level: While a full transaction pool is significant, it's not necessarily an error condition. Consider using log.Warn instead.

  2. Context: The current message doesn't provide information about the pool's state. Adding details about current capacity and limits would be beneficial for monitoring and debugging.

  3. Frequency: If this condition occurs often, it might lead to log pollution. Consider implementing a rate limiter or aggregating these logs.

-			log.Error("Discarding overflown transaction", "hash", hash)
+			log.Warn("Discarding transaction due to full pool", "hash", hash, "poolSize", pool.all.Count(), "capacity", pool.config.GlobalSlots+pool.config.GlobalQueue)
+			if metrics.Enabled {
+				overflownTxCounter.Inc(1)
+			}

Also, consider adding a periodic summary log that provides an overview of the pool's state and the number of overflown transactions in the last interval. This would provide valuable insights without the risk of log flooding.

Committable suggestion was skipped due to low confidence.
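
For the periodic overview, upstream geth's pool loop already emits a periodic Debug-level "Transaction pool status report"; raising that to Info, or adding a small helper that also reports capacity, would cover the suggestion. A sketch of such a helper (hypothetical, same package as TxPool; pending/queued come from the pool's Stats accessor):

// logPoolSummary emits one overview line with occupancy and capacity.
func (pool *TxPool) logPoolSummary(overflownLastInterval uint64) {
	pending, queued := pool.Stats()
	log.Info("Transaction pool status",
		"pending", pending,
		"queued", queued,
		"slots", pool.all.Slots(),
		"capacity", pool.config.GlobalSlots+pool.config.GlobalQueue,
		"overflown", overflownLastInterval)
}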

}
// Bump the counter of rejections-since-reorg
pool.changesSinceReorg += len(drop)
// Kick out the underpriced remote transactions.
for _, tx := range drop {
log.Trace("Discarding freshly underpriced transaction", "hash", tx.Hash(), "gasTipCap", tx.GasTipCap(), "gasFeeCap", tx.GasFeeCap())
log.Error("Discarding freshly underpriced transaction", "hash", tx.Hash(), "gasTipCap", tx.GasTipCap(), "gasFeeCap", tx.GasFeeCap())
underpricedTxMeter.Mark(1)
pool.removeTx(tx.Hash(), false)
}
Comment on lines +841 to 844
🛠️ Refactor suggestion

Harmonize logging for underpriced transactions and consider aggregation.

This log statement is very similar to the one we reviewed earlier for underpriced transactions. For consistency and improved logging practices, consider the following:

  1. Log Level: As with the previous underpriced transaction log, log.Error is likely too severe for this normal operation. Use log.Debug or log.Info instead.

  2. Consistency: Ensure that the logging for all underpriced transactions (both fresh and existing) uses the same level and format.

  3. Aggregation: Given that this could occur frequently during gas price fluctuations, consider aggregating these logs to prevent flooding.

  4. Context: The additional gasTipCap and gasFeeCap information is valuable. Consider adding the minimum required values for comparison.

-			log.Error("Discarding freshly underpriced transaction", "hash", tx.Hash(), "gasTipCap", tx.GasTipCap(), "gasFeeCap", tx.GasFeeCap())
+			log.Debug("Discarding freshly underpriced transaction", "hash", tx.Hash(), "gasTipCap", tx.GasTipCap(), "gasFeeCap", tx.GasFeeCap(), "minTip", pool.gasPrice, "minFeeCap", pool.priced.urgent.baseFee)
+			if metrics.Enabled {
+				underpricedTxCounter.Inc(1)
+			}

Additionally, implement a periodic summary log that provides an overview of underpriced transactions discarded in the last interval, including both fresh and existing ones. This would offer valuable insights without risking log saturation.

Committable suggestion was skipped due to low confidence.

}
// Try to replace an existing transaction in the pending pool
from, _ := types.Sender(pool.signer, tx) // already validated

log.Info("already validated", "hash", hash, "from", from, "to", tx.To())

Comment on lines +848 to +850
🛠️ Refactor suggestion

Reconsider logging level and content for transaction validation

While adding more visibility to the transaction validation process can be beneficial, the current implementation may lead to several issues:

  1. Log Verbosity: Using log.Info for these common operations could result in excessive logging, potentially impacting performance and log storage.
  2. Security Concerns: Logging detailed transaction information (hash, from, to) at the Info level might expose sensitive data unnecessarily.
  3. Debug vs Info: This level of detail is typically more appropriate for debug logging.

Recommendations:

  1. Consider using log.Debug instead of log.Info for these messages.
  2. Implement a mechanism to enable/disable verbose logging dynamically.
  3. Review the logged information to ensure no sensitive data is exposed.

Example:

func logVerbose(msg string, ctx ...interface{}) {
    if verboseLogging {
        log.Debug(msg, ctx...)
    }
}

// Usage
logVerbose("Transaction validated", "hash", hash)

if list := pool.pending[from]; list != nil && list.Overlaps(tx) {

log.Info("already pending", "hash", hash, "from", from, "to", tx.To())

// Nonce already pending, check if required price bump is met
inserted, old := list.Add(tx, pool.currentState, pool.config.PriceBump, pool.chainconfig, pool.currentHead)

log.Info("already pending", "hash", hash, "inserted", inserted, "old", old.Hash().Hex())

if !inserted {
log.Error("already pending, ErrReplaceUnderpriced", "hash", hash)
pendingDiscardMeter.Mark(1)
return false, ErrReplaceUnderpriced
}
// New transaction is better, replace old one
if old != nil {
log.Info("already pending, new transaction is better, replace old one", "hash", hash, "old", old.Hash().Hex())
pool.all.Remove(old.Hash())
pool.calculateTxsLifecycle(types.Transactions{old}, time.Now())
pool.priced.Removed(1)
pendingReplaceMeter.Mark(1)
}
log.Info("already pending 1", "hash", hash)
pool.all.Add(tx, isLocal)
pool.priced.Put(tx, isLocal)
pool.journalTx(from, tx)
pool.queueTxEvent(tx)
log.Trace("Pooled new executable transaction", "hash", hash, "from", from, "to", tx.To())
log.Info("Pooled new executable transaction", "hash", hash, "from", from, "to", tx.To())

// Successful promotion, bump the heartbeat
pool.beats[from] = time.Now()
return old != nil, nil
}
// New transaction isn't replacing a pending one, push into queue
log.Info("new transaction isn't replacing a pending one, push into queue", "hash", hash, "from", from, "to", tx.To())
replaced, err = pool.enqueueTx(hash, tx, isLocal, true)
if err != nil {
return false, err
}
// Mark local addresses and journal local transactions
log.Info("mark local addresses and journal local transactions", "hash", hash, "from", from, "to", tx.To())
if local && !pool.locals.contains(from) {
log.Info("Setting new local account", "hash", hash, "from", from, "to", tx.To())
log.Info("Setting new local account", "address", from)
pool.locals.add(from)
pool.priced.Removed(pool.all.RemoteToLocals(pool.locals)) // Migrate the remotes if it's marked as local first time.
@@ -883,7 +900,7 @@ func (pool *TxPool) add(tx *types.Transaction, local bool) (replaced bool, err e
}
pool.journalTx(from, tx)

log.Trace("Pooled new future transaction", "hash", hash, "from", from, "to", tx.To())
log.Info("Pooled new future transaction", "hash", hash, "from", from, "to", tx.To())
return replaced, nil
}

@@ -906,6 +923,7 @@ func (pool *TxPool) enqueueTx(hash common.Hash, tx *types.Transaction, local boo
// Discard any previous transaction and mark this
if old != nil {
pool.all.Remove(old.Hash())
log.Error("enqueueTx: Discarding any previous transaction", "hash", old.Hash())
pool.priced.Removed(1)
pool.calculateTxsLifecycle(types.Transactions{old}, time.Now())
queuedReplaceMeter.Mark(1)
Comment on lines +926 to 929
🛠️ Refactor suggestion

Adjust log level for transaction replacement in enqueueTx

The use of log.Error for discarding a previous transaction during the enqueueTx process may lead to unnecessary error log pollution. This operation is a normal part of transaction pool management, particularly when replacing transactions with higher gas prices.

Recommendations:

  1. Use log.Debug or log.Info instead of log.Error.
  2. Add more context to the log message to clarify that this is a replacement operation.

Example:

if old != nil {
    log.Debug("Replacing transaction in queue", "old_hash", old.Hash(), "new_hash", hash, "reason", "higher gas price")
    pool.all.Remove(old.Hash())
    pool.priced.Removed(1)
    pool.calculateTxsLifecycle(types.Transactions{old}, time.Now())
    queuedReplaceMeter.Mark(1)
}

@@ -955,6 +973,7 @@ func (pool *TxPool) promoteTx(addr common.Address, hash common.Hash, tx *types.T
inserted, old := list.Add(tx, pool.currentState, pool.config.PriceBump, pool.chainconfig, pool.currentHead)
if !inserted {
// An older transaction was better, discard this
log.Error("promoteTx: Discarding this transaction, an older transaction was better", "hash", hash)
pool.all.Remove(hash)
pool.calculateTxsLifecycle(types.Transactions{tx}, time.Now())
pool.priced.Removed(1)
@@ -963,6 +982,7 @@ func (pool *TxPool) promoteTx(addr common.Address, hash common.Hash, tx *types.T
}
// Otherwise discard any previous transaction and mark this
if old != nil {
log.Error("promoteTx: Discarding any previous transaction and mark this", "old.Hash()", old.Hash())
pool.all.Remove(old.Hash())
pool.calculateTxsLifecycle(types.Transactions{old}, time.Now())
pool.priced.Removed(1)
@@ -1084,6 +1104,7 @@ func (pool *TxPool) addTxsLocked(txs []*types.Transaction, local bool) ([]error,
replaced, err := pool.add(tx, local)
errs[i] = err
if err == nil && !replaced {
log.Info("dirty.addTx", "hash", tx.Hash())
dirty.addTx(tx)
}
}
@@ -1145,6 +1166,7 @@ func (pool *TxPool) removeTx(hash common.Hash, outofbound bool) {
addr, _ := types.Sender(pool.signer, tx) // already validated during insertion

// Remove it from the list of known transactions
log.Error("removeTx: Remove it from the list of known transactions", "hash", hash)
pool.all.Remove(hash)
pool.calculateTxsLifecycle(types.Transactions{tx}, time.Now())
if outofbound {
@@ -1262,6 +1284,7 @@ func (pool *TxPool) scheduleReorgLoop() {
} else {
dirtyAccounts.merge(req)
}
// TODO: print dirtyAccounts related in reorg
launchNextRun = true
pool.reorgDoneCh <- nextDone

@@ -1487,6 +1510,7 @@ func (pool *TxPool) promoteExecutables(accounts []common.Address) []*types.Trans
forwards := list.Forward(pool.currentState.GetNonce(addr))
for _, tx := range forwards {
hash := tx.Hash()
log.Error("Drop all transactions that are deemed too old (low nonce)", "hash", hash)
pool.all.Remove(hash)
pool.calculateTxsLifecycle(types.Transactions{tx}, time.Now())
}
@@ -1497,6 +1521,7 @@ func (pool *TxPool) promoteExecutables(accounts []common.Address) []*types.Trans
drops, _ := list.FilterF(costLimit, pool.currentMaxGas, pool.executableTxFilter(costLimit))
for _, tx := range drops {
hash := tx.Hash()
log.Error("Drop all transactions that are too costly (low balance or out of gas)", "hash", hash)
pool.all.Remove(hash)
pool.calculateTxsLifecycle(types.Transactions{tx}, time.Now())
}
@@ -1516,16 +1541,17 @@ func (pool *TxPool) promoteExecutables(accounts []common.Address) []*types.Trans

// Drop all transactions over the allowed limit
var caps types.Transactions
if !pool.locals.contains(addr) {
caps = list.Cap(int(pool.config.AccountQueue))
for _, tx := range caps {
hash := tx.Hash()
pool.all.Remove(hash)
pool.calculateTxsLifecycle(types.Transactions{tx}, time.Now())
log.Trace("Removed cap-exceeding queued transaction", "hash", hash)
}
queuedRateLimitMeter.Mark(int64(len(caps)))
}
// if !pool.locals.contains(addr) {
// caps = list.Cap(int(pool.config.AccountQueue))
// for _, tx := range caps {
// hash := tx.Hash()
// log.Error("Drop all transactions over the allowed limit", "hash", hash)
// pool.all.Remove(hash)
// pool.calculateTxsLifecycle(types.Transactions{tx}, time.Now())
// log.Trace("Removed cap-exceeding queued transaction", "hash", hash)
// }
// queuedRateLimitMeter.Mark(int64(len(caps)))
// }
// Mark all the items dropped as removed
pool.priced.Removed(len(forwards) + len(drops) + len(caps))
queuedGauge.Dec(int64(len(forwards) + len(drops) + len(caps)))
@@ -1603,6 +1629,7 @@ func (pool *TxPool) truncatePending() {
for _, tx := range caps {
// Drop the transaction from the global pools too
hash := tx.Hash()
log.Error("Drop the transaction from the global pools too 1", "hash", hash)
pool.all.Remove(hash)
pool.calculateTxsLifecycle(types.Transactions{tx}, time.Now())

@@ -1631,6 +1658,7 @@ func (pool *TxPool) truncatePending() {
for _, tx := range caps {
// Drop the transaction from the global pools too
hash := tx.Hash()
log.Error("Drop the transaction from the global pools too 2", "hash", hash)
pool.all.Remove(hash)
pool.calculateTxsLifecycle(types.Transactions{tx}, time.Now())

@@ -1713,14 +1741,14 @@ func (pool *TxPool) demoteUnexecutables() {
hash := tx.Hash()
pool.all.Remove(hash)
pool.calculateTxsLifecycle(types.Transactions{tx}, time.Now())
log.Trace("Removed old pending transaction", "hash", hash)
log.Error("Removed old pending transaction", "hash", hash)
}
// Drop all transactions that are too costly (low balance or out of gas), and queue any invalids back for later
costLimit := pool.currentState.GetBalance(addr)
drops, invalids := list.FilterF(costLimit, pool.currentMaxGas, pool.executableTxFilter(costLimit))
for _, tx := range drops {
hash := tx.Hash()
log.Trace("Removed unpayable pending transaction", "hash", hash)
log.Error("Removed unpayable pending transaction", "hash", hash)
pool.all.Remove(hash)
pool.calculateTxsLifecycle(types.Transactions{tx}, time.Now())
}
4 changes: 4 additions & 0 deletions eth/fetcher/tx_fetcher.go
@@ -795,6 +795,10 @@ func (f *TxFetcher) scheduleFetches(timer *mclock.Timer, timeout chan struct{},
})

log.Debug("Scheduling transaction retrieval", "peer", peer, "len(f.announces[peer])", len(f.announces[peer]), "len(hashes)", len(hashes))
for _, hash := range hashes {
log.Info("Scheduling transaction retrieval", "peer", peer, "len(f.announces[peer])", len(f.announces[peer]), "len(hashes)", len(hashes), "hash", hash)
}

Comment on lines +798 to +801
🛠️ Refactor suggestion

Consider adjusting log levels and reducing log verbosity

The added logging statements provide valuable information for debugging the transaction retrieval process. However, there are a few considerations:

  1. The log.Info level might be too verbose for production environments, especially when inside a loop that could potentially generate a large number of log entries.
  2. Logging each transaction hash individually could impact performance if there are many transactions being processed.

Consider the following improvements:

  1. Use log.Debug instead of log.Info for detailed per-transaction logs:
-log.Info("Scheduling transaction retrieval", "peer", peer, "len(f.announces[peer])", len(f.announces[peer]), "len(hashes)", len(hashes), "hash", hash)
+log.Debug("Scheduling transaction retrieval", "peer", peer, "len(f.announces[peer])", len(f.announces[peer]), "len(hashes)", len(hashes), "hash", hash)
  2. Add a summary log at the Info level outside the loop:
if len(hashes) > 0 {
    log.Info("Scheduled transaction retrievals", "peer", peer, "announcements", len(f.announces[peer]), "retrievals", len(hashes))
}

These changes will maintain the detailed logging for debugging purposes while reducing the verbosity and potential performance impact in production environments.

peerAnnounceTxsLenGauge.Update(int64(len(f.announces[peer])))
peerRetrievalTxsLenGauge.Update(int64(len(hashes)))

6 changes: 6 additions & 0 deletions eth/handler.go
@@ -519,12 +519,18 @@ func (h *handler) BroadcastTransactions(txs types.Transactions) {
directCount += len(hashes)
peer.AsyncSendTransactions(hashes)
log.Debug("Transactions being broadcasted to", "peer", peer.String(), "len", len(hashes))
for _, hash := range hashes {
log.Info("Transactions being broadcasted to", "peer", peer.String(), "len", len(hashes), "hash", hash)
}
Comment on lines +522 to +524
⚠️ Potential issue

Consider optimizing the new logging statements.

While the additional logging provides valuable information for debugging, there are a few concerns:

  1. Using log.Info for each transaction might be too verbose. Consider using log.Debug instead.
  2. Logging inside loops could impact performance, especially with large transaction sets.
  3. Some information is duplicated from the existing debug logs.

Consider the following improvements:

  1. Change log.Info to log.Debug to reduce verbosity in production.
  2. Move the logging outside the loops and aggregate the information:
 for peer, hashes := range txset {
 	directPeers++
 	directCount += len(hashes)
 	peer.AsyncSendTransactions(hashes)
 	log.Debug("Transactions being broadcasted to", "peer", peer.String(), "len", len(hashes))
-	for _, hash := range hashes {
-		log.Info("Transactions being broadcasted to", "peer", peer.String(), "len", len(hashes), "hash", hash)
-	}
 }
+log.Debug("Transactions broadcast details", "directPeers", directPeers, "directCount", directCount)

 for peer, hashes := range annos {
 	annoPeers++
 	annoCount += len(hashes)
 	peer.AsyncSendPooledTransactionHashes(hashes)
 	log.Debug("Transactions being announced to", "peer", peer.String(), "len", len(hashes))
-	for _, hash := range hashes {
-		log.Info("Transactions being announced to", "peer", peer.String(), "len", len(hashes), "hash", hash)
-	}
 }
+log.Debug("Transactions announcement details", "annoPeers", annoPeers, "annoCount", annoCount)

This approach will provide the necessary information for debugging while minimizing the performance impact and log verbosity.

Also applies to: 531-533

}
for peer, hashes := range annos {
annoPeers++
annoCount += len(hashes)
peer.AsyncSendPooledTransactionHashes(hashes)
log.Debug("Transactions being announced to", "peer", peer.String(), "len", len(hashes))
for _, hash := range hashes {
log.Info("Transactions being announced to", "peer", peer.String(), "len", len(hashes), "hash", hash)
}
}
log.Debug("Transaction broadcast", "txs", len(txs),
"announce packs", annoPeers, "announced hashes", annoCount,
10 changes: 8 additions & 2 deletions eth/protocols/eth/broadcast.go
@@ -106,9 +106,12 @@ func (p *Peer) broadcastTransactions() {
done = make(chan struct{})
go func() {
log.Debug("Sending transactions", "count", len(txs))
for _, tx := range txs {
log.Info("Sending transactions", "count", len(txs), "tx", tx.Hash().Hex())
}
broadcastSendTxsLenGauge.Update(int64(len(txs)))
if err := p.SendTransactions(txs); err != nil {
log.Debug("Sending transactions", "count", len(txs), "err", err)
log.Error("Sending transactions", "count", len(txs), "err", err)
broadcastSendTxsFailMeter.Mark(1)
fail <- err
return
@@ -181,9 +184,12 @@ func (p *Peer) announceTransactions() {
done = make(chan struct{})
go func() {
log.Debug("Sending transaction announcements", "count", len(pending))
for _, tx := range pending {
log.Info("Sending transaction announcements", "count", len(pending), "tx", tx.Hex())
}
broadcastAnnoTxsLenGauge.Update(int64(len(pending)))
if err := p.sendPooledTransactionHashes(pending); err != nil {
log.Debug("Sending transaction announcements", "count", len(pending), "err", err)
log.Error("Sending transaction announcements", "count", len(pending), "err", err)
broadcastAnnoTxsFailMeter.Mark(1)
fail <- err
return
21 changes: 15 additions & 6 deletions eth/protocols/eth/handlers.go
@@ -338,12 +338,15 @@ func handleNewPooledTransactionHashes(backend Backend, msg Decoder, peer *Peer)
}
ann := new(NewPooledTransactionHashesPacket)
if err := msg.Decode(ann); err != nil {
log.Debug("Failed to decode `NewPooledTransactionHashesPacket`", "peer", peer.String(), "err", err)
log.Error("Failed to decode `NewPooledTransactionHashesPacket`", "peer", peer.String(), "err", err)
newPooledTxHashesFailMeter.Mark(1)
return fmt.Errorf("%w: message %v: %v", errDecode, msg, err)
}
// Schedule all the unknown hashes for retrieval
log.Debug("handleNewPooledTransactionHashes", "peer", peer.String(), "len(ann)", len(*ann))
for _, hash := range *ann {
log.Info("handleNewPooledTransactionHashes", "peer", peer.String(), "len(ann)", len(*ann), "hash", hash.Hex())
}
Comment on lines +341 to +349
⚠️ Potential issue

Consider the performance impact of verbose logging

The changes improve error visibility by upgrading the log level for decoding errors from Debug to Error, which is good. However, the new Info level log for each transaction hash (line 348) might be too verbose and could potentially impact performance, especially when dealing with a large number of transactions.

Consider one of the following options:

  1. Remove the loop that logs each hash individually.
  2. Change the log level from Info to Debug for individual hash logging.
  3. Add a condition to only log individual hashes when their count is below a certain threshold.

Example implementation for option 3:

 log.Debug("handleNewPooledTransactionHashes", "peer", peer.String(), "len(ann)", len(*ann))
-for _, hash := range *ann {
-    log.Info("handleNewPooledTransactionHashes", "peer", peer.String(), "len(ann)", len(*ann), "hash", hash.Hex())
+if len(*ann) <= 10 {
+    for _, hash := range *ann {
+        log.Debug("handleNewPooledTransactionHashes", "peer", peer.String(), "len(ann)", len(*ann), "hash", hash.Hex())
+    }
 }

This change will limit the logging of individual hashes to cases where there are 10 or fewer transactions, reducing the verbosity for larger batches.

📝 Committable suggestion


Suggested change

 	log.Error("Failed to decode `NewPooledTransactionHashesPacket`", "peer", peer.String(), "err", err)
 	newPooledTxHashesFailMeter.Mark(1)
 	return fmt.Errorf("%w: message %v: %v", errDecode, msg, err)
 }
 // Schedule all the unknown hashes for retrieval
 log.Debug("handleNewPooledTransactionHashes", "peer", peer.String(), "len(ann)", len(*ann))
-for _, hash := range *ann {
-	log.Info("handleNewPooledTransactionHashes", "peer", peer.String(), "len(ann)", len(*ann), "hash", hash.Hex())
-}
+if len(*ann) <= 10 {
+	for _, hash := range *ann {
+		log.Debug("handleNewPooledTransactionHashes", "peer", peer.String(), "len(ann)", len(*ann), "hash", hash.Hex())
+	}
+}
newPooledTxHashesLenGauge.Update(int64(len(*ann)))
for _, hash := range *ann {
peer.markTransaction(hash)
@@ -355,12 +358,15 @@ func handleGetPooledTransactions66(backend Backend, msg Decoder, peer *Peer) err
// Decode the pooled transactions retrieval message
var query GetPooledTransactionsPacket66
if err := msg.Decode(&query); err != nil {
log.Debug("Failed to decode `GetPooledTransactionsPacket66`", "peer", peer.String(), "err", err)
log.Error("Failed to decode `GetPooledTransactionsPacket66`", "peer", peer.String(), "err", err)
getPooledTxsFailMeter.Mark(1)
return fmt.Errorf("%w: message %v: %v", errDecode, msg, err)
}
hashes, txs := answerGetPooledTransactions(backend, query.GetPooledTransactionsPacket, peer)
log.Debug("handleGetPooledTransactions", "peer", peer.String(), "RequestId", query.RequestId, "len(query)", len(query.GetPooledTransactionsPacket), "retrieved", len(hashes))
for _, hash := range hashes {
log.Info("handleGetPooledTransactions", "peer", peer.String(), "RequestId", query.RequestId, "len(query)", len(query.GetPooledTransactionsPacket), "retrieved", len(hashes), "hash", hash.Hex())
}
Comment on lines +361 to +369
⚠️ Potential issue

Consistent logging strategy needed

Similar to the changes in handleNewPooledTransactionHashes, this function has improved error visibility by upgrading the log level for decoding errors. However, it also introduces potentially verbose logging for individual transaction hashes.

To maintain consistency and prevent performance issues, consider applying the same logging strategy as suggested for handleNewPooledTransactionHashes. For example:

 log.Debug("handleGetPooledTransactions", "peer", peer.String(), "RequestId", query.RequestId, "len(query)", len(query.GetPooledTransactionsPacket), "retrieved", len(hashes))
-for _, hash := range hashes {
-    log.Info("handleGetPooledTransactions", "peer", peer.String(), "RequestId", query.RequestId, "len(query)", len(query.GetPooledTransactionsPacket), "retrieved", len(hashes), "hash", hash.Hex())
+if len(hashes) <= 10 {
+    for _, hash := range hashes {
+        log.Debug("handleGetPooledTransactions", "peer", peer.String(), "RequestId", query.RequestId, "len(query)", len(query.GetPooledTransactionsPacket), "retrieved", len(hashes), "hash", hash.Hex())
+    }
 }

This change will limit the logging of individual hashes to cases where there are 10 or fewer transactions, reducing the verbosity for larger batches while maintaining consistency with the previous function.

📝 Committable suggestion


Suggested change

 	log.Error("Failed to decode `GetPooledTransactionsPacket66`", "peer", peer.String(), "err", err)
 	getPooledTxsFailMeter.Mark(1)
 	return fmt.Errorf("%w: message %v: %v", errDecode, msg, err)
 }
 hashes, txs := answerGetPooledTransactions(backend, query.GetPooledTransactionsPacket, peer)
 log.Debug("handleGetPooledTransactions", "peer", peer.String(), "RequestId", query.RequestId, "len(query)", len(query.GetPooledTransactionsPacket), "retrieved", len(hashes))
-for _, hash := range hashes {
-	log.Info("handleGetPooledTransactions", "peer", peer.String(), "RequestId", query.RequestId, "len(query)", len(query.GetPooledTransactionsPacket), "retrieved", len(hashes), "hash", hash.Hex())
-}
+if len(hashes) <= 10 {
+	for _, hash := range hashes {
+		log.Debug("handleGetPooledTransactions", "peer", peer.String(), "RequestId", query.RequestId, "len(query)", len(query.GetPooledTransactionsPacket), "retrieved", len(hashes), "hash", hash.Hex())
+	}
+}

getPooledTxsQueryLenGauge.Update(int64(len(query.GetPooledTransactionsPacket)))
getPooledTxsRetrievedLenGauge.Update(int64(len(hashes)))
return peer.ReplyPooledTransactionsRLP(query.RequestId, hashes, txs)
@@ -403,16 +409,19 @@ func handleTransactions(backend Backend, msg Decoder, peer *Peer) error {
var txs TransactionsPacket
if err := msg.Decode(&txs); err != nil {
handleTxsFailMeter.Mark(1)
log.Debug("Failed to decode `TransactionsPacket`", "peer", peer.String(), "err", err)
log.Error("Failed to decode `TransactionsPacket`", "peer", peer.String(), "err", err)
return fmt.Errorf("%w: message %v: %v", errDecode, msg, err)
}
log.Debug("handleTransactions", "peer", peer.String(), "len(txs)", len(txs))
for _, tx := range txs {
log.Info("handleTransactions", "peer", peer.String(), "len(txs)", len(txs), "tx", tx.Hash().Hex())
}
Comment on lines +412 to +418
⚠️ Potential issue

Improve logging consistency and nil transaction handling

The changes in this function follow a similar pattern to the previous ones, with improved error visibility and potentially verbose transaction logging. Additionally, there's a new check for nil transactions.

  1. Verbose logging:
    The Info level log for each transaction (lines 416-418) might impact performance for large transaction batches.

  2. Nil transaction handling:
    The new error log for nil transactions (line 424) is a good addition for improved error reporting.

Consider the following changes:

  1. Adjust the transaction logging to be consistent with previous functions:
 log.Debug("handleTransactions", "peer", peer.String(), "len(txs)", len(txs))
-for _, tx := range txs {
-    log.Info("handleTransactions", "peer", peer.String(), "len(txs)", len(txs), "tx", tx.Hash().Hex())
+if len(txs) <= 10 {
+    for _, tx := range txs {
+        log.Debug("handleTransactions", "peer", peer.String(), "len(txs)", len(txs), "tx", tx.Hash().Hex())
+    }
 }
  2. Consider adding a metric for nil transactions:
  1. Consider adding a metric for nil transactions:
 if tx == nil {
     handleTxsNilMeter.Mark(1)
     log.Error("handleTransactions: transaction is nil", "peer", peer.String(), "i", i)
+    nilTxCounter.Inc(1)
     return fmt.Errorf("%w: transaction %d is nil", errDecode, i)
 }

These changes will maintain consistency with previous functions and provide additional metrics for monitoring nil transactions.

Also applies to: 424-425

handleTxsLenGauge.Update(int64(len(txs)))
for i, tx := range txs {
// Validate and mark the remote transaction
if tx == nil {
handleTxsNilMeter.Mark(1)
log.Debug("handleTransactions: transaction is nil", "peer", peer.String(), "i", i)
log.Error("handleTransactions: transaction is nil", "peer", peer.String(), "i", i)
return fmt.Errorf("%w: transaction %d is nil", errDecode, i)
}
peer.markTransaction(tx.Hash())
@@ -429,7 +438,7 @@ func handlePooledTransactions66(backend Backend, msg Decoder, peer *Peer) error
var txs PooledTransactionsPacket66
if err := msg.Decode(&txs); err != nil {
pooledTxs66FailMeter.Mark(1)
log.Debug("Failed to decode `PooledTransactionsPacket66`", "peer", peer.String(), "err", err)
log.Error("Failed to decode `PooledTransactionsPacket66`", "peer", peer.String(), "err", err)
return fmt.Errorf("%w: message %v: %v", errDecode, msg, err)
}
log.Debug("handlePooledTransactions66", "peer", peer.String(), "len(txs)", len(txs.PooledTransactionsPacket))
Expand All @@ -438,7 +447,7 @@ func handlePooledTransactions66(backend Backend, msg Decoder, peer *Peer) error
// Validate and mark the remote transaction
if tx == nil {
pooledTxs66NilMeter.Mark(1)
log.Debug("handlePooledTransactions: transaction is nil", "peer", peer.String(), "i", i)
log.Error("handlePooledTransactions: transaction is nil", "peer", peer.String(), "i", i)
return fmt.Errorf("%w: transaction %d is nil", errDecode, i)
}
peer.markTransaction(tx.Hash())
4 changes: 4 additions & 0 deletions eth/protocols/eth/peer.go
@@ -426,6 +426,10 @@ func (p *Peer) RequestTxs(hashes []common.Hash) error {
id := rand.Uint64()

log.Debug("Requesting transactions", "RequestId", id, "Peer.id", p.id, "count", len(hashes))
for _, hash := range hashes {
log.Info("Requesting transaction", "RequestId", id, "Peer.id", p.id, "count", len(hashes), "hash", hash)
}

peerRequestTxsCntGauge.Update(int64(len(hashes)))

requestTracker.Track(p.id, p.version, GetPooledTransactionsMsg, PooledTransactionsMsg, id)