Conversation
Codecov Report
❌ Patch coverage is
Additional details and impacted files

```diff
@@            Coverage Diff             @@
##           master    #4235       +/-   ##
===========================================
- Coverage   57.24%   32.40%   -24.84%
===========================================
  Files         483      483
  Lines       57539    57575      +36
===========================================
- Hits        32939    18658   -14281
- Misses      19694    35584   +15890
+ Partials     4906     3333    -1573
```
❌ 6 Tests Failed:
```go
	for i := 0; i < c.workerCount; i++ {
		c.LaunchThread(func(ctx context.Context) {
			c.worker(ctx)
		})
	}
}
```
I spent some time thinking about this approach vs spawning workers on demand with a limit of 4 concurrent Goroutines, and I think this is the better approach despite being slightly less idiomatic Go, because the larger queue absorbs bursty traffic much better.
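For comparison, here is a minimal, self-contained sketch of that alternative (spawn-on-demand goroutines capped at 4 via a buffered-channel semaphore); all names are illustrative and not from this PR:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sync"
)

func main() {
	// Cap concurrency at 4 by using a buffered channel as a semaphore.
	sem := make(chan struct{}, 4)
	var wg sync.WaitGroup

	for i := 0; i < 100; i++ {
		addr := fmt.Sprintf("addr-%d", i)
		sem <- struct{}{} // blocks as soon as 4 checks are in flight
		wg.Add(1)
		go func(a string) {
			defer wg.Done()
			defer func() { <-sem }()
			_ = sha256.Sum256([]byte(a)) // stand-in for the real hashing work
		}(addr)
	}
	wg.Wait()
}
```

With this shape the semaphore itself is the only buffer, so submitters start blocking as soon as four checks are in flight, whereas the fixed pool in the diff above sits behind a large queue that can absorb bursts.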
```go
	default:
		// queue full: process synchronously to avoid dropping
		s.checker.processAddress(addr, s)
	}
```
I am not convinced about falling back to synchronous processing: the most likely bottleneck causing a backlog is CPUs executing SHA instructions, and there are a limited number of CPUs, so new requests would "jump the queue" and starve older ones stuck in the queue.
I think it is more reasonable to have no default case and instead block on both posting to the queue and ctx.Done().
Agreed. I left the sync path only as a fallback for when the checker is stopped or not started.
tsahee left a comment:
Generally seems good, small comments.
Also, I haven't yet reviewed the "big picture" of what happens when we get an update from S3.
```go
// Copyright 2026, Offchain Labs, Inc.
// For license information, see https://github.com/OffchainLabs/nitro/blob/master/LICENSE.md

package addressfilter
```
move entire package into gethexec
addressfilter/address_checker.go (outdated)
```go
const (
	defaultRestrictedAddrWorkerCount = 4
	defaultRestrictedAddrQueueSize   = 8192
)
```
Use a standard config struct and have config values for these. Also, remove NewDefaultHashedAddressChecker.
We will want to be able to play with these without having to compile a new binary.
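As a rough sketch only (field, flag, and function names here are illustrative, not this PR's final API), a config struct in the koanf/pflag style used elsewhere in nitro could look like:

```go
package addressfilter

import (
	flag "github.com/spf13/pflag"
)

// HashedAddressCheckerConfig would replace the hard-coded constants above.
type HashedAddressCheckerConfig struct {
	WorkerCount int `koanf:"worker-count"`
	QueueSize   int `koanf:"queue-size"`
}

var DefaultHashedAddressCheckerConfig = HashedAddressCheckerConfig{
	WorkerCount: 4,
	QueueSize:   8192,
}

// HashedAddressCheckerConfigAddOptions registers CLI flags so the values can be
// tuned at runtime without recompiling.
func HashedAddressCheckerConfigAddOptions(prefix string, f *flag.FlagSet) {
	f.Int(prefix+".worker-count", DefaultHashedAddressCheckerConfig.WorkerCount, "number of restricted-address worker goroutines")
	f.Int(prefix+".queue-size", DefaultHashedAddressCheckerConfig.QueueSize, "capacity of the restricted-address work queue")
}
```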
```go
	select {
	case s.checker.workChan <- workItem{addr: addr, state: s}:
		// ok
	case <-s.checker.GetContext().Done():
```
Both checker.stopped and the checker context being done can be handled together.
In both cases I think the best action is to do nothing and just mark filtered=true. The sequencer should not be sequencing any new blocks at that point, and filtering to be on the safe side means we won't be doing damage.
Agreed. Changed to the more conservative behavior with s.report(true).
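Assembled from the snippets in this thread (so a sketch, not necessarily the exact final code), the enqueue path would then look roughly like:

```go
select {
case s.checker.workChan <- workItem{addr: addr, state: s}:
	// queued: a worker will report the real result
case <-s.checker.GetContext().Done():
	// checker stopped or shutting down: conservatively mark as filtered
	s.report(true)
}
```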
@Tristan-Wilson @mahdy-nasr fyi: I moved addressfilter into execution/gethexec
Closes NIT-4299
Waits on: #4234
Changes: