SeAT - πŸ’¬-general - Page 4

FremontDango 5 Dec 2025 07:58
I think notifications and journals contribute a huge part to this
I was able to shrink the turnover time for 107k tokens from 80 hrs to 25 hrs
now my server can queue 105514 tokens in the past 24 hrs
Originally I did this because I had lots of duplicate primary key errors
realized firstOrCreate is actually not atomic
Crypta Electrica 5 Dec 2025 08:23
Interesting.... I will have to have a look into that at some point
As in is there a laravel function for those inserts? Or is that something you wrote raw?
FremontDango 5 Dec 2025 08:25
I use the since-deleted CanUpsertReplaceIgnore trait that was in an old version of eveapi
it is MySQL syntax though… things would be different if there is ever a plan to move to PG
oh and a nice thing about that trait - I can do batch imports without waiting on n+1 responses
that way I chunk the data into batches of 100 and insert them all at once
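(For illustration, a minimal sketch of the kind of MySQL statement such a replace/ignore trait boils down to; the table and column names here are hypothetical, not the actual eveapi schema. INSERT IGNORE skips rows whose primary key already exists, which sidesteps the duplicate-key races a non-atomic firstOrCreate can hit, and one statement can carry a whole chunk of ~100 rows.)

```sql
-- Hypothetical journal table; one statement carries an entire chunk of rows.
INSERT IGNORE INTO wallet_journal (id, character_id, ref_type, amount, date)
VALUES
    (1001, 90000001, 'bounty_prizes',  2500000.00, '2025-12-01 10:00:00'),
    (1002, 90000001, 'market_escrow', -1200000.00, '2025-12-01 10:05:00'),
    (1003, 90000002, 'bounty_prizes',   800000.00, '2025-12-01 10:07:00');

-- The REPLACE flavour overwrites an existing row with the incoming values instead:
REPLACE INTO wallet_journal (id, character_id, ref_type, amount, date)
VALUES
    (1001, 90000001, 'bounty_prizes', 2500000.00, '2025-12-01 10:00:00');
```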
Wibla 6 Dec 2025 18:16
what kind of storage setup is this? ZFS or something else?
FremontDango 6 Dec 2025 18:17
ZFS on 14 x Seagate Exos X22
Wibla 6 Dec 2025 18:17
raidz-something?
FremontDango 6 Dec 2025 18:18
image: image.png
Wibla 6 Dec 2025 18:20
ah
Asrik 6 Dec 2025 18:50
ZFS doesn't like having a large pool like that. You might want to consider splitting it into two pools instead.. 7-disk pools..
FremontDango 6 Dec 2025 18:52
hmm thats a new thing to know
Wibla 6 Dec 2025 19:28
eh, that depends
but running databases on spinning rust is a bad idea
what kind of nvme drives are those, and how big are they?
FremontDango 6 Dec 2025 19:45
8TB KIOXIA each
Watch this video
FremontDango 6 Dec 2025 19:50
honestly it works (writes and well-indexed reads) until it doesn't (full table scans)
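(A quick way to see when a query has fallen into that full-table-scan case, sketched against a hypothetical table: in MariaDB/MySQL, an EXPLAIN row with type: ALL and no usable key means the whole table is being read.)

```sql
-- Hypothetical unindexed filter; EXPLAIN reports type: ALL, i.e. a full table scan.
EXPLAIN SELECT * FROM wallet_journal WHERE amount > 1000000;

-- With an index on the filtered column the same query becomes a range read instead.
CREATE INDEX wallet_journal_amount_idx ON wallet_journal (amount);
EXPLAIN SELECT * FROM wallet_journal WHERE amount > 1000000;
```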
Wibla 6 Dec 2025 19:51
yeah
so those kioxia drives...
how big are the partitions for log and cache?
FremontDango 6 Dec 2025 19:52
36G for log, rest for cache
Wibla 6 Dec 2025 19:52
you have 7TB of cache...?
FremontDango 6 Dec 2025 19:52
but that cache doesn't really work tho
Wibla 6 Dec 2025 19:52
yeah no shit πŸ˜‚
how big is the SeAT VM?
FremontDango 6 Dec 2025 19:54
the reason why it doesn't work remains unknown
the l2arc header size is stuck at 0
the db is like 800 GB
Wibla 6 Dec 2025 19:55
right
FremontDango 6 Dec 2025 19:55
less than 100G of code I think
Wibla 6 Dec 2025 19:55
I'd nuke the cache
FremontDango 6 Dec 2025 19:55
the main reason I don't put it straight on SSD
I once had an SSD install and it wore out at light speed
I might have done something wrong at some point
Wibla 6 Dec 2025 19:56
ehhh
maybe
what's the kioxia drive model you've got?
Asrik 6 Dec 2025 19:57
ok.. to understand right.. you have the SeAT VM on the 14-disk zfs pool?
FremontDango 6 Dec 2025 19:57
KCD8XRUG7T68
Wibla 6 Dec 2025 19:57
that's what it sounds like
FremontDango 6 Dec 2025 19:57
yeah
there are like 40 other VMs on it as well
Wibla 6 Dec 2025 19:58
okay so
Asrik 6 Dec 2025 19:58
ok.. having that large a pool will have performance issues.. The video above explains why.. and the best use for it..
Wibla 6 Dec 2025 19:58
remove the existing cache partitions, make new ones, say 200GB on each drive, then make a 3TB partition on each drive and make a mirror on those for SeAT
your drives are rated for 3 DWPD and with ~50% overprovisioning they will handle that just fine
(plus you're already doing slog so the SSD write load is not going to change much)
FremontDango 6 Dec 2025 20:00
i actually only notice very minor write load on that
ah the seat vm has cache: metadata set on it
Wibla 6 Dec 2025 20:01
ah yep
then your l2arc isn't going to be happy
πŸ™‚
FremontDango 6 Dec 2025 20:01
i think the l2arc has problems as a whole
Wibla 6 Dec 2025 20:01
no, you've set it up wrong πŸ˜„
FremontDango 6 Dec 2025 20:02
i once removed it from the pool and re-added it
and now the header size stays at 0
the only difference from before would be mfuonly=2
Wibla 6 Dec 2025 20:04
yeah
I'd do this
and see how it goes
Asrik 6 Dec 2025 20:07
are you referring to the cache pool?
FremontDango 6 Dec 2025 20:07
i wouldn't have picked this machine type if i were going to do that 😐
Wibla 6 Dec 2025 20:07
wut
I'm referring to the l2arc he added
FremontDango 6 Dec 2025 20:09
i bought this 14-HDD bare metal box for the sole reason of putting seat on HDDs
Asrik 6 Dec 2025 20:09
??? the L2ARC is the ram... how would you remove that?
Wibla 6 Dec 2025 20:09
no it's not.
ARC is ram, L2arc is on disk
so?
FremontDango 6 Dec 2025 20:10
why would i move it back to ssd
Wibla 6 Dec 2025 20:10
if you wanted any sort of performance on spinning rust, you'd be using mirrors, not raidz
I dunno... to get performance?
what kind of SSD did you use before you moved to this machine?
FremontDango 6 Dec 2025 20:12
samsung MZQLW960HMJP
i have no performance issues in 80% of use cases on this hdd setup
it just makes bad db decisions super apparent
Wibla 6 Dec 2025 20:15
heh..
I think we know what the problem is πŸ˜›
also that's a very small read-intensive drive, as far as I know, no wonder it didn't like the workload...
FremontDango 6 Dec 2025 20:21
that might explain why the hdd setup actually runs faster than that ssd setup did
Wibla 6 Dec 2025 20:27
how much memory do you have?
FremontDango 6 Dec 2025 20:28
256G on this
Wibla 6 Dec 2025 20:28
apropos arc vs l2arc
image: zfs_stats_utilization-week.png
image: zfs_stats_l2utilization-week.png
FremontDango 6 Dec 2025 20:28
64G allocated for mariadb
64G for ARC
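(Assuming the 64G for mariadb means innodb_buffer_pool_size, which is the usual knob, this is roughly how it can be checked and, on recent MariaDB/MySQL versions, resized at runtime; that it is the variable in play here is my assumption.)

```sql
-- Check the current InnoDB buffer pool size, reported in GiB.
SELECT @@innodb_buffer_pool_size / 1024 / 1024 / 1024 AS buffer_pool_gib;

-- Resize at runtime to 64 GiB (value is in bytes); persist it in my.cnf as well.
SET GLOBAL innodb_buffer_pool_size = 68719476736;
```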
Wibla 6 Dec 2025 20:39
ah
Collega 7 Dec 2025 10:13
I'm having a problem with the SRP module. We implemented the Janice API with the key I had, but the SRP value was much higher than the value I checked manually on Janice. I won't even mention the fact that, with 505 registered characters, SeAT uses 24 GB of memory ^^
Wibla 7 Dec 2025 10:21
how
how the f
this is with several thousand tokens...
image: memory-pinpoint17305425971765102597.png
Crypta Electrica 7 Dec 2025 10:35
On the SRP module: can you share a killmail and the rule setups you have observed this with? Also, 24GB of memory seems way too high; I feel like there is something else at play there
FremontDango 7 Dec 2025 10:39
you gotta pinpoint what is actually using the memory
my instance with 170k registered characters uses less than 10G
Collega 7 Dec 2025 10:50
https://zkillboard.com/kill/131729862/ I don't know if this is an API issue or something with a misconfiguration.
image: image.png
Crypta Electrica 7 Dec 2025 10:52
Can you share what your SRP is configured for
There is also the test page that can be used to show the stats we need too
Collega 7 Dec 2025 10:56
i sent it in your DMs
Crypta Electrica 7 Dec 2025 10:57
I'll have a look in a bit.. Normally don't check my DMs due to the quantity... At least you will be at the top for now
Collega 7 Dec 2025 10:57
I noticed, because I tried to contact you a few months ago, also through some friends ^^
Crypta Electrica 7 Dec 2025 10:58
Yeah my DMs are generally constantly overfull so they mostly get ignored
Elder Thorn 7 Dec 2025 11:19
am i like.. blind or is the option to impersonate a user to check permissions gone? 😄
Crypta Electrica 7 Dec 2025 11:46
It is not gone.. So long as you are admin you should have that ability
Kaper 7 Dec 2025 14:01
i have a question because i can only upgrade plugins to versions below 5.0 and i don't know why?
image: image.png
image: image.png
recursive_tree 7 Dec 2025 14:02
Plugins might not follow the same numbering scheme as the seat core
Kaper 7 Dec 2025 14:05
okay thank you, i'll fix it
i have another question because after the update crypta told me to delete that old killmail but it's not working
image: image.png
Crypta Electrica 7 Dec 2025 14:19
Try first issuing
```sql
DELETE FROM killmail_details WHERE killmail_id = 131729862;
```
then
```sql
DELETE FROM killmails WHERE killmail_id = 131729862;
```
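(The details row presumably has to be deleted before the parent killmail row because of a foreign key between the two tables; that is an assumption about the schema. If it holds, the same two deletes can also be wrapped in a transaction so that either both rows go or neither does.)

```sql
START TRANSACTION;
DELETE FROM killmail_details WHERE killmail_id = 131729862;
DELETE FROM killmails WHERE killmail_id = 131729862;
COMMIT;
```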
Kaper 7 Dec 2025 14:21
image: image.png
Crypta Electrica 7 Dec 2025 14:22
Sorry extra delete froms.. Bad copy. updated
Kaper 7 Dec 2025 14:23
image: image.png
Crypta Electrica 7 Dec 2025 14:23
Done πŸ™‚
Kaper 7 Dec 2025 14:27
i tested srp and it looks like that
image: image.png
Crypta Electrica 7 Dec 2025 14:28
Check your browser console for any error logs. But that usually means some data in your DB isn't up to date
Kaper 7 Dec 2025 14:30
okay i'll write to you in private so i don't spam here
Crypta Electrica 7 Dec 2025 14:30
Let's just head to #channel_821361546608508938, that's the best place for the convo