Running Locally #2
Thank you! There are many of us trying to do this, it seems. Sure, what seems wrong? It should be very clear that it's running. What do you see in the terminal? There were (…) That would be very appreciated, thank you very much! I'll be more helpful once I'm able to be in front of my laptop.
I ran it differently. From the /satonomics/parser dir I ran `cargo build --release`. It's been running for 3-4 days, but I'm a bit unclear about where it's storing the data, how to see it, and how to measure progress. I see this output at the terminal: `2024-09-03 05:03:05 - binance: fetch 1mn`. Are you planning to use docker containers in the future?
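For anyone following along, a minimal sketch of that build-and-run flow, assuming the binary lands under `target/release` with the crate's name (the `parser` binary name is an assumption, check the repo's README or run script for the exact invocation):

```sh
cd satonomics/parser

# Compile an optimized binary
cargo build --release

# Run it; the binary name here is assumed from the crate directory.
# `cargo run --release` builds and runs in a single step instead.
./target/release/parser
```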
It's very weird, your output reminds me of a bug that I fixed a while ago, are you sure that you're up to date? I just ran the parser with a clean slate and everything seems fine on my end. What you should see is something like this:
Yes, docker support is definitely on the roadmap! Just have a few things to do first. EDIT: Oh, and the datasets are stored in …
Thanks! I have it running now: the parser is running, the server is running, and the app is available in Cloudflare. Super cool. If the primary goal is to build this system and make it FOSS for anyone to run, how do you view the need for the server? Is it to improve your ability to develop on a dedicated machine, making it easier for you to push new updates?
Another question: are you planning to create UTXO-based profit and loss metrics, coin days destroyed, and similar metrics? I started working on building a historical view of all UTXOs by going through each block, recording inputs, outputs, and the price at the time UTXOs are created and spent. It has been a bit difficult to get that working correctly. I see you have a UTXO folder, perhaps it's planned for the future. Great work by the way, this is awesome.
Amazing! As you saw, the app is a bit different from the main instance; I'm still working on it before doing an official release. The server is needed for several things:
Not really sure what you mean, could you provide an example? Most of the charts, and thus datasets, that aren't in the … For example:
But many more datasets related to inputs and outputs will come with time. And thank you! EDIT: You can see a list of all the available datasets here, though I would recommend opening the link in a Firefox-based browser for improved visibility.
I'm really interested in this project as well! I could write a Dockerfile and compose file in no time. I just need this new version you said would hopefully come soon :)
Regarding the UTXO metrics, I am thinking of profit-and-loss-based UTXO metrics; Glassnode.com has these, for example the percentage of UTXOs in profit. Maybe you already have this in there and I just didn't see it yet. I need to spend more time looking at all the metrics you do have. Quite impressive, keep it up!
Dropped 300k sats in your geyser! LFG!
Right! Then yes, they're here too, and you can have raw numbers (absolute) and percentages (relative):
The last two are the same since it's for the whole supply (all UTXOs) anyway. Now if you want those but for the STH (short-term holders):
Though I do need to make the names clearer
Thank you so much! Really appreciate it.
You are welcome dude! I'm looking forward to the next release with the docker containers.
Hey @mroxso! Happy to say that the new version has been released 🤙
When initially running the parser, as one would expect, it takes some time to process the chain and produce the datasets. After it finishes, if you restart it, or if it crashes and restarts, is it designed to pick up where it left off, or does the parser start over from the beginning?
To answer your question: it is designed to pick up where it left off, but it all depends on the scenario. If you stop it properly (Ctrl+C, or kill PID as of a recent update) and run it again, it will be fine. But there aren't only datasets, there are also databases which are exported and used, and they only hold the latest state (otherwise you'd need many, many TB of disk space to store everything). If they get corrupted (by killing the program during their export), the program will detect that and recompute them, which takes pretty much the same time as running the program for the first time. If you run it after an update and there are no new datasets, it will be all good.
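To make the "stop it properly" part concrete, a minimal sketch (the process name is a placeholder; adjust it to however you launched the parser):

```sh
# Find the parser's PID (the "parser" pattern is a placeholder)
pgrep -f parser

# Graceful stop: SIGTERM (the default signal sent by kill) or SIGINT
# (what Ctrl+C sends) gives the program a chance to finish its work
# and exit cleanly
kill <PID>        # SIGTERM
kill -INT <PID>   # SIGINT, same as Ctrl+C

# Avoid SIGKILL: it cannot be caught, so killing the process mid-export
# is exactly the database-corruption scenario described above
# kill -9 <PID>
```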
Right on, thanks for replying so quickly. I'll keep playing around, trying to get into the details of how you designed it all. Good stuff. Any updates on when the docker integration will land?
Hopefully soon, but honestly it's better to run it natively: you only need Rust, and the performance will be better.
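A minimal sketch of what "you only need Rust" amounts to, using the official rustup installer:

```sh
# Install the Rust toolchain (cargo included) via rustup
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Then build and run the parser/server natively, e.g.
cargo build --release
```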
@mroxso did you have a chance to look into it? I'm close but not quite there yet.
Nah sorry. Had no time yet. Do you have a Work in Progress branch? Maybe I can take a look at it.
No worries, and yes, but only the parser container, not yet the server (which will be much easier), and it's not in a branch but in the docker folder.
It seems to work, but I haven't yet found how to set ulimit via docker run; my usual values make it crash. What's really important, and that's why there is an exec, is that it responds correctly to SIGINT (Ctrl+C) and SIGTERM (kill) so it doesn't corrupt the databases. I think there is a timeout when stopping a container, which kills it if it's taking too long? If there's a way to increase it that would be great, to avoid any frustration from users.
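For reference, a rough sketch of the two knobs mentioned here, with placeholder image and container names (kibo-parser is an assumption, not the project's actual name). Using the exec form of ENTRYPOINT, rather than wrapping the binary in a shell, is what lets the process receive those signals directly as PID 1.

```sh
# Raise the open-files limit for the container (soft:hard values are examples)
docker run --ulimit nofile=65536:65536 --name kibo-parser kibo-parser-image

# Give the container more time to shut down cleanly; `docker stop` sends
# SIGTERM first and only escalates to SIGKILL after the timeout (default 10s)
docker stop -t 600 kibo-parser

# The docker compose equivalent of the longer timeout is `stop_grace_period`
# in the service definition (e.g. stop_grace_period: 10m)
```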
I took a look at it. For now, I think compiling at runtime is okay. You can take a look at the PR (#5).
After cloning the repository onto my system, I run the parser, giving it my bitcoin node credentials, datadir, etc. The parser starts but immediately says datasets are missing, though it continues to run. I then start the server and it panics, saying it can't find datasets and some directories. Is it expected for the parser to report missing datasets because it hasn't created them yet? Will the server also crash until the parser has created some minimum amount of datasets?
So when it says a dataset is missing, it means that it'll need to compute it (or recompute it if the version changed), which is one of the reasons why it can restart from the beginning. I understand how it can be confusing, would you word it differently? When it panics it's because the parser hasn't had the time to generate a particular file (…)
Got it: because it just started, it's looking for certain metrics/files that haven't been created yet but will be created later. Seeing "missing" on the initial run had me thinking maybe I missed something or had a misconfiguration. All good, thanks for confirming. For this build I used ./run.sh, good to know it restarts.
Where can I find the code that orders the metric list for the front end? The metric tree in the sidebar for the charts. Thanks!
It's here: https://github.com/kibo-money/kibo/blob/main/website/scripts/options.js (…) but please contact me on nostr for something like this.
Great project! I've been working on something similar, and I think you've got a better approach here.
I'm trying to get this running locally on my server; the parser is running, but I'm not sure it's working correctly.
Are there step by step instructions to get it fully up and running?
I also see you are trying to get funding for a server; after I get this running and unpack how you built it, maybe I can help out with the server.