mjw
I’ve had a quick play and spun up containers for Glances, InfluxDB, and Grafana, which are all running. But getting communication with InfluxDB working is proving a challenge – I’m getting an unauthorized message.
Any tips?
PS. If this gets a little off-topic, I’ll create a linked topic for setting up the containers.
I’m happy to help with this but let’s create a new thread if you have additional questions.
I used these two sources:
I set this up a year or so ago and recall having issues with a couple of things. One was a pair of bugs in the Grafana dashboard template; I don’t recall the other.
It’s a complicated stack.
You need to get the Glances → InfluxDB pipe working. That’s what the glances service definition below is for, along with the configuration in the glances.conf file’s [influxdb2] section. (There’s an [influxdb] section too; I’m running InfluxDB 2, so [influxdb2] is the one I modified.) It needs to be populated with the right org, bucket, and token. If you get that part right, your data will flow into InfluxDB2.
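A sketch of that section, where org, bucket, and token are placeholders for your own values (host and port depend on where InfluxDB is reachable from Glances):

```ini
# glances.conf -- Glances' export target for InfluxDB 2.x
[influxdb2]
host=localhost
port=8086
protocol=http
# These three must agree exactly with what InfluxDB2 was initialized with
org=your-org
bucket=glances
token=your-admin-token
```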
Then you need an influx-configs file that specifies a correct URL (localhost:port), the same token that’s in the Glances config, and the same org in the Glances config.
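That’s the TOML file the influx CLI reads; roughly this, again with placeholder values:

```toml
# influx-configs -- connection profile for the influx CLI
[default]
  url = "http://localhost:8086"
  token = "your-admin-token"   # the same token as in glances.conf
  org = "your-org"             # the same org as in glances.conf
  active = true
```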
You need all the secrets files below, too.
I believe there may be additional configuration in Grafana where you point it at an InfluxDB2 source. And then you have to fix the dashboard.
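If you’d rather script that than click through the UI, Grafana can provision a data source from YAML; a sketch, assuming a Flux-queried InfluxDB 2 source and the same placeholder names as above:

```yaml
# grafana/provisioning/datasources/influxdb2.yaml (sketch)
apiVersion: 1
datasources:
  - name: InfluxDB2
    type: influxdb
    access: proxy
    url: http://influxdb:8086     # adjust host/IP to your setup
    jsonData:
      version: Flux
      organization: your-org      # same org as everywhere else
      defaultBucket: glances
    secureJsonData:
      token: your-admin-token     # and the same token once more
```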
You’ll see below that I use macvlan and per-service IPs. You can do this with everything on the same IP and just map the ports you need.
The IPs below are non-routable and are not PII. You don’t have to worry about them being posted.
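For the curious, the shape of that arrangement is roughly this; the parent interface, subnet, and address here are illustrative placeholders, not my real values:

```yaml
# docker-compose.yml fragment (sketch) -- macvlan with a fixed IP per service
networks:
  lan:
    driver: macvlan
    driver_opts:
      parent: eth0                # the host NIC the macvlan rides on
    ipam:
      config:
        - subnet: 192.168.10.0/24

secrets:
  influxdb_admin_token:
    file: ./secrets/influxdb_admin_token

services:
  glances:
    image: nicolargo/glances:dev
    networks:
      lan:
        ipv4_address: 192.168.10.50
    secrets:
      - influxdb_admin_token
```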
This really is a pain to set up. I wish I knew an easier approach. Maybe @scidoner can tell us more about what they do.
Sure thing - I am running Zabbix. My entire infrastructure is monitored with it: network gear (including load balancers) via SNMP templates, and Windows and Linux via the v2 agent. VMware is also connected via vSphere.
Docker containers are natively supported by the Zabbix v2 agent, so it’s just a matter of adding the template to the Docker nodes and Zabbix does the rest.
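For anyone following along, the agent-side config amounts to a couple of lines (server name is a placeholder; the endpoint shown is the plugin’s default), plus attaching the “Docker by Zabbix agent 2” template in the UI:

```ini
# /etc/zabbix/zabbix_agent2.conf (sketch)
Server=zabbix.example.com
ServerActive=zabbix.example.com
# Docker plugin endpoint; the zabbix user needs read access to the
# socket (e.g. membership in the docker group)
Plugins.Docker.Endpoint=unix:///var/run/docker.sock
```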
Zabbix’s old-school charts aren’t as pretty as Grafana’s, but boy was it simpler to stand up.
Regarding the issue at hand, it has started to be an absolute turd again (as expected), so I have capped the container’s upper memory limit and we’ll see what happens from here.
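(If anyone wants to do the same, it’s a one-liner; the container name here is a placeholder for yours:)

```sh
# Cap the container at 2 GiB; setting --memory-swap equal to --memory
# prevents it from spilling the difference into swap
docker update --memory 2g --memory-swap 2g roonserver
```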
I wonder if that may be an easier start for @mjw. What I’m doing isn’t easy to replicate. At least for single-machine monitoring of containers, Zabbix may be more turnkey.
Here’s 6 hours’ worth of prod vs. EA. They’re virtually indistinguishable. Check out those synchronized CPU spikes, though. Makes you wonder if they’re doing something like syncing Qobuz on clock-time intervals, which would basically turn Roon into a DDoS attacker. Hmmmm…
It’s been a long time since I set this up. I’m glad you caught that I’m using the dev branch of Glances. I recall needing to switch to that branch.
You’re looking at this the right way - start by getting Glances running and pushing data into InfluxDB.
A few additional things:
Your InfluxDB instance uses an org of ateles. Have you also set up the Glances config file and made sure that the org specified there is “ateles”? This is how you establish agreement between Glances and Influx on the org and bucket name.
You’ve specified data retention of 1w. That seems short to me – Influx will delete everything older than a week. You can easily set the retention in InfluxDB2: once you’re able to log into the web UI, it’s just a setting on the bucket. Personally, I’d take it out of this script, let it start at “forever”, and adjust it later if you find your InfluxDB is getting larger than you want it to.
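If you’d rather do it from the CLI than the web UI, something like this sketch works (the bucket ID is whatever influx bucket list reports; a retention of 0 means keep forever):

```sh
# Find the bucket's ID, then lift its retention period
influx bucket list
influx bucket update --id <bucket-id> --retention 0
```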
I’m not sure how you’re setting up the InfluxDB username/password/admin token. The admin token is very important and may be the source of your issue. Username and Password are the credentials you’ll use to log into the InfluxDB web UI. It sounds like you may already be able to do that. The admin token is the token that Glances will use as its credential. It’s set in Glances’ config file in the InfluxDB2 section but you also have to set the same value for InfluxDB2.
My approach stores these in secrets files, using Docker Compose’s “secrets” capability. Since you’re not using secrets files, you should probably just put the values you want directly in your podman commands – the actual quoted literals rather than references to secrets. And then make very sure that the value you’re passing to InfluxDB2 as the admin token is the exact value that you’ve got in your Glances config.
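Concretely, since the influxdb:2 image does its first-time setup from environment variables, your podman command might look something like this sketch – every value below is a placeholder, except the requirement that the admin token match glances.conf exactly:

```sh
podman run -d --name influxdb \
  -p 8086:8086 \
  -v influxdb-data:/var/lib/influxdb2 \
  -e DOCKER_INFLUXDB_INIT_MODE=setup \
  -e DOCKER_INFLUXDB_INIT_USERNAME=admin \
  -e DOCKER_INFLUXDB_INIT_PASSWORD='change-me' \
  -e DOCKER_INFLUXDB_INIT_ORG=ateles \
  -e DOCKER_INFLUXDB_INIT_BUCKET=glances \
  -e DOCKER_INFLUXDB_INIT_ADMIN_TOKEN='same-token-as-glances-conf' \
  docker.io/influxdb:2
```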
I hope that all makes sense. I wish I believed that this is the last of your troubles. It isn’t. But you’re on the right track.
mjw
Well, this seemed to be the crucial change. I’ve not gone any further than seeing this:
I wonder if I should try to get you an exported copy of my Grafana dashboard. I can’t remember what issues I fixed or how I fixed them. If you run into trouble with the Grafana display, I can try to do that.
mjw
I noticed that ~/.config/containers/influxdb/config/influx-configs had a unique admin token, so I copied it into the relevant Podman script environment variable, and did the same for the InfluxDB username and password.
Adding the dashboard was simplicity itself: I clicked Add under Dashboards and typed 23211 in the appropriate field.