Conversation

@MrLight MrLight commented Oct 6, 2024

This implements connection pooling for the Snowflake connection.
The code today doesn't reuse the Snowflake connection; after each query the connection is dropped. This pull request adds connection pooling to keep open connections for later reuse. The pool size and the connection timeout can be configured in the PluginConfig.
When the Snowflake warehouse is under heavy load, too many queries could be forwarded to Snowflake and overload the warehouse. The pool size limits the number of concurrent queries forwarded to Snowflake, while all additional Grafana queries are held in the plugin's query queue. To drop new queries in an overload situation, a max queued queries option is available: if more queries than configured are waiting, an error is returned to the panel. This gives Snowflake a chance to recover.
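For illustration, a minimal sketch of how these options can surface in the plugin config; the field names follow the code excerpts quoted later in this thread, while the types and comments are assumptions:

    // Sketch only: pooling options in the plugin config. Field names follow
    // the excerpts later in this thread; types and wiring are assumptions.
    type PluginConfig struct {
        IntMaxOpenConnections int64 // pool size: cap on concurrent queries sent to Snowflake
        IntConnectionLifetime int64 // minutes before a pooled connection is recycled
        IntMaxQueuedQueries   int64 // waiting queries beyond this count are rejected with an error
    }
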
To implement these options in the ConfigEditor, the editor code is migrated from the legacy forms to the newer Grafana forms.

If a panel has multiple queries, the Snowflake queries are executed concurrently after this change (see the sketch below).
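
A minimal sketch of the idea, assuming a plain database/sql handle; the helper below is illustrative, not this PR's actual code:

    package plugin

    import (
        "context"
        "database/sql"
        "sync"
    )

    // runPanelQueries is an illustrative sketch: each query of a panel runs
    // in its own goroutine, drawing a connection from the shared pool.
    func runPanelQueries(ctx context.Context, db *sql.DB, queries []string) []error {
        errs := make([]error, len(queries))
        var wg sync.WaitGroup
        for i, q := range queries {
            wg.Add(1)
            go func(i int, q string) {
                defer wg.Done()
                rows, err := db.QueryContext(ctx, q)
                if err != nil {
                    errs[i] = err
                    return
                }
                errs[i] = rows.Close() // results would be scanned here in real code
            }(i, q)
        }
        wg.Wait()
        return errs
    }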

Together these changes speed up query execution and reduce Snowflake cloud costs for connection creation.

This is the first pull request; a second one implementing plugin-side caching is prepared. Please let me know if you are willing to accept such large changes into your code base.


devnied commented Oct 7, 2024

Thanks for this MR, I will review it ASAP.
For your second MR, "implementing plugin-side caching", I don't think this should be a feature of this plugin, as Snowflake already caches query results without extra cost.

@devnied devnied added the enhancement label Oct 7, 2024

MrLight commented Oct 8, 2024

Thanks for looking into this.
Regarding the plugin-side caching: I have a running test version of it. I'm doing some cleanup at the moment and will publish it in my fork first; perhaps you'd like to take a look at it.
In my opinion it really helps usability, especially when browsing back and forth or when someone sets (too) high refresh rates...
I'll ping you when it's ready... (btw, thanks for merging my other pull request)


MrLight commented Oct 11, 2024

I have published a beta release in "my" fork. If you like, you can have a look there...
preview version

@devnied devnied left a comment

Thank you for this great PR!
I have a few pieces of feedback based on my review.
I also need to test a few edge cases before I can approve it.

sonarqubecloud bot commented Nov 6, 2024

@devnied devnied added this to the 2.0.0 milestone Dec 2, 2024

MrLight commented Mar 1, 2025

I have reworked the code so that it fits into the new structure. I can't test the OAuth part, but it should work as expected.


devnied commented Mar 7, 2025

Thanks for the rebase.
Before merging, I need to check whether using the default Go connection pool settings (DB.SetMaxOpenConns, DB.SetMaxIdleConns, DB.SetConnMaxIdleTime, DB.SetConnMaxLifetime) is a better approach: https://go.dev/doc/database/manage-connections
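
For reference, a minimal sketch of those four knobs applied to a generic database/sql handle; the values are placeholders, not recommendations:

    package pool

    import (
        "database/sql"
        "time"
    )

    // configurePool applies the four standard database/sql pool settings
    // from the linked Go documentation. The values are placeholders.
    func configurePool(db *sql.DB) {
        db.SetMaxOpenConns(10)                 // hard cap on concurrently open connections
        db.SetMaxIdleConns(10)                 // idle connections kept around for reuse
        db.SetConnMaxIdleTime(5 * time.Minute) // close connections idle longer than this
        db.SetConnMaxLifetime(time.Hour)       // recycle connections older than this
    }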


MrLight commented Mar 10, 2025

> Thanks for the rebase. Before merging, I need to check whether using the default Go connection pool settings (DB.SetMaxOpenConns, DB.SetMaxIdleConns, DB.SetConnMaxIdleTime, DB.SetConnMaxLifetime) is a better approach: https://go.dev/doc/database/manage-connections

Not really sure what you are referring to; the idea is exactly that. The connection creation logic has moved into the snowflake.go file:

    // Open a pooled database handle; connections are created lazily.
    db, err := sql.Open("snowflake", connectionString)
    if err != nil {
        return nil, err
    }

    // Size the pool from the plugin settings.
    db.SetMaxOpenConns(int(config.IntMaxOpenConnections))
    db.SetMaxIdleConns(int(config.IntMaxOpenConnections))
    db.SetConnMaxLifetime(time.Duration(int(config.IntConnectionLifetime)) * time.Minute)

With this, Go creates a connection pool with multiple reusable connections.
In my production setup I have a large number of clients sending queries to Snowflake. I had cases where the warehouse was overloaded and a lot of requests were queued in Snowflake. To avoid a deadlock in Grafana, a second counter (called queryCounter) is implemented; if the queryCounter limit is exceeded, we cancel the Grafana query:

    // Reject the query when the number of queries already waiting
    // exceeds the configured limit.
    if int(qs.intMaxQueuedQueries) > 0 && int(qs.queryCounter.get()) >= int(qs.intMaxQueuedQueries) {
        err := errors.New("too many queries in queue. Check Snowflake connectivity or increase MaxQueuedQueries count")
        log.DefaultLogger.Error("Poolsize exceeded", "query", qs.qc.FinalQuery, "err", err)
        return result, err
    }

The concept is that we do not need to execute each and every query when the warehouse is overloaded; we drop all new queries until the warehouse is responsive again.
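
For context, a minimal sketch of how such a counter can be wired around query execution; the types and method names below are illustrative assumptions, not this PR's actual implementation:

    package plugin

    import (
        "context"
        "database/sql"
        "errors"
        "sync/atomic"
    )

    // atomicCounter tracks in-flight queries (sketch only).
    type atomicCounter struct{ n int64 }

    func (c *atomicCounter) inc()       { atomic.AddInt64(&c.n, 1) }
    func (c *atomicCounter) dec()       { atomic.AddInt64(&c.n, -1) }
    func (c *atomicCounter) get() int64 { return atomic.LoadInt64(&c.n) }

    type queryServer struct {
        db        *sql.DB
        counter   atomicCounter
        maxQueued int64
    }

    func (qs *queryServer) runQuery(ctx context.Context, query string) error {
        // Drop the query up front when too many are already in flight,
        // instead of letting it pile up behind an overloaded warehouse.
        if qs.maxQueued > 0 && qs.counter.get() >= qs.maxQueued {
            return errors.New("too many queries in queue")
        }
        qs.counter.inc()
        defer qs.counter.dec()

        rows, err := qs.db.QueryContext(ctx, query) // blocks until a pooled connection is free
        if err != nil {
            return err
        }
        return rows.Close()
    }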

@devnied devnied removed this from the 2.0.0 milestone Mar 10, 2025