How to use long int type in pygenn? #687
-
In the generated code, it is necessary to divide the block-parallel update by defining the start offset and the ID of each group of synapses, for example:
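A hypothetical sketch of that indexing scheme in Python (the group sizes and the `locate` helper are invented here purely for illustration; GeNN's actual generated kernels are CUDA and differ in detail):

```python
# Hypothetical sketch: map a flat (global) thread ID to a
# (synapse group, local ID) pair using per-group start offsets.
# These group sizes are made up for illustration.
group_sizes = [128, 4096, 32]

# Cumulative start offset of each group within the flat ID range
starts = [0]
for size in group_sizes:
    starts.append(starts[-1] + size)

def locate(global_id):
    """Return (group index, local ID within that group) for a global ID."""
    for g in range(len(group_sizes)):
        if starts[g] <= global_id < starts[g + 1]:
            return g, global_id - starts[g]
    raise ValueError("ID outside all groups")

print(locate(130))  # -> (1, 2): second group, local ID 2
```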
-
What model are you trying to run here? Intended limits are:
However, we don't have many models that test these, so 3 might be broken somewhere.
-
Oh wow, that's a big model - I suspect increasing `num_threads_per_spike` won't actually help very much at that scale. To manually select a GPU you can do this:

```python
from pygenn import GeNNModel
from pygenn.cuda_backend import DeviceSelect

...

model = GeNNModel("float", "my_model",
                  device_select_method=DeviceSelect.MANUAL,
                  manual_device_id=my_device_id)
```
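Here `my_device_id` is the integer index of the CUDA device to use. Note that CUDA's default device ordering does not necessarily match the order shown by `nvidia-smi`; setting `CUDA_DEVICE_ORDER=PCI_BUS_ID` in the environment makes the two agree.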
-
Typically, the issue of a missing CUDA backend arises when the `CUDA_PATH` environment variable is not set, or is set to the wrong CUDA installation path.
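A quick way to check what the current process sees (a minimal sketch; note that if `CUDA_PATH` was unset when pygenn itself was installed, setting it afterwards may not be enough and a reinstall may be needed):

```python
import os

# Minimal sketch: check whether CUDA_PATH is visible to this process
cuda_path = os.environ.get("CUDA_PATH")
if cuda_path is None:
    print("CUDA_PATH is not set")
elif not os.path.isdir(cuda_path):
    print(f"CUDA_PATH = {cuda_path} (but that directory does not exist!)")
else:
    print(f"CUDA_PATH = {cuda_path}")
```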