How to run a simple Python function on the GPU using CUDA and Numba?

I have a simple piece of code that I would like to run on the GPU.
I have seen many tutorials, and it seems like I have to put @cuda.jit before my function to use it on the GPU, but it is not working.
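
For reference, the kind of minimal @cuda.jit kernel the tutorials show looks roughly like this (a sketch with a made-up add_one kernel over a NumPy array, not my actual ecdsa code); as far as I understand, these kernels operate on arrays and are launched with a [blocks, threads_per_block] configuration:

from numba import cuda
import numpy as np

@cuda.jit
def add_one(arr):
    # Each thread handles one element of the array.
    i = cuda.grid(1)
    if i < arr.shape[0]:
        arr[i] += 1.0

data = np.zeros(1024, dtype=np.float64)
threads_per_block = 128
blocks = (data.shape[0] + threads_per_block - 1) // threads_per_block
add_one[blocks, threads_per_block](data)  # launch the kernel on the GPU
print(data[:5])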

For the moment, the code below runs in 15 seconds on average; I feel it could be faster on the GPU.

This is the code I am trying to run on the GPU. How can I do it?

from numba import jit, njit, vectorize, cuda, uint32, f8, uint8
from timeit import default_timer as timer
import ecdsa

#@cuda.jit(device=True)
def gw_func(sign, w):
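    # Multiply the generator point 'sign' by the scalar w (elliptic-curve point multiplication).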
    gw = sign * w
    return gw

#@cuda.jit(device=True)
def cpu_func(sign, w):
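    # Build an ECDSA public key from the generator and the computed point.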
    gw = gw_func(sign=sign, w=w)
    ecdsa.ecdsa.Public_key(sign, gw)


start_time = timer()

sign = ecdsa.SECP256k1.generator
w = 115266572177641278964404328229048999421718285241533625253996968996219219877622


for ii in range(10000):
    cpu_func(sign=sign, w=w)

print(f'elapsed time : {timer() - start_time}')


