
sesolve slower for larger matrices #1020

@etienney

Description

Hi,

TLDR:
sesolve runs slower on a projection of my equation onto a smaller subspace, even though both problems compute the same thing.

More details:
I have a Schrödinger equation to solve for a squeezing operator of order n, H = 1j*(adag.powm(n) - a.powm(n)).
I start from the vacuum state, and one can convince oneself that the dynamics then stays in the subspace spanned by {|0>, |n>, |2n>, ..., |(N-1)n>}, if one truncates the problem to the Fock states {|0>, |1>, |2>, ..., |(N-1)n>}.
I thought that, introducing the projector P from the Fock space onto the subspace spanned by {|0>, |n>, |2n>, ..., |(N-1)n>}, I could save some computing time by projecting my Hamiltonian onto that subspace, i.e. np.array(np.array(projector_fock(n, N)) @ np.array(Hsqueeze(N, n)) @ np.array(projector_fock(n, N)).conj().T), and applying it to an initial vacuum state of size N rather than (N-1)*n + 1.
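The invariance claim above is easy to check numerically; here is a small plain-numpy sketch (independent of dynamiqs, with illustrative dimensions). The squeezing Hamiltonian only couples |k> to |k ± n>, so its columns at multiples of n have support only on multiples of n:

```python
import numpy as np

def check_invariant_subspace(n, N):
    """Check that H = 1j*(adag^n - a^n) maps span{|0>, |n>, ..., |(N-1)n>} into itself."""
    dim = n * (N - 1) + 1
    # Dense annihilation operator on the truncated Fock space.
    a = np.diag(np.sqrt(np.arange(1, dim)), k=1)
    H = 1j * (np.linalg.matrix_power(a.conj().T, n) - np.linalg.matrix_power(a, n))

    multiples = np.arange(0, dim, n)
    outside = np.ones(dim, dtype=bool)
    outside[multiples] = False
    # H|k*n> must have no amplitude outside the multiples of n.
    for k in multiples:
        assert np.allclose(H[outside, k], 0.0)
    return True

print(check_invariant_subspace(3, 6))  # True
```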

The result is that the simulation takes more time with the projected problem. While not a bug, I guess this is a performance problem: we would expect the projected problem to be at least as fast.

A minimal working example is the following:

import jax.numpy as jnp
import numpy as np

import dynamiqs as dq

dq.set_precision('double')


def projector_fock(n, N):
    """Projector from the Fock space onto the subspace spanned by {|0>, |n>, |2n>, ..., |(N-1)n>}."""
    size_fock_space = n * (N - 1) + 1
    size_working_space = N
    P = jnp.zeros((size_working_space, size_fock_space), dtype=jnp.complex128)
    for i in range(size_working_space):
        P = P.at[i, i * n].set(1.0)
    return P

def Hsqueeze(N, n):
    a = dq.destroy(n*(N-1) + 1)
    adag = dq.create(n*(N-1) + 1)
    return 1j*(adag.powm(n) - a.powm(n))

def start_psi(size):
    psi0 = jnp.zeros((size, 1), dtype=jnp.complex128)
    psi0 = psi0.at[0, 0].set(1.0)
    return psi0

def run_Burgarth_proj(tf, N, n):
    t_span = jnp.linspace(0, tf, 200)
    Hproj = np.array(projector_fock(n, N)) @ np.array(Hsqueeze(N, n)) @ np.array(projector_fock(n, N)).conj().T
    print("size : ", Hproj.shape)
    res = dq.sesolve(Hproj, start_psi(N), t_span, method=dq.method.Tsit5(atol=1e-12, rtol=1e-12, max_steps=None))
    return res

def run_Burgarth(tf, N, n):
    t_span = jnp.linspace(0, tf, 1000)
    print("size : ", Hsqueeze(N, n).shape)
    res = dq.sesolve(Hsqueeze(N, n), start_psi(n * (N - 1) + 1), t_span, method=dq.method.Tsit5(atol=1e-14, rtol=1e-14, max_steps=None))
    return res

if __name__ == "__main__":
    tf = 0.6
    n = 3
    N = 500
    res1 = np.array(run_Burgarth(tf, N, n).states[-1])
    res2 = np.array(run_Burgarth_proj(tf, N, n).states[-1])
    difference = jnp.linalg.norm(projector_fock(n, N) @ res1 - res2)
    print(f"Norm of the difference between res1 and res2: {difference}")
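As an aside, the projection step can also be written without the repeated np.array wrapping, staying in jax.numpy throughout (a sketch; project_hamiltonian is a hypothetical helper name, and the behavior should be identical to the expression in run_Burgarth_proj):

```python
import jax.numpy as jnp

def project_hamiltonian(H, P):
    # P has shape (subspace_dim, full_dim); returns the (subspace_dim, subspace_dim)
    # restriction P @ H @ P^dagger of H to the subspace.
    H = jnp.asarray(H)
    P = jnp.asarray(P)
    return P @ H @ P.conj().T
```

Usage would be e.g. Hproj = project_hamiltonian(np.array(Hsqueeze(N, n)), projector_fock(n, N)).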

The results are the following:
size : (1498, 1498)
|████| 100.0% ◆ elapsed 5.95s ◆ remaining 0.00ms
size : (500, 500)
|████| 100.0% ◆ elapsed 01m25s ◆ remaining 0.00ms
Norm of the difference between res1 and res2: 2.842256064743062e-07

The difference in computation time is large and stays large as N increases. You can see that the same thing is actually computed (with smaller atol and rtol this difference decreases further).
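For scale, a back-of-the-envelope estimate of the dense matrix-vector cost per solver step (plain arithmetic, independent of dynamiqs, and assuming both operators were stored densely, which may not be how dynamiqs represents them) suggests the projected problem should be roughly an order of magnitude cheaper per step, which makes the observed ~14x slowdown all the more surprising:

```python
# Dense matvec cost scales as dim^2, so each solver step of the projected
# problem should be much cheaper than a step of the full problem.
n, N = 3, 500
dim_full = n * (N - 1) + 1  # 1498
dim_proj = N                # 500
ratio = (dim_full / dim_proj) ** 2
print(f"per-step dense matvec cost ratio (full / projected): {ratio:.1f}x")  # ≈ 9.0x
```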

Do you know why I see this surprising behavior?
Have a nice day !

Metadata

Labels: performance (Something could be faster)