AccelerationService

  • The AccelerationService API helps identify the optimal acceleration configuration for TensorFlow Lite models.

  • It provides methods to create the service, generate and select the best configuration, and validate configurations through benchmarking.

  • generateBestConfig automatically finds the best-performing, accuracy-validated acceleration configuration.

  • validateConfig and validateConfigs enable testing specific configurations or a collection of them.

  • Developers can create an AccelerationService instance using a context and an optional executor.

public class AccelerationService extends Object

Acceleration Service API

Public Method Summary

static AccelerationService create(Context context)
    Creates an AccelerationService instance.

static AccelerationService create(Context context, Executor executor)
    Creates an AccelerationService instance; validation tests run on the given executor.

Task<ValidatedAccelerationConfigResult> generateBestConfig(Model model, ValidationConfig validationConfig)
    Generates a list of candidate AccelerationConfigs and runs the Mini-benchmark over them.

Task<ValidatedAccelerationConfigResult> selectBestConfig(Model model, Iterable<AccelerationConfig> configs, ValidationConfig validationConfig)
    Runs the Mini-benchmark over a collection of configs.

Task<ValidatedAccelerationConfigResult> validateConfig(Model model, AccelerationConfig accelerationConfig, ValidationConfig validationConfig)
    Runs the Mini-benchmark with the given model, accelerationConfig, and validationConfig.

Task<Iterable<ValidatedAccelerationConfigResult>> validateConfigs(Model model, Iterable<AccelerationConfig> configs, ValidationConfig validationConfig)
    Runs the Mini-benchmark over a collection of AccelerationConfigs.


Public Methods

public static AccelerationService create (Context context)

Creates an AccelerationService instance.

public static AccelerationService create (Context context, Executor executor)

Creates an AccelerationService instance. Validation tests will run on the given executor.
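As a hedged sketch, creating the service with a dedicated background executor might look like the following. The `context` variable and the choice of a single-thread executor are assumptions for illustration, not part of this reference:

```java
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

// Run validation benchmarks off the calling thread.
Executor benchmarkExecutor = Executors.newSingleThreadExecutor();

// `context` is an Android Context, e.g. from an Activity or Application.
AccelerationService service = AccelerationService.create(context, benchmarkExecutor);
```

The single-argument overload, `AccelerationService.create(context)`, leaves the choice of executor to the service.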

public Task<ValidatedAccelerationConfigResult> generateBestConfig (Model model, ValidationConfig validationConfig)

Generates a list of candidate AccelerationConfigs and runs the Mini-benchmark over them. Among the configs that pass the accuracy checks, returns the one with the best performance. Returns a Task with a null result if no config passes the validation check, and a failed Task if benchmarking itself fails.
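A minimal sketch of consuming the returned Task, using the standard Play services Task listeners. It assumes `service`, `model`, and `validationConfig` have been created elsewhere; how a successful result is applied to an interpreter depends on the rest of the acceleration API and is not shown:

```java
service.generateBestConfig(model, validationConfig)
    .addOnSuccessListener(result -> {
        if (result == null) {
            // No candidate passed the accuracy check; keep the default (CPU) setup.
        } else {
            // `result` is the fastest accuracy-validated configuration.
        }
    })
    .addOnFailureListener(e -> {
        // Benchmarking itself failed; fall back to the default configuration.
    });
```

Note the two distinct failure modes documented above: a successful Task carrying null (no config validated) versus a failed Task (benchmarking error).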

public Task<ValidatedAccelerationConfigResult> selectBestConfig (Model model, Iterable<AccelerationConfig> configs, ValidationConfig validationConfig)

Runs the Mini-benchmark over a collection of configs. Among the configs that pass the accuracy checks, returns the one with the best performance. Returns a Task with a null result if no config passes the validation check, and a failed Task if benchmarking itself fails.
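A sketch of benchmarking only a hand-picked set of candidates rather than the auto-generated list. How each AccelerationConfig is built (GPU, NNAPI, or CPU variants) depends on the concrete config classes in the package; `buildCandidateConfigs()` below is a hypothetical helper standing in for that step:

```java
// Hypothetical helper: returns the configs you want to compare.
Iterable<AccelerationConfig> candidates = buildCandidateConfigs();

service.selectBestConfig(model, candidates, validationConfig)
    .addOnSuccessListener(best -> {
        if (best != null) {
            // `best` passed accuracy validation and had the best performance.
        }
    });
```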

public Task<ValidatedAccelerationConfigResult> validateConfig (Model model, AccelerationConfig accelerationConfig, ValidationConfig validationConfig)

Runs the Mini-benchmark with the given model, accelerationConfig, and validationConfig. The benchmark result is also cached by the acceleration service.
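A sketch of validating a single, specific configuration before enabling it, assuming `service`, `model`, `accelerationConfig`, and `validationConfig` exist:

```java
service.validateConfig(model, accelerationConfig, validationConfig)
    .addOnSuccessListener(result -> {
        // Inspect `result` to decide whether to enable this configuration.
        // The result is cached by the service, so repeated validation is cheaper.
    })
    .addOnFailureListener(e -> {
        // The benchmark could not run for this configuration.
    });
```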

public Task<Iterable<ValidatedAccelerationConfigResult>> validateConfigs (Model model, Iterable<AccelerationConfig> configs, ValidationConfig validationConfig)

Runs the Mini-benchmark over a collection of AccelerationConfigs and returns a result per config.
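Unlike selectBestConfig, this method returns every result rather than only the best one. A sketch of iterating over them, with the same assumptions as the examples above:

```java
service.validateConfigs(model, candidates, validationConfig)
    .addOnSuccessListener(results -> {
        for (ValidatedAccelerationConfigResult result : results) {
            // Inspect each config's benchmark and accuracy outcome,
            // e.g. to log a comparison or apply custom selection logic.
        }
    });
```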