loki2.inference.inference_disk

Loki2 inference method for patch-wise inference on a patched test set or a whole WSI.

Detect cells with our networks. The patch dataset must meet the following requirements: the patch size must be 1024, with an overlap of 64.
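The patch grid implied by these requirements can be sketched as follows. The sliding-window convention (stride = patch size minus overlap) is an illustrative assumption, not taken from the module's own patch extraction:

```python
import math

def patch_grid(width: int, height: int, patch_size: int = 1024, overlap: int = 64):
    """Estimate the number of patches needed to tile a WSI region.

    Assumes a simple sliding window where consecutive patches advance by
    (patch_size - overlap) pixels -- an illustrative convention, not
    necessarily the one used by loki2's patch extraction.
    """
    stride = patch_size - overlap  # 960 px for the required 1024/64 setting
    n_x = math.ceil(max(width - overlap, 1) / stride)
    n_y = math.ceil(max(height - overlap, 1) / stride)
    return n_x, n_y

# A 10,000 x 10,000 px region needs an 11 x 11 patch grid at 1024/64.
print(patch_grid(10_000, 10_000))
```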

Module Contents

loki2.inference.inference_disk.current_dir
loki2.inference.inference_disk.project_root
class loki2.inference.inference_disk.CellViTInference(model_path: pathlib.Path | str, gpu: int, classifier_path: pathlib.Path | str | None = None, binary: bool = False, batch_size: int = 8, patch_size: int = 1024, overlap: int = 64, geojson: bool = False, graph: bool = False, compression: bool = False, subdir_name: str | None = None, enforce_mixed_precision: bool = False)

Cell Segmentation Inference class.

After setup, a WSI can be processed by calling the process_wsi method.

Parameters:
  • model_path (Union[Path, str]) – Path to model checkpoint

  • gpu (int) – CUDA GPU id to use

  • classifier_path (Union[Path, str], optional) – Path to a classifier (.pth) used to replace the UniSeg classification results with a new classification scheme. Defaults to None.

  • binary (bool, optional) – If just a binary detection/segmentation should be performed. Cannot be used with classifier. Defaults to False.

  • batch_size (int, optional) – Batch-size for inference. Defaults to 8.

  • patch_size (int, optional) – Patch-Size. Defaults to 1024.

  • overlap (int, optional) – Overlap between patches. Defaults to 64.

  • geojson (bool, optional) – If a geojson export should be performed. Defaults to False.

  • graph (bool, optional) – If a graph export should be performed. Defaults to False.

  • compression (bool, optional) – If a snappy compression should be performed. Defaults to False.

  • subdir_name (str, optional) – If provided, a subdir with the given name is created in the cell_detection folder. Helpful if you need to store different cell detection results next to each other. Defaults to None (no subdir).

  • enforce_mixed_precision (bool, optional) – Use PyTorch autocasting with dtype float16 to speed up inference. Also suitable for networks trained with amp. Can be used to enforce amp inference even for networks trained without amp; otherwise, the network's own setting is used. Defaults to False.

logger

Logger for logging events.

Type:

Logger

model

The model used for inference.

Type:

nn.Module

run_conf

Configuration for the run.

Type:

dict

inference_transforms

Transforms applied during inference.

Type:

Callable

mixed_precision

Flag indicating if mixed precision is used.

Type:

bool

num_workers

Number of workers used for data loading.

Type:

int

model_path

Path to the model checkpoint.

Type:

Path

device

Device used for inference.

Type:

str

batch_size

Batch size used for inference.

Type:

int

patch_size

Size of the patches used for inference.

Type:

int

overlap

Overlap between patches.

Type:

int

geojson

Flag indicating if a geojson export should be performed.

Type:

bool

graph

Flag indicating if a graph export should be performed.

Type:

bool

compression

Flag indicating if snappy compression should be performed. Defaults to False.

Type:

bool

subdir_name

Name of the subdirectory for storing cell detection results.

Type:

str

label_map

Label map for cell types

Type:

dict

classifier

Classifier module if provided. Defaults to None.

Type:

nn.Module

binary

If just a binary detection/segmentation should be performed. Defaults to False.

Type:

bool

model_arch

Model architecture as str

Type:

str

ray_actors

Number of ray actors

Type:

int

_instantiate_logger() None

Instantiate logger

_load_model() None

Load the model from the checkpoint and load its state_dict

_load_classifier(classifier_path: Union[Path, str]) None

Load the classifier if provided

_get_model(model_type: Literal["CellViT", "CellViTSAM"]) Union[CellViT, CellViTSAM]

Return the trained model for inference

_load_inference_transforms() None

Load the inference transformations from the run_configuration

_setup_amp(enforce_mixed_precision: bool = False) None

Setup automated mixed precision (amp) for inference

_setup_worker() None

Setup the worker for inference

process_wsi(wsi: WSI, resolution: float = 0.25) None

Process WSI file

apply_softmax_reorder(predictions: dict) dict

Apply softmax and reorder the predictions

_post_process_edge_cells(cell_list: List[dict]) List[int]

Use the CellPostProcessor to remove duplicate cells and merge cells split by the patch overlap
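Because patches overlap, the same cell can be detected twice near patch borders. A minimal stand-in for this merging step, which deduplicates by centroid distance, could look like the following; the centroid-distance criterion and the 'centroid' dict key are illustrative simplifications, not the CellPostProcessor's actual logic:

```python
import math

def dedup_edge_cells(cells: list[dict], min_dist: float = 10.0) -> list[int]:
    """Return indices of cells to keep, dropping detections whose centroid
    lies within `min_dist` pixels of an already-kept cell.

    Illustrative stand-in for CellPostProcessor's overlap merging; each
    cell dict is assumed to carry a 'centroid': (x, y) entry.
    """
    keep: list[int] = []
    for i, cell in enumerate(cells):
        cx, cy = cell["centroid"]
        if all(
            math.hypot(cx - cells[j]["centroid"][0], cy - cells[j]["centroid"][1]) >= min_dist
            for j in keep
        ):
            keep.append(i)
    return keep

cells = [
    {"centroid": (100.0, 100.0)},
    {"centroid": (103.0, 101.0)},  # duplicate of the first, from the overlap zone
    {"centroid": (500.0, 250.0)},
]
print(dedup_edge_cells(cells))  # keeps indices 0 and 2
```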

_reallign_grid(cell_dict_wsi: list[dict], cell_dict_detection: list[dict], rescaling_factor: float) Tuple[list[dict], list[dict]]

Realign the grid if interpolation was used (including target_mpp_tolerance)
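When patches were interpolated to a target MPP, realignment amounts to scaling every coordinate back onto the original grid by the rescaling factor. A sketch of that core operation, where the 'contour' key and (x, y) point layout are assumptions about the cell dict:

```python
def rescale_cells(cell_dict: list[dict], rescaling_factor: float) -> list[dict]:
    """Scale cell contour coordinates back onto the original WSI grid.

    Assumes each cell carries a 'contour' of (x, y) points; the actual
    dict layout in loki2 may differ.
    """
    for cell in cell_dict:
        cell["contour"] = [
            (x * rescaling_factor, y * rescaling_factor) for x, y in cell["contour"]
        ]
    return cell_dict

cells = [{"contour": [(10.0, 20.0), (30.0, 40.0)]}]
print(rescale_cells(cells, 0.5))  # [{'contour': [(5.0, 10.0), (15.0, 20.0)]}]
```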

_convert_json_geojson(cell_list: list[dict], polygons: bool = False) List[dict]

Convert a list of cells to a geojson object
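A minimal version of this conversion, mapping each cell to a GeoJSON Feature (Polygon when polygons=True, Point otherwise), might look like this; the 'contour' and 'centroid' keys are assumptions about the cell dict layout:

```python
import json

def cells_to_geojson(cell_list: list[dict], polygons: bool = False) -> list[dict]:
    """Convert cells to GeoJSON Features.

    With polygons=True each cell's contour becomes a closed Polygon;
    otherwise its centroid becomes a Point. The 'contour'/'centroid'
    keys are assumptions, not necessarily loki2's actual layout.
    """
    features = []
    for cell in cell_list:
        if polygons:
            ring = [list(p) for p in cell["contour"]]
            ring.append(ring[0])  # GeoJSON polygon rings must be closed
            geometry = {"type": "Polygon", "coordinates": [ring]}
        else:
            geometry = {"type": "Point", "coordinates": list(cell["centroid"])}
        features.append({"type": "Feature", "geometry": geometry, "properties": {}})
    return features

cells = [{"contour": [(0, 0), (10, 0), (10, 10)], "centroid": (6.7, 3.3)}]
print(json.dumps(cells_to_geojson(cells, polygons=True)))
```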

_check_wsi(wsi: WSI, resolution: float = 0.25) None

Check whether the provided patched WSI has the right settings
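Conceptually, this validation reduces to asserting the 1024/64 patch configuration stated at the top of the module; a stand-alone sketch (the function name and error messages are illustrative, not the module's actual check):

```python
def check_patch_settings(patch_size: int, overlap: int,
                         required_patch_size: int = 1024,
                         required_overlap: int = 64) -> None:
    """Raise if a patched WSI does not match the required settings.

    Illustrative stand-in for _check_wsi's configuration check.
    """
    if patch_size != required_patch_size:
        raise RuntimeError(f"Patch size must be {required_patch_size}, got {patch_size}")
    if overlap != required_overlap:
        raise RuntimeError(f"Overlap must be {required_overlap}, got {overlap}")

check_patch_settings(1024, 64)  # passes silently for the required configuration
```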

logger: loki2.utils.logger.Logger
model: torch.nn.Module
run_conf: dict
inference_transforms: Callable
mixed_precision: bool
num_workers: int
label_map: dict
classifier: torch.nn.Module = None
binary: bool
model_arch: str
ray_actors: int
model_path
device
batch_size
patch_size
overlap
geojson
graph
compression
subdir_name
process_wsi(wsi: loki2.data.dataclass.wsi.WSI, resolution: float = 0.25) None

Process WSI file

Parameters:
  • wsi (WSI) – WSI object

  • resolution (float, optional) – Resolution for inference. Defaults to 0.25.

apply_softmax_reorder(predictions: Dict[str, torch.Tensor]) Dict[str, torch.Tensor]

Reorder and apply softmax on predictions.

Parameters:

predictions – Predictions dictionary with tensors.

Returns:

Reordered predictions with softmax applied.

Return type:

Dict[str, torch.Tensor]
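The softmax part of this step can be illustrated without torch; the axis reordering of the prediction tensors is omitted here, and applying softmax per class-logit vector is an assumption for illustration:

```python
import math

def softmax(logits: list[float]) -> list[float]:
    """Numerically stable softmax over a list of class logits,
    as applied to raw network predictions to obtain probabilities."""
    m = max(logits)  # subtract the max to avoid overflow in exp()
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs)  # probabilities summing to 1, largest for the largest logit
```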