Metadata-Version: 2.1
Name: aliby
Version: 0.1.38
Summary: Process and analyse live-cell imaging data
Author: Alan Munoz
Author-email: alan.munoz@ed.ac.uk
Requires-Python: >=3.7.1,<3.11
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Requires-Dist: aliby-agora (>=0.2.30,<0.3.0)
Requires-Dist: aliby-baby (>=0.1.13,<0.2.0)
Requires-Dist: aliby-post (>=0.1.34,<0.2.0)
Requires-Dist: dask (>=2021.12.0,<2022.0.0)
Requires-Dist: h5py (==2.10)
Requires-Dist: imageio (==2.8.0)
Requires-Dist: numpy (==1.21.6)
Requires-Dist: omero-py (>=5.6.2)
Requires-Dist: opencv-python
Requires-Dist: p-tqdm (>=1.3.3,<2.0.0)
Requires-Dist: pandas (==1.3.3)
Requires-Dist: pathos (>=0.2.8,<0.3.0)
Requires-Dist: py-find-1st (>=1.1.5,<2.0.0)
Requires-Dist: requests-toolbelt (>=0.9.1,<0.10.0)
Requires-Dist: scikit-image (>=0.18.1)
Requires-Dist: scikit-learn (==0.22.2.post1)
Requires-Dist: tqdm (>=4.62.3,<5.0.0)
Requires-Dist: xmltodict (>=0.13.0,<0.14.0)
Requires-Dist: zeroc-ice (==3.6.5)
Description-Content-Type: text/markdown

# ALIBY (Analyser of Live-cell Imaging for Budding Yeast)

[![PyPI version](https://badge.fury.io/py/aliby.svg)](https://badge.fury.io/py/aliby)
[![readthedocs](https://readthedocs.org/projects/aliby/badge/?version=latest)](https://aliby.readthedocs.io/en/latest)
[![pipeline status](https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/badges/master/pipeline.svg)](https://git.ecdf.ed.ac.uk/swain-lab/aliby/aliby/-/pipelines)

The core Python classes and methods for processing, analysing and reporting on live-cell imaging data from microfluidics and microscopy experiments.

### Installation
See [INSTALL.md](./INSTALL.md) for installation instructions.

## Quickstart Documentation
### Setting up a server
For testing and development, the easiest way to set up an OMERO server is by
using Docker images.
[The Software Carpentry](https://software-carpentry.org/) and the [Open
Microscopy Environment](https://www.openmicroscopy.org) provide
[instructions](https://ome.github.io/training-docker/) for doing this.

The `docker-compose.yml` file can be used to create an OMERO server with an
accompanying PostgreSQL database, and an OMERO web server.
It is described in detail
[here](https://ome.github.io/training-docker/12-dockercompose/).

Our version of the `docker-compose.yml` has been adapted from the above to
use version 5.6 of OMERO.

To start these containers (in background):
```shell script
cd pipeline-core
docker-compose up -d
```
Omit the `-d` to run in foreground.

To stop them, in the same directory, run:
```shell script
docker-compose stop
```

### Raw data access

```python
from aliby.io.dataset import Dataset
from aliby.io.image import Image

server_info = {
    "host": "host_address",
    "username": "user",
    "password": "xxxxxx",
}
expt_id = XXXX
tps = [0, 1]  # Subset of time points to fetch

with Dataset(expt_id, **server_info) as conn:
    image_ids = conn.get_images()

# To get the first position
with Image(list(image_ids.values())[0], **server_info) as image:
    dimg = image.data
    imgs = dimg[tps, image.metadata["channels"].index("Brightfield"), 2, ...].compute()
    # tps timepoints, Brightfield channel, z=2, all x,y
```
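The indexing in the last line follows standard NumPy/dask semantics: time points, channel, z-slice, then all y and x. A minimal sketch with a synthetic array (the shape and channel names are illustrative, not from a real experiment; dask arrays returned by `image.data` index the same way):

```python
import numpy as np

# Illustrative 5D stack: (time, channel, z, y, x)
dimg = np.zeros((10, 3, 5, 512, 512), dtype=np.uint16)

channels = ["Brightfield", "GFP", "mCherry"]  # as in image.metadata["channels"]
tps = [0, 1]

# Two time points, the Brightfield channel, z-slice 2, all y and x
imgs = dimg[tps, channels.index("Brightfield"), 2, ...]
print(imgs.shape)  # (2, 512, 512)
```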

### Tiling the raw data

A `Tiler` object performs trap registration. It may be built in different ways, but the simplest is from an image and the default parameter set.

```python
from aliby.tile.tiler import Tiler, TilerParameters
with Image(list(image_ids.values())[0], **server_info) as image:
    tiler = Tiler.from_image(image, TilerParameters.default())
    tiler.run_tp(0)
```

The initialisation should take a few seconds, as it needs to align the images
in time.

It fetches the metadata from the Image object and uses the TilerParameters values. (All Processes in aliby depend on an associated Parameters class, which is in essence a dictionary turned into a class.)
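As an illustration of that pattern, here is a hypothetical sketch (not aliby's actual implementation; the class name, fields and default values are invented for the example): a Parameters class wraps a dictionary as attributes and offers a `default()` constructor.

```python
class ExampleParameters:
    """Hypothetical sketch of the Parameters pattern:
    a dictionary exposed as a class, with sensible defaults."""

    def __init__(self, **kwargs):
        # Store every keyword argument as an attribute
        self.__dict__.update(kwargs)

    @classmethod
    def default(cls, **overrides):
        defaults = {"tile_size": 117, "ref_channel": "Brightfield"}
        defaults.update(overrides)
        return cls(**defaults)

    def to_dict(self):
        return dict(self.__dict__)


params = ExampleParameters.default(tile_size=96)
print(params.tile_size, params.ref_channel)  # 96 Brightfield
```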

#### Get a timelapse for a given trap
```python
fpath = "h5/location"

trap_id = 9
trange = list(range(0, 30))
ncols = 8

# remoteImageViewer is assumed to have been imported from aliby's viewer utilities
riv = remoteImageViewer(fpath)
trap_tps = riv.get_trap_timepoints(trap_id, trange, ncols)
```

This can take several seconds at the moment.
For a speed-up, fetch fewer z-positions if you can.
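The `ncols` argument controls how the fetched time points are laid out in a grid. One way to montage such a stack yourself, sketched with a synthetic array (the 96-pixel tile size and stack shape are assumptions for the example):

```python
import numpy as np

# Hypothetical stack of trap images: one 96x96 tile per time point
trap_tps = np.zeros((30, 96, 96), dtype=np.uint16)
ncols = 8

n = len(trap_tps)
nrows = -(-n // ncols)  # ceiling division: 30 tiles -> 4 rows of 8
pad = nrows * ncols - n

# Pad with blank tiles so the stack fills the grid exactly
padded = np.concatenate([trap_tps, np.zeros((pad, 96, 96), trap_tps.dtype)])

# Rearrange (nrows*ncols, 96, 96) into one (nrows*96, ncols*96) montage
grid = (
    padded.reshape(nrows, ncols, 96, 96)
    .swapaxes(1, 2)
    .reshape(nrows * 96, ncols * 96)
)
print(grid.shape)  # (384, 768)
```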

If you're not sure what indices to use:
```python
seg_expt.channels # Get a list of channels
channel = 'Brightfield'
ch_id = seg_expt.get_channel_index(channel)

n_traps = seg_expt.n_traps # Get the number of traps
```
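Under the hood, looking up a channel index is plain list indexing on the experiment's channel names. A hedged sketch (the channel names and the helper function are illustrative, not aliby's API):

```python
channels = ["Brightfield", "GFP", "mCherry"]  # e.g. the value of seg_expt.channels

def get_channel_index(channels, name):
    # Raise a clearer error than list.index's bare ValueError
    if name not in channels:
        raise KeyError(f"Channel {name!r} not found; available: {channels}")
    return channels.index(name)

print(get_channel_index(channels, "Brightfield"))  # 0
```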

#### Get the traps for a given time point
Alternatively, if you want to get all the traps at a given timepoint:

```python
timepoint = 0
seg_expt.get_traps_timepoints(
    timepoint, tile_size=96, channels=None, z=[0, 1, 2, 3, 4]
)
```


### Contributing
See [CONTRIBUTING.md](./CONTRIBUTING.md) for guidelines on contributing.

