
1 Introduction

The conventional inspection of power transmission lines by patrol personnel has gradually been replaced by unmanned aerial vehicles (UAVs). However, manually reviewing the numerous images provided by UAVs is time-consuming and complex. Automatic recognition based on computer vision has therefore become a popular research topic.
During the early stages of machine learning, traditional image recognition and machine learning methods were often used to locate and detect faults. Sun constructed a slope model based on the appearance model of insulators [1]. Zhang extracted the H vector from the hue, saturation, and value color space to perform contour matching [2]. After deep learning was proposed by Hinton and Salakhutdinov in 2006 [3], convolutional neural network (CNN) [4–7] and object detection [8–14] algorithms became increasingly powerful and effective. Using images captured by UAVs, Wang et al. applied a multi-object detection algorithm to electrical components, achieving an accuracy of 92.7% [15].
In this study, we detect several types of common faults in power transmission lines using an object detection algorithm. However, two problems associated with this approach must be solved. The first is that a single object may receive multiple labels; the second is that the detection capability for small objects is low. For the first problem, the traditional non-maximum suppression (NMS) algorithm handles generic objects [10], the polygonal non-maximum suppression algorithm handles curved text detection [16], and the mask non-maximum suppression algorithm handles oriented scene text detection based on segmentation [17]. To improve the detection of small objects, whose length and width are less than 5% of the original scale, feature pyramid networks predict objects by fusing different feature layers [9]. In addition, the single shot detector (SSD) generates anchors on multiple feature maps [13], whereas Cascade regional CNN (R-CNN) provides a multi-regression architecture to train high-quality detectors [14]. The structure of the detection network is presented in Fig. 1.
Fig.1 Structure of the detection network, where the detector is followed by a classifier


To solve the two aforementioned problems, we use faster R-CNN as a benchmark and propose an area-based non-maximum suppression (A-NMS) algorithm to delete redundant labels for a single object and improve the ability to detect small objects during the box fusion stage of cropping detection. We also discuss the selection of different parameters and compare the performance of the combinational algorithms. The experimental results demonstrate the feasibility of using object detection algorithms to detect the faults in transmission lines and achieve accurate localization and discrimination of the multiple common faults displayed in photos captured using UAVs.

2 Basic problem

Our study involves the detection of string-off insulators and shedding dampers. This requires our system to perform fault localization and recognition in one step. Therefore, insulators are classified as intact insulators, labeled “good”, and string-off insulators, labeled “bad”, as illustrated in Figs. 2(a) and 2(b), respectively. Dampers are classified as intact dampers, labeled “double”, and shedding dampers, labeled “single”, as illustrated in Figs. 2(c) and 2(d), respectively. These are the four types of objects that have to be detected.
Fig.2 Four objects to be detected. (a) Intact insulator labeled “good”; (b) string-off insulator labeled “bad”; (c) intact damper labeled “double”; (d) shedding damper labeled “single”


Faster R-CNN is a state-of-the-art two-stage object detection algorithm. First, the input image is processed by a backbone CNN to obtain a feature map. This feature map then passes through two branches. One branch is the region proposal network (RPN) [13], which generates default boxes and performs a preliminary regression of the bounding boxes; the other performs region-of-interest pooling on the feature map and bounding boxes. Finally, fully connected layers perform classification and precise regression. In faster R-CNN, the RPN is the core module. It initially generates many default boxes and then deletes those that are out of bounds. Thereafter, NMS removes the overlapping boxes, and the top N bounding boxes are passed to the next network.
Related image preprocessing methods include histogram equalization [18] and image filtering [19]. The traditional NMS algorithm requires the coordinates and scores of the detected boxes belonging to a certain object class. All detected boxes are ranked by score, and the box with the highest score in the current set is extracted in each iteration. The intersection-over-union (IoU) is then calculated between the extracted box and each remaining box. If the IoU is larger than the overlap threshold (usually set to 0.5), the two boxes are considered to be the same object, and the box with the lower score is deleted. This procedure is repeated until all boxes have been processed.
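For concreteness, a minimal NumPy sketch of the traditional NMS procedure just described (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Traditional NMS. `boxes` is an (N, 4) array of [x1, y1, x2, y2]."""
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]   # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]                 # highest-scoring box in the current set
        keep.append(int(i))
        # intersection of box i with every remaining box
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # drop every box whose IoU with box i exceeds the threshold
        order = order[1:][iou <= iou_threshold]
    return keep
```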
For one object, only one label is expected as the output. Figure 3 presents the detection results for a string-off insulator after the traditional NMS process. Three boxes were produced for this insulator; only the blue box is the expected label, whereas the upper and lower parts were unexpectedly labeled as intact insulators (red boxes). Traditional NMS can only process detected boxes belonging to the same class and cannot remove redundant labels belonging to different classes. To solve this problem, we propose the A-NMS algorithm.
Fig.3 Detection results of a string-off insulator. The blue box is the expected “bad” label, whereas the red boxes are unexpected “good” labels


3 Area-based non-maximum suppression algorithm

The A-NMS algorithm considers intact and string-off insulators as belonging to the same class; similarly, it considers intact and shedding dampers as belonging to the same class. The class set of detected boxes must be obtained before applying the A-NMS algorithm. In faster R-CNN, NMS is used twice: once during the RPN phase and once during the fast R-CNN phase. In the RPN phase, the only available information is the probability that a detected box belongs to the foreground; there is no specific classification. Therefore, the A-NMS algorithm replaces the NMS algorithm in the fast R-CNN phase. Based on the class set C, the A-NMS algorithm extracts the detected box set B and box score set S belonging to the insulators, calculates the area of every box, and selects the box with the maximum area. Suppose the IoU of the upper red box and the blue box in Fig. 3 is less than the threshold of 0.5; these two boxes would then not be regarded as the same object, which is not the intended behavior. Therefore, we propose the intersection-over-smaller (IoS) rule given in Eq. (1), which denotes the fraction of the smaller box that is covered by the larger box. The IoS of the upper red box and the blue box in Fig. 3 is approximately 1.0, so the two boxes are regarded as the same object.
\mathrm{IoS}(b_i,b_j)=\frac{\mathrm{inter}(b_i,b_j)}{\min\left(\mathrm{area}(b_i),\,\mathrm{area}(b_j)\right)}, \quad (1)
where IoS(b_i, b_j) is the IoS value of boxes b_i and b_j, inter(b_i, b_j) is their intersection area, and min(area(b_i), area(b_j)) is the smaller of the two box areas.
Similar to the traditional NMS algorithm, the A-NMS algorithm has one hyperparameter, the overlap threshold T. If the IoS is above T, the two boxes are regarded as the same object. Then, if the absolute score difference is smaller than a fixed margin (0.1), the box with the smaller area is deleted; otherwise, the box with the lower score is deleted. The same rule applies to dampers. The selection of the overlap threshold T is discussed in Section 5.1. The A-NMS algorithm is the basis of the box fusion algorithm discussed in Section 4.
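A sketch of the A-NMS rule under the assumptions stated above (IoS from Eq. (1), one merged superclass per component type, score margin 0.1); the helper names are illustrative:

```python
def area(box):
    """Area of a box given as [x1, y1, x2, y2]."""
    return max(0.0, box[2] - box[0]) * max(0.0, box[3] - box[1])

def ios(box_a, box_b):
    """Intersection-over-smaller, Eq. (1)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    return inter / min(area(box_a), area(box_b))

def a_nms(boxes, scores, t=0.7, margin=0.1):
    """A-NMS over one superclass (e.g., "good" and "bad" insulator boxes).

    Two boxes with IoS > t are the same object: when their scores differ
    by less than `margin`, the smaller-area box is deleted; otherwise the
    lower-scoring box is deleted.
    """
    keep = list(range(len(boxes)))
    i = 0
    while i < len(keep):
        j = i + 1
        while j < len(keep):
            a, b = keep[i], keep[j]
            if ios(boxes[a], boxes[b]) > t:
                if abs(scores[a] - scores[b]) < margin:
                    loser = a if area(boxes[a]) < area(boxes[b]) else b
                else:
                    loser = a if scores[a] < scores[b] else b
                if loser == a:
                    keep.pop(i)
                    break            # the box at slot i changed; rescan it
                keep.pop(j)          # loser is b; keep scanning at slot j
            else:
                j += 1
        else:
            i += 1                   # slot i survived every comparison
    return keep
```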

4 A-NMS algorithm for box fusion during cropping detection

A CNN serves as the image feature extractor in a detection network. From the bottom layer to the top layer, the feature maps become progressively smaller. In most conventional CNNs, such as ResNet [10] and DenseNet [20], the final feature maps are downscaled at least 32 times before the pooling layer. This means that an object occupying 32 × 32 pixels in the original image shrinks to a single pixel in the final feature map; such small objects are therefore difficult to detect. Conversely, if the scaling factor is too small, the network may fail to extract high-level semantic features, which hinders the classifier.
In this study, a cropping detection method is proposed for the efficient detection of small objects. The larger the input image to the object detection algorithm, the more effective the detection of small objects; however, the time required for model inference increases. To accelerate inference, the short edges of all input images are set to 600 pixels. A parallel processing framework can process several subpictures simultaneously; thus, cropping detection requires almost no additional time.
Fig.4 Diagram of image cropping. (a) Original image, where the 1/4 and 3/4 points on the x-axis and y-axis are the cropping points; (b) top left subpicture cropped by the yellow lines from the original image; (c) top right subpicture cropped by the purple lines from the original image; (d) bottom left subpicture cropped by the green lines from the original image; (e) bottom right subpicture cropped by the orange lines from the original image


As illustrated in Fig. 4, the points located at 1/4 and 3/4 of the x-axis and y-axis are the cropping points. The original input image is cropped into four subpictures: top left, top right, bottom left, and bottom right. Because the short edge of the input image is set to 600 pixels, the cropped pictures are zoomed back to the original size. After this process, the area of a small object in the processed image is approximately twice that in the original image, which improves the network's ability to detect small objects.
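A sketch of the cropping step under the layout read off Fig. 4 (each subpicture spans 3/4 of each axis; OpenCV is assumed for resizing):

```python
import cv2  # assumed available for resizing

def crop_quadrants(image):
    """Crop into four overlapping 3/4 x 3/4 subpictures at the 1/4 and 3/4
    points of each axis (Fig. 4), then zoom each back to full size."""
    h, w = image.shape[:2]
    x14, x34 = w // 4, 3 * w // 4
    y14, y34 = h // 4, 3 * h // 4
    subs = {
        "top_left":     image[0:y34, 0:x34],
        "top_right":    image[0:y34, x14:w],
        "bottom_left":  image[y14:h, 0:x34],
        "bottom_right": image[y14:h, x14:w],
    }
    # Zooming a 3/4 x 3/4 crop back to (w, h) scales each axis by 4/3,
    # so object areas grow by (4/3)^2 ~ 1.8, i.e., roughly twice.
    return {k: cv2.resize(v, (w, h)) for k, v in subs.items()}
```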
In the fusion phase, the coordinates in the subpictures must first be transformed back to the coordinates of the original image. Specifically, the coordinates are multiplied by 3/4 to compensate for the zoom and are given an offset of 1/4 depending on the subpicture location. Equation (2) provides the matrix equation for the coordinate translation of the top right subpicture, whose offset is a displacement along the x-axis. The other subpictures are processed analogously.
\begin{bmatrix} X_{\mathrm{old}} \\ Y_{\mathrm{old}} \end{bmatrix} = \begin{bmatrix} 0.75 & 0 \\ 0 & 0.75 \end{bmatrix} \begin{bmatrix} X_{\mathrm{cur}} \\ Y_{\mathrm{cur}} \end{bmatrix} + \begin{bmatrix} 0.25 \\ 0 \end{bmatrix}, \quad (2)
where (X_old, Y_old) are the normalized coordinates in the original image and (X_cur, Y_cur) are the normalized coordinates in the cropped subpicture.
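Applied per subpicture, the back-transformation is a two-line function; a sketch assuming Eq. (2) is written in coordinates normalized to the image width and height:

```python
# Normalized (x, y) offsets for each subpicture; the top right case
# matches Eq. (2), the others follow by symmetry.
OFFSETS = {
    "top_left":     (0.00, 0.00),
    "top_right":    (0.25, 0.00),
    "bottom_left":  (0.00, 0.25),
    "bottom_right": (0.25, 0.25),
}

def to_original(x_cur, y_cur, which):
    """Map normalized subpicture coordinates back to the original image."""
    dx, dy = OFFSETS[which]
    return 0.75 * x_cur + dx, 0.75 * y_cur + dy
```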
When the transformation results of the four subpictures are marked on the original image, one object may contain multiple detected boxes, which makes it suitable to fuse the overlapping boxes using the A-NMS algorithm. However, the A-NMS algorithm cannot be used directly for box fusion because it would delete the detected boxes with small areas. Because an object may span more than one subpicture, the obtained detection boxes may be incomplete. Therefore, before the boxes with small areas are deleted, they must be fused into the largest box; that is, the final output box is the outermost contour of the two detection boxes, as shown in Fig. 5.
Fig.5 Box fusion algorithm. (a) Two detection boxes; (b) fused detection box. The background green box is the output box obtained by fusing the two detection boxes in (a)


In the box fusion algorithm, a hyperparameter pertaining to the overlap threshold T must be set. The selection of this threshold is discussed in the experiment section.
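The fusion step of Fig. 5 reduces to taking the enclosing rectangle of the two detections before the A-NMS deletion rule is applied; a minimal sketch with boxes as [x1, y1, x2, y2]:

```python
def fuse(box_a, box_b):
    """Outermost contour of two overlapping detections (Fig. 5):
    the smallest rectangle that encloses both input boxes."""
    return [min(box_a[0], box_b[0]), min(box_a[1], box_b[1]),
            max(box_a[2], box_b[2]), max(box_a[3], box_b[3])]
```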

5 Experimental results

In this study, we used faster R-CNN combined with the ResNet101 network as the benchmark model. The data were obtained from images taken during the daily patrol inspections of an electric company and comprised approximately 8000 pictures covering the normal range of geographical environments, weather conditions, shooting angles, and shooting distances. The image size varied from 400 × 300 to 2000 × 1500 pixels, and all short edges were zoomed to 600 pixels during training. In practical application scenarios, the patrol images should be no smaller than 300 × 300 pixels, with an aspect ratio of approximately 4:3 or 16:9. The dataset was divided into training, validation, and test sets in a 7:2:1 ratio. The network was initialized with weights pre-trained on the MS COCO [21] and VOC2007 [22] datasets. Random horizontal flips, random clipping, random noise, and other data augmentation methods were used. The loss function followed that of faster R-CNN, the optimizer was stochastic gradient descent, the initial learning rate was 0.001, and the number of iterations was 100.
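For concreteness, a sketch of this training setup using a torchvision Faster R-CNN as a stand-in; the ResNet-50-FPN backbone and the momentum value are assumptions (the paper uses ResNet101 and does not state a momentum):

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# COCO-pretrained detector, matching the pre-trained initialization above
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box head for 5 classes: background plus "good", "bad",
# "double", and "single"
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=5)

# Stochastic gradient descent with the stated initial learning rate 0.001
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
```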

5.1 Selection of the A-NMS algorithm and box fusion overlap threshold

Selecting the overlap threshold T is important in the traditional NMS algorithm, the proposed A-NMS algorithm, and the box fusion algorithm. In this subsection, we first discuss the influence of T on the performance of the different schemes.
We use the conventional metric, mean average precision (mAP), to measure performance. The experimental results are presented in Fig. 6, with the overlap threshold T varied from 0.3 to 0.9 in intervals of 0.1. The test result of the benchmark model is displayed as a black bar, whereas the other bars show the mAP for different T values obtained using the different algorithms. The traditional NMS algorithm performs best when T is 0.5 or 0.6, whereas the A-NMS and box fusion algorithms perform best when T is 0.7–0.9. The reason is that the traditional NMS algorithm estimates the IoU, whereas the A-NMS and box fusion algorithms estimate the IoS: two boxes are regarded as the same object only when the majority of the smaller box is covered by the larger box.
Fig.6 Sensitivity of different methods to the threshold T in NMS, A-NMS, and cropping detection. Threshold T is 0.3–0.9 with intervals of 0.1. Black bars indicate the traditional NMS algorithm, orange bars indicate the A-NMS algorithm, and green bars indicate the cropping detection method


In the following comparison experiments, the NMS overlap threshold T is set to the optimal value to avoid multiple variables. Thus, T is 0.5 for the traditional NMS algorithm, whereas it is 0.7 for the A-NMS and box fusion algorithms.
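A threshold sweep like the one in Fig. 6 can be scripted directly; `evaluate_map` and `val_loader` below are hypothetical placeholders for the evaluation harness, and `a_nms` is the sketch from Section 3:

```python
for k in range(7):
    t = round(0.3 + 0.1 * k, 1)   # T = 0.3, 0.4, ..., 0.9, as in Fig. 6
    m = evaluate_map(model, val_loader,
                     suppress=lambda b, s, t=t: a_nms(b, s, t=t))
    print(f"T={t}: mAP={m:.4f}")
```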
Fig.7 Detection results of the NMS and A-NMS algorithms. The green box represents a string-off insulator labeled “bad”, whereas the blue box represents an intact insulator labeled “good”. (a) Detection results of the traditional NMS algorithm. The “bad” box is correct, whereas the “good” box is an additional box; (b) detection results of the A-NMS algorithm. The “bad” box is correct


Figures 7(a) and 7(b) present the detection results obtained using the NMS and A-NMS algorithms, respectively. The A-NMS algorithm deletes the additional box and solves the problem that cannot be handled by the traditional NMS algorithm.

5.2 Cropping detection test

Figure 8 presents the experimental results obtained with and without the cropping detection method, evaluated on the four object types. Figure 8 reveals that the cropping detection method performed better for small dampers: after cropping and magnification, the dampers were more likely to be correctly detected. For insulators, however, the detection of intact insulators improved by 3%, whereas the detection of string-off insulators worsened by 1%. This is because insulators may occupy a large part of an image, which can cause errors during box fusion. Therefore, our algorithm requires further improvement.
Fig.8 Impact of cropping detection on four types of objects: “good”, “bad”, “double”, and “single”. Red bars indicate the results of the benchmark method, whereas blue bars indicate the results of the cropping detection method. Here, AP refers to average precision


Figure 9 presents the results of the benchmark model and cropping detection. Cropping detection magnifies small objects, increasing their identification probability. The detector identified the objects located in the upper part of the picture, which were not detected previously.
Fig.9 Results of the benchmark and cropping detection methods. Detection results of (a) benchmark algorithm and (b) cropping detection method


5.3 Comprehensive test

In this subsection, we verify the performance of the different methods and evaluate various indicators, including the detection speed.
Tab.1 mAP and recall for different methods. “√” indicates that the corresponding algorithm is used; the benchmark in every row is faster R-CNN + ResNet101

benchmark | A-NMS | cropping detection | mAP    | recall
√         |       |                    | 0.8142 | 0.8421
√         | √     |                    | 0.8594 | 0.8875
√         | √     | √                  | 0.8858 | 0.9123
Tab.2 Detection time of different methods in different GPU environments

detection scheme           | number of GPUs | time/ms
benchmark                  | 1              | 210
benchmark + A-NMS          | 1              | 212
A-NMS + cropping detection | 1              | 850
A-NMS + cropping detection | 4              | 220
Table 1 lists the mAP and recall values for the different methods, all based on the faster R-CNN benchmark with ResNet101. Compared with the NMS algorithm, the A-NMS algorithm increased the mAP and recall values by 4.52% and 4.54%, respectively. This indicates that the A-NMS algorithm decreases the probability of error during the detection of insulators and dampers.
The aim of cropping detection is to enhance the detection of small objects. Compared with the benchmark model, the cropping detection method achieved a 7.16% increase in mAP and a 7.02% increase in recall, demonstrating the effectiveness of cropping detection combined with A-NMS box fusion.
Different detection methods require different amounts of time in different GPU environments. Table 2 lists the detection times of the different schemes; all GPUs are NVIDIA GeForce GTX 1070. The benchmark model with A-NMS requires only one GPU, and the experimental results indicate that the A-NMS step adds only approximately 2 ms. With a single GPU, the cropping detection method requires approximately 850 ms, which is too slow; four GPUs are therefore used to detect the four subpictures in parallel. The parallel architecture with four GPUs requires 220 ms (approximately 4.5 frames per second (FPS)), only 10 ms more than the benchmark model.
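A sketch of the parallel scheme implied by Table 2, with one model replica per GPU and the four subpictures dispatched concurrently (thread-based dispatch is an assumption; any per-GPU worker scheme would do):

```python
import copy
from concurrent.futures import ThreadPoolExecutor
import torch

def make_replicas(model, n_gpus=4):
    """One model replica per GPU, matching the four-GPU row of Table 2."""
    return [copy.deepcopy(model).to(f"cuda:{i}").eval() for i in range(n_gpus)]

@torch.no_grad()
def detect_parallel(replicas, subpictures):
    """Run the four subpictures through the replicas concurrently.

    `subpictures` is a list of CHW float tensors; torchvision detection
    models take a list of images and return a list of result dicts.
    """
    def run(pair):
        i, img = pair
        return replicas[i]([img.to(f"cuda:{i}")])[0]
    with ThreadPoolExecutor(max_workers=len(replicas)) as pool:
        return list(pool.map(run, enumerate(subpictures)))
```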

6 Conclusions

By focusing on detecting faults in the electrical components of transmission lines, we propose an A-NMS algorithm to solve the problems of a single object receiving multiple labels and of small objects being difficult to detect. We conduct a detailed comparison and analysis of the different schemes. The experimental results indicate that the proposed A-NMS algorithm not only correctly removes additional and incorrect labels but also increases the detector's ability to detect small objects. The proposed method achieves an mAP of 88.58%, a recall of 91.23%, and a detection speed of 4.5 FPS. However, there are cases in which the background is mislabeled as an object. Further research is therefore required on how to remove such erroneous detections and develop a more robust detector.
