REVIEW ARTICLE

Recent advances in holographic data storage

  • Hao RUAN
  • Research Laboratory for High Density Optical Storage, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China

Received date: 28 Jun 2014

Accepted date: 04 Sep 2014

Published date: 12 Dec 2014

Copyright

2014 Higher Education Press and Springer-Verlag Berlin Heidelberg

Abstract

Nowadays, big-data centers still rely on hard drives. However, there is strong evidence that these surface-storage technologies are approaching fundamental limits that may be difficult to overcome, as ever-smaller bits become less thermally stable and harder to access. An intriguing approach for next-generation data storage is to use light to store information throughout the three-dimensional (3D) volume of a material. Holographic data storage (HDS) is poised to change the way we write and retrieve data. After many years of developing appropriate recording media and optical read–write architectures, this promising technology is now moving steadily toward the market. In this paper, a review of the major achievements of HDS over the past ten years is presented and the key technical details are discussed. The author concludes that HDS technology is an attractive candidate for big-data centers in the future. On the other hand, there are many challenges for HDS technology to overcome in the years to come.

Cite this article

Hao RUAN . Recent advances in holographic data storage[J]. Frontiers of Optoelectronics, 2014 , 7(4) : 450 -466 . DOI: 10.1007/s12200-014-0458-7

Introduction

Many corporations, banks, government agencies and scientific research institutions now handle petabytes (PB) of information, and the data storage technology that has served us well for a generation is struggling to meet today's needs. Today, big-data centers still rely on hard drives—actually hordes of them in vast arrays. However, there is strong evidence that these surface-storage technologies are approaching fundamental limits that may be difficult to overcome, as ever-smaller bits become less thermally stable and harder to access. Forecasts call for a 50-fold increase in global data by 2020, but the capacity of hard drives is not increasing fast enough to keep up with the explosion of digital data worldwide [1]. Another key challenge for today's big-data centers that rely on hard drives is power consumption. Gartner Inc. says energy costs may increase from 10% of the IT budget today to over 50% in the next few years [2]. An analysis by Oracle and HP shows that 79.1% of all power is consumed by the storage subsystem [3]. Furthermore, the endurance of hard drives is about 3-5 years, which is too short for archival storage in big-data centers.
Optical storage offers a long lifetime (above 50 years), low power consumption (about 1/15 that of hard drives) and low cost. However, the capacity of commercialized optical discs is low compared to hard drives; the largest capacity of a Blu-ray disc (BD disc) is currently 128 GB. For use in future big-data centers, it is assumed that each disc should have a capacity of 1 TB. An intriguing approach for next-generation data storage is to use light to store information throughout the three-dimensional (3D) volume of a material. By distributing data within the volume of the recording medium, it should be possible to achieve far greater storage densities than current technologies can offer [4]. In 2003, Burr from IBM gave an exhaustive review of the volumetric optical storage approaches that had been proposed and developed in the last ten years of the 20th century and the beginning of this century [4]. Holographic data storage (HDS) is a volumetric approach which, although conceived decades ago, has made great progress in the last three decades or so. In this paper, a review of the major achievements of HDS in the last decade is presented and the key technical details are discussed. The author concludes that HDS technology is an attractive candidate for big-data centers. On the other hand, there are many challenges ahead for HDS technology to overcome in the years to come.

Common features and challenges

Optical holography enables the capturing of the entire information (phase and amplitude) contained in the light reflected from an object. Such information capture is achieved by storing the interference field of the emitted object waves and a coherent reference wave in a photoactive medium. This storage takes place preferentially as a modulation of the refractive index in the medium, which is proportional to the interference field, and is called a hologram. Figure 1 shows this process. The spherical wave from a single pixel interferes with a coherent plane wave in the reference beam. The resulting interference pattern changes the refractive properties of the photosensitive medium. The hologram is read out using the original reference beam, which is diffracted by the stored interference pattern to reconstruct the original spherical wavefront. An image of this beam can be formed on a single detector pixel, resulting in the retrieval of a single bit.
Fig.1 How to record and read data using holograms: (a) holographic storage of a single data bit; (b) read out of the hologram (After Ref. [4])

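As a rough numerical companion to Fig. 1, the following Python snippet computes the interference intensity of a plane reference wave and a spherical object wave in the recording plane, and converts it into a refractive-index modulation under a simple linear-response assumption. The wavelength, geometry and index contrast are illustrative assumptions, not parameters of any reported system.

```python
import numpy as np

# Illustrative parameters (assumptions, not system values)
wavelength = 532e-9                 # recording wavelength (m)
k = 2 * np.pi / wavelength          # wavenumber

# Sampling grid in the recording plane: 1 mm x 1 mm, 512 x 512 samples (assumed)
n = 512
x = np.linspace(-0.5e-3, 0.5e-3, n)
X, Y = np.meshgrid(x, x)

# Spherical object wave from a single "data pixel" a distance z0 behind the plane
z0 = 5e-3                           # assumed source distance (m)
r = np.sqrt(X**2 + Y**2 + z0**2)
object_wave = np.exp(1j * k * r) / r
object_wave /= np.abs(object_wave).max()     # normalize to unit peak amplitude

# Plane reference wave incident at a small angle in the x-z plane
theta = np.deg2rad(10.0)            # assumed reference angle
reference_wave = np.exp(1j * k * np.sin(theta) * X)

# Interference intensity recorded by the medium
intensity = np.abs(object_wave + reference_wave) ** 2

# Assume a simple linear medium response with peak index contrast dn_max
dn_max = 1e-4
delta_n = dn_max * intensity / intensity.max()

print("index modulation: min %.2e, max %.2e" % (delta_n.min(), delta_n.max()))
```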

To use volume holography as a storage technology, digital data must be imprinted onto the object beam for recording and then retrieved from the reconstructed object beam during readout. Although research on HDS began in the 1960s and continued through the 1970s [5-9], no commercial products came out of these efforts because of significant technical challenges, including poor media performance and a lack of input and output devices such as spatial light modulators (SLMs) and cameras. In the mid-1990s, the Defense Advanced Research Projects Agency (DARPA) of the United States formed a consortium of companies and universities, led by IBM and Stanford University, to develop high performance holographic storage systems [10,11]. The goal of the consortium was to demonstrate high density and transfer rate by developing the necessary technology and components, such as custom high-speed cameras and SLMs. Research in data channel modulation and detection schemes was also undertaken. A multiple-page, fully digital HDS system was developed (Fig. 2). The device for putting data into the system is an SLM, a planar array consisting of thousands of pixels. Each pixel is an independent microscopic shutter that can either block or pass light using liquid-crystal or micro-mirror technology. The pixels in both types of devices can be refreshed over 1000 times per second, allowing the holographic storage system to reach an input data rate of 1 Gbit per second. The data are read using an array of detector pixels, such as a charge coupled device (CCD) camera or complementary metal oxide semiconductor (CMOS) sensor array. The object beam often passes through a set of lenses that image the SLM pixel pattern onto the output pixel array. To maximize the storage density, the hologram is usually recorded where the object beam is tightly focused. When the hologram is reconstructed by the reference beam, a weak copy of the original object beam continues along the imaging path to the camera, where the optical output can be detected and converted to digital data.
Fig.2 Digital holographic data storage (HDS) scheme (After Refs. [4,10,11])


The system can record an encoded data page containing a couple of million pixels with a single light pulse. Furthermore, in order to achieve high storage densities, hundreds of data pages can be multiplexed at the same location in the media. The theoretical limits for the storage density of this technique are around tens of terabits per cubic centimeter [11,12]. Optical physics affords a great variety of multiplexing methods. The strategies fall roughly into three categories, driven by the dominant physical contribution: Bragg-based, momentum-based, and correlation-based methods. Bragg-based methods depend on the multi-scatter properties of holograms. Momentum-based approaches depend on the directionality of the diffraction, and correlation-based methods depend on the specific spatial and polarization structure of the reference fields employed [12]. Often several of these multiplexing methods are combined to optimize the geometry used.
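To see why multiplexing hundreds of holograms in one volume is plausible, the following back-of-the-envelope sketch estimates the number of angle-multiplexed holograms that fit in a reference-beam scan range using a commonly quoted first-null approximation for the Bragg angular selectivity of a thick transmission grating, and the per-hologram diffraction efficiency that follows from sharing the medium's dynamic range (M/#) equally. All numerical values are assumptions chosen for illustration.

```python
import numpy as np

# Back-of-the-envelope angle-multiplexing estimate (all values are assumptions)
wavelength = 405e-9                        # recording wavelength (m)
thickness = 1.0e-3                         # medium thickness L (m)
inter_beam_angle = np.deg2rad(30.0)        # angle between reference and signal beams

# Approximate first-null Bragg angular selectivity of one transmission hologram
delta_theta = wavelength / (thickness * np.sin(inter_beam_angle))

# Holograms that fit in an assumed 20-degree reference scan, spaced by one null
scan_range = np.deg2rad(20.0)
n_holograms = int(scan_range / delta_theta)

# Per-hologram diffraction efficiency when the medium's dynamic range (M/#)
# is shared equally among N holograms: eta ~ (M# / N)^2
m_number = 30.0                            # assumed M/# of the recording medium
eta = (m_number / n_holograms) ** 2

print("angular selectivity ~ %.3f mdeg" % (np.rad2deg(delta_theta) * 1e3))
print("holograms per location ~ %d" % n_holograms)
print("diffraction efficiency per hologram ~ %.1e" % eta)
```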
As a promising storage technology that has been worked on over the past 50 years, HDS has not yet achieved commercial success. For commercial products, desirable features include capacity, input and output data rates, latency, cost, system volume, and power consumption. Other defining characteristics include removability of the storage media, the ability to erase and rewrite data, and high-fidelity data retrieval. High-fidelity retrieval is an absolute must for the long storage lifetimes required in big-data centers. In the past ten years, HDS has made great progress toward practicality with the appearance of lower-cost enabling technologies, significant results from long-standing research efforts, and progress in holographic recording materials.

Optical architectures

The primary optical architectural decision in designing an HDS system is the selection of the appropriate multiplexing scheme. To date, several optical architectures for HDS systems have been presented [12-16]. These are classified broadly into three categories, as shown in Fig. 3 [17]. The optical architecture for 2-axis HDS generally has two optical paths, for an object beam and a reference beam, in an off-axis optical configuration. In the optical architecture for collinear HDS, the reference beam and object beam are bundled on the same optical axis and irradiated onto the holographic medium through a single objective lens. In the optical architecture for microholographic storage, two counter-propagating object and reference beams are configured to record bitwise information, similarly to a DVD system.
Fig.3 Representative optical architectures for holographic data storage (HDS) system. SLM: spatial light modulator


2-axis holographic storage

Also in the mid-1990s, Bell Laboratories, Lucent Technologies (and later InPhase Technologies, a company spun out of Lucent Technologies) aimed at developing a suitable recording medium in conjunction with a practically implementable drive [12]. The group at InPhase Technologies took a conventional optical architecture, with two optical paths for a signal beam and a reference beam in an off-axis configuration [12,13]. Angle multiplexing is the most common technique in 2-axis holographic systems. Systems that use a simple plane-wave reference beam offer three distinct advantages: 1) it is possible to compensate for media temperature changes (which is extremely important for polymer media); 2) the system can be modeled in a tractable manner; and 3) these systems offer improved signal-to-noise ratio (SNR) compared with systems employing high-bandwidth reference beams. While angle multiplexing has these advantages, it suffers from a limited achievable storage density [18]. Analysis shows that the achievable user capacity (considering only the optical system limits) reaches a maximum of around 140 GB for a media thickness of about 800 µm [18]. The invention of polytopic multiplexing [13] allowed InPhase Technologies to achieve high-density storage beyond the limits imposed by angle multiplexing alone (Fig. 4). Combining polytopic multiplexing with angle multiplexing allows books to be overlapped within the media, which can increase the storage density by more than a factor of 10 [13]. Architecting the drive to incorporate phase conjugation with angle-polytopic multiplexing was another significant innovation, which simplified the optical path by placing all the optical elements on one side of the media, and allowed simpler lenses to be used because of the inherent aberration correction. Strictly, the drive does not read out using pure phase conjugation, which typically requires a nonlinear crystal. Instead, since the reference beams are plane waves, and provided their optical quality is adequate, a simple retro-reflection provides a practical means of achieving quasi phase conjugation. The first product has a capacity of 300 GB and a transfer rate of 20 MB/s for both recording and reading [12,19].
Fig.4 Illustration of the packing density increase by using (b) polytopic multiplexing over (a) traditional angle multiplexing (After Refs. [12,13])


However, 2-axis HDS tends to be more complicated than other architectures such as collinear HDS or microholographic recording. Proposed by InPhase Technologies (and later Akonia Holographics LLC) and Hitachi Corporation, the monocular architecture is promising because it reduces the drive size and complexity by passing both the data and reference beams through a single objective lens [17,20-22]. In this architecture, both signal and reference beams pass through a single objective lens with a numerical aperture (NA) of 0.85 to realize angularly multiplexed recording. The geometry of the architecture has a high affinity with the optical architecture of the BD system because the objective lens can be placed parallel to the holographic medium. Through comparison of experimental results with theory, the validity of the optical architecture was verified, and it was demonstrated that the conventional objective-lens motion technique of the BD system can be used for angularly multiplexed recording [17]. A recording density of 1 Tbit/in.^2, which opens the possibility of realizing a 1 TB disk, was achieved with the monocular architecture [22].

Collinear holographic storage

In 1996, a team at Stanford University proposed using a storage lens with an NA large enough to allow both the signal beam and reference beam to pass through it [11,12,23]. This idea was the genesis of the collinear and coaxial architectures. The Stanford team used correlation multiplexing and a pulsed laser system to record holograms on a continuously spinning disk. In 1998, Sony Corporation presented a similar concept using shift multiplexing [24]. Following this, Optware Corporation and a number of others in the industry began developing systems based on these concepts [14,25-32]. Collinear technologies have significant advantages compared with 2-axis holography: good write and read performance, uniform shift selectivity in both the radial and tangential directions, and fairly large system tolerances have been experimentally reported [14,25,26].
In the early 2000s, Optware Corporation developed systems using collinear HDS. Figure 3 illustrates the collinear architecture [14]. The unique feature of this technology is that 2D page data patterns are recorded as volume holograms generated by a coaxially aligned information beam and reference beam, which are produced simultaneously by the same SLM, with the information pattern in the center and the reference pattern in a ring around it, and which interfere with each other in the medium through a single objective lens. In the reconstruction process, only the reference pattern, i.e., the outer ring, is used to create the reference beam. The reconstructed image beam is sent back through the same lens used in the recording process by a reflective layer in the medium and is directed to a CMOS image sensor by a polarizing beam splitter. However, the serious blurring of the point-spread function (PSF) in a collinear system could not be resolved until radial line (RL) amplitude modulation of the reference was proposed [33]. The RL amplitude modulation avoids the Bragg degeneracy from point sources in the reference region and thereby tightens the PSF; however, 80% of the energy is wasted [32]. As the images are binary amplitude data pages modulated with an SLM, the zero-order spatial frequency (dc) component in the back focal plane of the Fourier lens takes a much higher percentage of the light intensity than any other spatial frequency component [28]. The high-intensity dc component will saturate the dynamic range of the recording medium, which consequently decreases the data storage density achievable with shift multiplexing techniques [28]. While no fully functioning system came out of Optware's work, collinear HDS techniques represented a tremendous step forward in the optical storage industry's understanding of and participation in holographic storage, and stimulated advances aimed at leveraging traditional optical storage technologies in holography. In addition, industry standards for the collinear architecture, European Computer Manufacturers' Association (ECMA) International standards ECMA-375, ECMA-377 and ECMA-378, were granted to Optware Corporation. ECMA-377 is the first standard for holographic versatile disc (HVD) recordable cartridges, with a capacity of 200 Gbytes per disk. ECMA-378 is the first standard for a read-only memory holographic versatile disc (HVD-ROM), with a capacity of 100 Gbytes per disk.
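To make the coaxial page layout concrete, the sketch below constructs a hypothetical collinear SLM frame with a binary data pattern in the center and a binary reference pattern in an outer ring, following the geometry described above; the pixel count and ring radii are arbitrary assumptions.

```python
import numpy as np

def collinear_slm_page(n=512, data_radius=0.45, ref_inner=0.55, ref_outer=0.95, seed=0):
    """Hypothetical collinear-HDS SLM frame: a random binary data page in the
    center and a binary reference pattern in an outer ring.  Radii are given
    as fractions of the SLM half-width and are assumptions."""
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r = np.sqrt(x**2 + y**2)

    page = np.zeros((n, n), dtype=np.uint8)
    data_mask = r <= data_radius
    ref_mask = (r >= ref_inner) & (r <= ref_outer)
    page[data_mask] = rng.integers(0, 2, size=int(data_mask.sum()))   # data pixels
    page[ref_mask] = rng.integers(0, 2, size=int(ref_mask.sum()))     # reference pixels
    return page, data_mask, ref_mask

page, data_mask, ref_mask = collinear_slm_page()
print("data pixels:", int(data_mask.sum()), "reference pixels:", int(ref_mask.sum()))

# During readout only the reference ring is displayed; the data region stays dark.
readout_page = np.where(ref_mask, page, 0)
```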
Early systems used shift multiplexing, but more recent systems have switched to phase correlation multiplexing [28,32-36]. The phase structure is either embedded into the SLM or imaged onto it, which homogenizes the Fourier transform of the data beam in addition to providing the phase information needed for phase correlation. It is believed that once phase can be read out as information, the storage capacity will increase by an order of magnitude [36].
A team at Sony Corporation has also designed recording platforms using this type of architecture, and its work has led to several implementation improvements [37,38]. Using correlation multiplexing, the team has demonstrated channel densities around 400 Gbit/in.^2 [39,40].

Microholographic storage

The microholographic storage approach offers a compromise that combines the advantages of bit-oriented CD/DVD technology and holographic volume recording [15]. On a microholographic disk, the pit-land structure of a CD or DVD is replaced by microscopic volume gratings. These "microgratings" are holographically induced in the focal region of two counter-propagating, highly focused laser beams: one beam is focused into the photosensitive layer and reflected back (Fig. 3). The interference pattern of the incident and reflected beams results in a grating-like modulation of the refractive index of the storage medium. Thus the reflectivity of the disk is modulated for light fulfilling the Bragg condition of the gratings. To retrieve the stored data, the original signal beam is reconstructed by reflection of the read beam at the gratings. The microholographic approach advantageously utilizes the third dimension while the optical system design remains very similar to that of a standard optical drive [41]. Multilayer recording relies on a volumetrically localized index modulation of the microgratings, while multiple data layers are addressed by simple confocal movement to different depths within the photopolymer layer. As photopolymers are largely transparent, the number of microholographic layers can become quite high.
Initially, Eichler et al. [15] proposed microholographic storage using wavelength multiplexing to achieve a total density of more than 100 Gbytes on a 12-cm disc. However, wavelength multiplexing can cause chromatic aberration and requires many laser sources. To overcome these limitations, McLeod et al. [42] presented the first experimental results of 12-layer recording and reading with an NA of 0.55 and a single wavelength of 532 nm. The experimental results predicted a capacity limit of 140 Gbytes in a millimeter-thick disk, or over 1 Tbyte with the wavelength and numerical aperture of Blu-ray. In static and linear-dynamic testers, Orlic et al. demonstrated an areal data density of 15 bits/μm^2 and multilayer recording with a total of 75 layers spaced by 4 μm in a 300 μm thick photopolymer [43]. Recently, Orlic et al. [44] studied the influence of neighboring microgratings recorded close to each other at a 350 nm spacing. Readout yielded a micrograting width of 306 nm at 532 nm and 197 nm at 405 nm. Even at spot distances shorter than the wavelength, the individual microgratings remain clearly distinguishable, although the visibility becomes weaker [45].
Since the microholographic storage approach is bitwise recording, like a conventional optical disc, it has difficulty achieving a higher surface recording density than current optical discs. This implies that a low data transfer rate is an intrinsic problem of microholograms. Additionally, the reflectivity of a microhologram is in general much lower than that of the marks in current optical discs. Thus, a low SNR is another problem that occurs with microholograms. A team from Hitachi Ltd. proposed a new concept of optical phase multi-level recording in microholograms that addresses the above problems [46,47]. Four-level phase modulation was successfully regenerated from recorded microholograms [47]. In this technique, multilevel phase signals are stored as fringe shifts along the optical axis and recovered from the arctangent of two homodyne-detected signals [47]. They also proposed a novel multiplexing technique using spatial modes of an optical beam to enhance data capacity [48]. Numerical simulation was performed to validate the proposal, and it was estimated that the net increase in data capacity from applying two- and four-mode multiplexing with Hermite–Gaussian modes was 1.3 and 1.7 times, respectively [48]. A proof-of-principle experiment for two-mode multiplexing was performed using spatial modes equivalent to Laguerre–Gaussian modes, and it was demonstrated that the signal output was selectively obtained by choosing the appropriate spatial mode for the reference light of phase-diversity homodyne detection [48]. Recently, Katayama proposed a novel angular-momentum multiplexing technology [49]. The multiplexing is carried out by changing the order of the phase singularity of the beams for recording and readout among multiple states. A readout signal simulation demonstrated that multiplexing of at least five bits is feasible if a 50% decrease in signal level is allowed. It is expected that a terabyte-order recording capacity will be achieved in microholographic recording by combining this technology with 3D recording technology.
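The phase multilevel idea can be illustrated numerically: two quadrature homodyne outputs, here modeled simply as the cosine and sine of the stored fringe phase plus noise, are combined through an arctangent to recover one of four phase levels. This is only a toy signal model, not Hitachi's actual detection chain; the noise level and number of symbols are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Four phase levels stored as fringe shifts along the optical axis (assumed model)
levels = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
true_symbols = rng.integers(0, 4, size=1000)
phase = levels[true_symbols]

# Phase-diversity homodyne detection: two outputs in quadrature, proportional to
# cos(phase) and sin(phase), with additive noise (all assumptions)
noise = 0.05
i_signal = np.cos(phase) + noise * rng.standard_normal(phase.size)
q_signal = np.sin(phase) + noise * rng.standard_normal(phase.size)

# Recover the stored phase from the arctangent of the two homodyne signals
recovered_phase = np.mod(np.arctan2(q_signal, i_signal), 2 * np.pi)

# Decide the nearest of the four levels, handling the wrap-around at 2*pi
decided = np.round(recovered_phase / (np.pi / 2)).astype(int) % 4

print("symbol error rate:", np.mean(decided != true_symbols))
```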
Successful development of microholographic storage also relies on the development of a robust HDS drive and data format. However, realization of a first disk and drive system is a huge challenge, as almost every detail of the setup has to be adapted to the dynamic operation regime. The recording and readout algorithms also have to be appropriately adapted, the control software developed for the existing setups has to be adapted, and a suitable method of modular coding has to be implemented. A further important issue is the development and integration of a servo system for automatic tracking and focusing [50]. A team from Sony successfully recorded ten layers in a monolithic recording material [51]. Readout signal processing improved the quality of the readout signal and, as a result, no erroneous bit was observed in more than 4.2 × 10^5 consecutive channel bits in 6 of the 10 layers. Even in the worst layer, a moderate bit error rate (BER) of 2.5 × 10^-4 was confirmed. A system with single-sided recording and readout optics is preferred because of its simplicity and compatibility with current optical storage technologies, and they also proposed a single-sided optical layout for microholographic storage [52]. The calculation results showed that the 3D readout characteristic of a microholographic disc is closer to that of a conventional reflective mark than to holographic readout. Recently, Boden et al. from GE and Stanford University reported improvements in threshold material performance, as well as a demonstration of another system concept that enables single-sided recording via selective erasure of microholograms from a pre-populated disc [53].

Media

Requirements of materials for holographic data storage (HDS)

Media for holographic storage have long been one of the primary focus points for researchers in this field. The storage density is a function of the number of holograms multiplexed into a volume of the recording medium, principally determined by its refractive index contrast and thickness. The recording and read-out transfer rates depend on the photosensitivity and the diffraction efficiency. If the sensitivity is very nonuniform, the transfer rate and exposures will vary significantly as more holograms are recorded. A schedule of changing exposures is needed to keep the diffraction efficiency of all the holograms approximately equal [12].
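One simple way to schedule exposures is to give every hologram an equal share of the medium's usable dynamic range under an assumed saturating exposure response. The sketch below implements this rule; the response model and numbers are assumptions for illustration, not the schedule used in any particular drive.

```python
import numpy as np

def exposure_schedule(n_holograms, total_exposure, e0=1.0, a_sat=1.0):
    """Exposure times for sequentially recorded holograms so that each one
    receives an equal share of the medium's grating strength.

    Assumed media response (a simple saturating model, not a fitted material
    curve): cumulative grating strength A(E) = a_sat * (1 - exp(-E / e0)).
    """
    a_total = a_sat * (1.0 - np.exp(-total_exposure / e0))        # usable dynamic range
    a_targets = a_total * np.arange(1, n_holograms + 1) / n_holograms
    # Invert A(E) at each cumulative target, then difference to get per-hologram exposure
    e_cumulative = -e0 * np.log(1.0 - a_targets / a_sat)
    return np.diff(np.concatenate(([0.0], e_cumulative)))

exposures = exposure_schedule(n_holograms=200, total_exposure=3.0)
print("first exposure:", exposures[0], "last exposure:", exposures[-1])
# Later holograms need longer exposures because the medium is partially consumed.
```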
Material properties such as dimensional stability, optical quality, scatter level, linearity and volatility affect the fidelity of the recording and read-out process. The dimensional stability of the material is reflected in the resistance of the material to shrinkage or expansion during the recording process and thermal changes. Most recording processes such as a photo-initiated polymerization result in some physical shrinkage and/or bulk index change of the material system used. If the shrinkage and bulk index changes are small enough, they can be compensated for by changing the wavelength and angle of incidence of the reference beam (if they are multiplexed with a plane wave reference). Temperature changes also cause the media to expand and contract. Humidity is another mechanism that can affect the media volume.
For commercial systems, the manufacturing process and usage must also fit the needs of the application. For example, archival storage demands long (≥50 years) lifetimes. Adequate shelf life is also required. Thermal or solvent-based post-recording processing (common for display materials) should be avoided. Finally, the manufacturing must be inexpensive.

Candidate material systems

Early efforts focused on photorefractive crystals. While valuable demonstration vehicles, photorefractive crystals suffer from limited refractive index contrast, slow writing times, and volatility. Attention soon turned to more commercially realistic alternatives such as photochromic, photoaddressable, and photopolymer materials where, similarly, optical interference patterns could be imaged as refractive index modulations [12].
The photorefractive effect in organic materials was first reported in 1990 [54], and their refractive index contrast can match that of inorganic materials. Although these materials can exhibit high refractive index contrast, their use for storage is limited by the high poling voltages required to induce charge diffusion. In addition, volatility remains an issue, with recorded materials typically stored in the dark to preserve the holograms. Fast writing is not possible, because the chromophore dynamics are on the order of seconds, even in systems with a low glass transition temperature Tg [55].
Photoaddressable systems were introduced in the 1990s by Bayer AG [56]. In these materials, holographic writing occurs through photoinduced birefringence. The materials typically consist of copolymer systems with alternating side groups of an isomerizing chromophore and a stabilizing mesogenic chain. These materials can be designed to exhibit high refractive index contrast and long-term, high-temperature stability, but they are limited for data storage applications by their low photosensitivity, high optical absorption and volatility [12].
Holographic recording in photochromic systems relies on direct absorption by chromophores in the material. These materials typically record the optical interference pattern as absorption changes within the material or as related phenomena such as modulations in birefringence resulting from absorption-induced isomerization of the chromophore [57]. However, as with any absorption-based material, issues such as low photosensitivity, volatility (at the recording wavelength), and high optical absorption limit their utility for HDS. Only General Electric (GE) claims to have developed a photochromic material that shows threshold behavior for microholographic storage applications. In 2008, GE Global Research published the first data on polymer discs doped with photochromic dyes, which showed a threshold behavior with the energy dose [58]. With an NA of 0.2, pits 1.5 μm apart could be resolved along the track in a quasi-static experiment. The depth of the pits was reported as approximately 12.9 μm full width at half maximum (FWHM), and the minimum dose required to write was 0.5 µJ. In 2009, the same group announced a 100-fold higher reflectivity of such microholograms, which are supposed to be readable by Blu-ray optics [59]. The materials were written at a wavelength of 405 nm. A possible data capacity of 500 GB on a 12 cm disc has been extrapolated.
Photopolymers are very interesting as optically sensitive recording media due to the fact that they are inexpensive, self-processing materials with the ability to capture low-loss, high-fidelity volume recordings of 3D illuminating patterns. While many materials have been tried over the past 40 years, so far only photopolymers have been shown to be commercially viable materials for HDS.

Photopolymer materials

Holographic photopolymers were first described in 1969 as a mixture of acrylic monomers and a photoinitiator [60]. Typically, photopolymers are composed of three components: a photoinitiator, one or more monomers, and a polymeric binder. During holographic recording, polymerization is induced at the light intensity maxima of the interference pattern while no polymerization occurs in the nulls. Unpolymerized species diffuse from the nulls to the maxima of the interference pattern to equalize their concentration in the recording area, creating a refractive index modulation set up by the difference between the refractive indices of the photosensitive component and the host material [12,61]. On the other hand, the polymerization changes the dimensions and the bulk refractive index of the material. The design of photopolymer media for holographic storage applications must balance the advantages of photopolymers against the impact of the changes that accompany polymerization. Figure 5 illustrates the simple one-dimensional (1D) grating formation process in such a photopolymer material [62]. It can be seen from the schematic that polymerization occurs most strongly in the bright regions of high exposure under the sinusoidal illuminating intensity pattern. As monomer is consumed in these regions by polymerization, a monomer concentration gradient is created. Excess monomer then diffuses from the weakly illuminated regions into the brighter regions to eliminate the resulting concentration gradient, as illustrated in Fig. 5(c). As a result, a sinusoidal polymer concentration distribution is formed. Assuming all the monomer is converted to polymer by the end of the recording process, the spatial variation in the permittivity of the material will be related to the polymer concentration distribution. Thus a permanent modulation of the layer permittivity is generated; that is, a volume refractive index grating is produced.
Fig.5 Grating formation in photopolymer (After Ref. [62]). (a) Sinusoidal illuminating intensity distribution at the plate; (b) photopolymer layer before recording; (c) photopolymer layer during recording

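The monomer-diffusion picture of Fig. 5 can be reproduced with a toy one-dimensional reaction-diffusion model in which monomer is consumed at a rate proportional to the local intensity and diffuses down its concentration gradient, while the resulting polymer stays where it forms. The rate constant, diffusion coefficient and grating period below are arbitrary assumptions chosen only to make the mechanism visible.

```python
import numpy as np

# Toy 1D reaction-diffusion model of grating formation (illustrative constants only)
nx = 100                                   # samples across one grating period
period = 1.0e-6                            # assumed grating period (m)
dx = period / nx
x = np.arange(nx) * dx

intensity = 0.5 * (1.0 + np.cos(2 * np.pi * x / period))   # sinusoidal exposure
monomer = np.ones(nx)                      # normalized monomer concentration
polymer = np.zeros(nx)

k_poly = 0.3                               # assumed polymerization rate (1/s at unit intensity)
diff = 1.0e-14                             # assumed monomer diffusion coefficient (m^2/s)
dt = 1.0e-3                                # time step (s); diff*dt/dx^2 = 0.1, stable
steps = 30000                              # roughly 30 s of exposure

for _ in range(steps):
    rate = k_poly * intensity * monomer                      # local polymerization rate
    laplacian = (np.roll(monomer, 1) - 2 * monomer + np.roll(monomer, -1)) / dx**2
    monomer += dt * (diff * laplacian - rate)                # diffusion minus consumption
    polymer += dt * rate                                     # polymer stays where it forms

# Index modulation assumed proportional to the polymer distribution
delta_n = polymer - polymer.mean()
print("normalized polymer modulation amplitude: %.3f" % delta_n.max())
```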

Various types of photopolymer for use with a range of read/write technologies have been developed. Increased material sensitivities, resolutions, diffraction efficiencies, and material stabilities have been reported [ 62]. Such improvements include: 1) two-photon absorption techniques to store and retrieve information in the volume of the material layer [ 63]; 2) the introduction of nanoparticles to reduce polymerisation shrinkage [ 64]; 3) the introduction of thermally curable matrix networks suitable for mass production [ 65, 66]; 4) the introduction of chain transfer agents to improve the high spatial frequency performance of the materials [ 67, 68]; 5) the introduction of quantum dots to improve the materials’ refractive index modulation and fluorescence [ 63, 69]; 6) the introduction of polymethylmethacrylate (PMMA) based photopolymer materials to increase the thermal stability [ 70]. Clearly many promising materials, providing high storage capacities, are being developed for HDS media.
There have been two main approaches to designing photopolymer systems for HDS systems. One approach, pioneered by Polaroid and then later Aprilis Inc., uses a low-shrinkage chemistry known as a cationic ring-opening polymerization (CROP) to control the dimensional changes that occur during holographic recording [ 71]. These materials consist of a photoacid generator, sensitizing dye, CROP monomers and binder [ 72]. The CROP monomers typically have a functionality of greater than two and are based on cyclohexene oxides. In this strategy, a pre-imaging exposure is used to increase the viscosity of the material to prepare for holographic recording. The CROP imaging chemistry is also tuned for high photosensitivity.
The other approach, introduced by Bell Laboratories, Lucent Technologies, and InPhase Technologies, is known as a two-chemistry approach [ 73]. The material is composed of two independently polymerizable systems; one system reacts to form a 3D cross-linked polymer matrix in the presence of the second photopolymerizable monomer system (Fig. 6). The photopolymerizable monomer is the imaging component, as it reacts during holographic recording. This strategy produces high-performance recording media as a result of several important attributes. The matrix is formed in situ, which allows thick and optically flat formats to be formed. The 3D cross-linked nature of the polymer matrix creates a mechanically robust and stable medium. The matrix and photopolymerizable monomer system are chosen to be compatible in order to yield media with low levels of light scatter. The independence of the matrix and monomer systems avoids cross-reactions between the two that can dilute the refractive index contrast due to premature polymerization of the imaging component. Recent work [ 64, 74] has indicated the benefits of using nanoparticle dispersed monomer (acrylate) systems. In this approach, materials such as TiO2 and ZrO2 nanoparticles are incorporated into the photopolymer formulation providing both increases in dynamic range and decreases in recording-induced shrinkage. The nanoparticles counter-diffuse with respect to the monomer system providing enhancements in performance. One challenge in using these materials is the level of scatter caused by the particles. Efforts to reduce the scatter have focused on the use of ZrO2 rather than TiO2 materials.
Fig.6 Writing mechanism of two-chemistry approach


Both types of materials have been used to demonstrate digital HDS [ 75, 76]. Recent work on the CROP materials has focused on fingerprint imaging and correlators rather than data storage applications [ 77]. The two-chemistry materials were the basis of InPhase’s Tapestry media and the most recent efforts were by InPhase Technologies between 2001 and 2009 resulting in 52 functioning prototypes capable of 300 GB/disk, 20 MB/s transfer rates and 50 years lifetime [ 78]. Because dynamic aperture multiplexing increases the number of multiplexed holograms, a recording medium with increased dynamic range will be required. A new company, named Akonia Holographics LLC, has already developed such a medium and is currently testing and refining the formulation [ 78]. The DRED formulation (for dynamic range enhancing dopant) represents a major advance over the two chemistry recording medium developed by Bell Laboratories and InPhase. The DRED formulation improves on the two chemistry approach by partially recombining the two chemistries. Instead of merely being entangled, the photopolymer chains are provided bonding sites to attach to the matrix covalently. In addition to dramatically increasing dynamic range, DRED technology will improve the already long archival life of holographic recordings [ 79].

Rewritable photopolymer materials

InPhase also developed rewritable holographic media based on the architecture of the “two-chemistry” materials used for its write-once media [ 12]. In these preliminary systems, over 300 plane wave holographic cycles have been demonstrated. Digital pages were multiplexed and cycling of entire books of holograms was demonstrated with extremely high fidelity digital recovery. Environmental robustness was demonstrated as the system was cycled from atmospheric conditions to 80°C, 95% relative humidity, maintaining its holographic performance. The largest research and development problem that remains is to improve the fatigue resistance of the reversible systems.

Data channel

Typical data storage devices record one bit at a time onto the surface of a disk. In HDS, typically a million or more bits are recorded and recovered at the same time from the volume of the media. This requires a very different system and therefore a data channel very different from those used in other storage devices. Many problems exist when reading data in an HDS system, such as two-dimensional (2D) inter-symbol interference (ISI), inter-page interference (IPI), misalignment of read data pages, isolated pixel patterns, and diminishing light intensity at the edge of the page. Specifically, ISI and IPI increase the noise of the output and limit the storage capacity of a holographic storage system because each page and each symbol are affected by other pages and symbols, respectively. Moreover, a single pixel can carry more than one bit when multi-level recording is used. As a result, HDS is prone to various errors. Many researchers have studied ways of overcoming the noise associated with HDS systems, such as error correction codes (ECCs), modulation codes, equalization methods, and detection methods [80]. Figure 7 shows a diagram of the data channel. First the data bit stream has ECC applied to it and is then arranged into a block of data which we call a page. During the formatting of a page, modulation coding and other page features are added. It is this page that is presented to the optical system by an SLM (see Fig. 2). The SLM typically modulates the light with a dark (zeros) or bright (ones) pixel pattern that corresponds to the data page. This optical pattern is low-pass filtered and stored in the media. It is then read out from the media and detected on a camera sensor. This pattern must be detected and converted back into the original data bits with a BER of less than 10^-12 for consumer devices and 10^-18 for professional or enterprise storage devices [12].
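The write/read chain of Fig. 7 can be mimicked end to end with a toy example: a trivial repetition "ECC", arrangement of the coded bits into a page, an assumed additive-noise readout, threshold detection and decoding. Real systems use much stronger codes (e.g., RS or LDPC) and much larger pages; every parameter here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- write side ---------------------------------------------------------------
data_bits = rng.integers(0, 2, size=1024)              # user data (assumed size)
ecc_bits = np.repeat(data_bits, 3)                      # toy 3x repetition "ECC"
side = int(np.ceil(np.sqrt(ecc_bits.size)))             # arrange coded bits into a square page
page = np.zeros(side * side)
page[:ecc_bits.size] = ecc_bits
page = page.reshape(side, side)                         # SLM pattern: 0 = dark, 1 = bright

# --- holographic channel (very crude stand-in) ---------------------------------
sigma = 0.3                                             # assumed read-out noise level
detected = page + sigma * rng.standard_normal(page.shape)

# --- read side ------------------------------------------------------------------
hard = (detected.ravel()[:ecc_bits.size] > 0.5).astype(int)   # threshold detection
decoded = (hard.reshape(-1, 3).sum(axis=1) >= 2).astype(int)  # majority vote

print("raw pixel error rate   :", np.mean(hard != ecc_bits))
print("post-ECC bit error rate:", np.mean(decoded != data_bits))
```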
Fig.7 Overview of holographic data storage (HDS) data channel


Channel modeling and equalization

The signal detected at the CCD is an intensity, which must be transformed back into digital data. The first step toward this task is channel equalization, to undo the effects of distortion. To equalize the channel distortion, it is beneficial to have a channel model that accurately represents the recording and reading mechanism. Several models for holographic channels have been proposed.
Heanue et al. developed a model for the dominant optical scattering noise (Rician noise) [81]. This assumption holds for data storage systems with thick holographic recording media that produce more optical scattering noise and large diffraction power dominating the detector noise. Vadde et al. proposed two different channel models (the magnitude model and the intensity model) for a pixel-matched HDS system that employs the 4-focal-length architecture [82]. Their results suggest that the magnitude model leads to better performance when the fill factors are small, whereas the intensity model appears to be more appropriate for the high-fill-factor cases. The magnitude model, when suitable, appears to provide a storage density improvement of as much as 65%, whereas the intensity model seems capable of providing as much as a 15% density gain through deconvolution. However, those models do not provide any functional or mathematical relationship to the physical model; i.e., they are not derived from the physical model. Heanue et al. developed a model for translation, but they assumed only a linear channel [83]. The physical model developed by Chugg et al. considers only the aperture as a source of ISI, and the model requires a four-dimensional kernel, which makes it too complex [84]. Keskinoz et al. proposed the discrete magnitude-squared channel (DMSC) model by considering the quadratic nonlinearity in holographic systems [85]. While the physical models above are useful for better understanding of the channel and making trade-offs, they are rather slow. It is useful to be able to quickly generate a lot of data that are similar to what comes out of the actual channel. InPhase has provided ECC code developers with a simpler model so they could test the performance of their codes [12]. They started doing this with a version of the amplitude model.
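In the spirit of the magnitude-squared models discussed above, a quick way to generate synthetic page data is to blur the page amplitude with a small point-spread function and detect its squared magnitude plus noise. The sketch below is only a crude stand-in for the published models (it is not the DMSC model itself); the PSF and noise levels are assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(2)

# Binary amplitude data page (assumed 64 x 64 pixels)
page = rng.integers(0, 2, size=(64, 64)).astype(float)

# Small low-pass point-spread function representing band-limited optics (assumed)
psf = np.array([[0.05, 0.1, 0.05],
                [0.10, 0.4, 0.10],
                [0.05, 0.1, 0.05]])
psf /= psf.sum()

# Magnitude-squared (intensity) channel: the camera detects |h * x|^2 plus noise
amplitude = convolve2d(page, psf, mode="same", boundary="symm")
optical_noise = 0.02 * rng.standard_normal(page.shape)      # assumed coherent noise on amplitude
electronic_noise = 0.01 * rng.standard_normal(page.shape)   # assumed detector noise on intensity
detected = (amplitude + optical_noise) ** 2 + electronic_noise

print("detected intensity range:", detected.min(), detected.max())
```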
The aim of equalization is to reduce the effect of ISI. For linear systems, linear equalizers can be designed that use different criteria. For example, in zero forcing equalization, the aim is to obtain zero ISI; i.e., the overall desired channel response (including the equalizer) is a Dirac delta function. However, zero forcing equalization tends to amplify noise because this strict zero-ISI constraint causes the equalizer to amplify high frequencies. Another class of linear equalizer, called linear minimum mean-squared error (LMMSE) equalizers, attempts to minimize the effect of ISI by taking both noise and input data statistics into account and thereby not amplifying the noise as much as the zero forcing equalization. Keskinoz et al. introduced an advanced equalization method called the iterative magnitude-squared decision feedback equalization (IMSDFE), which takes the channel nonlinearity into account [ 85]. The performance of IMSDFE is quantified for optical-noise-dominated channels as well as for electronic-noise-dominated channels. Results indicate that IMSDFE is a good candidate for a high-density, high-intersignal-interference HDS channel. Chugg et al. developed minimum mean squared error (MMSE) equalizers for 2D finite contrast space-invariant ISI channels and studied the improvement of storage densities after equalization [ 84]. The equalization is shown to facilitate a capacity increase for HDS systems, providing a 47% increase in the number of stored pages and the storage density for a system operating at the Rayleigh resolution. The maximum storage density in HDS system is increased by 20% through the use of the equalization [ 84].
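For a purely linear blur model, the LMMSE equalizer has the familiar closed form of a Wiener filter in the 2D frequency domain, W = H*/(|H|^2 + sigma^2/Sx). The sketch below applies it to a blurred, noisy binary page to illustrate the trade-off discussed above; the PSF and noise variance are assumptions, and no claim is made that this matches any specific published HDS equalizer.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 64
page = rng.integers(0, 2, size=(n, n)).astype(float)

# Assumed blur (inter-symbol interference) and additive noise
psf = np.zeros((n, n))
psf[:3, :3] = np.array([[0.05, 0.1, 0.05],
                        [0.10, 0.4, 0.10],
                        [0.05, 0.1, 0.05]])
psf = np.roll(psf, (-1, -1), axis=(0, 1))          # center the 3x3 kernel at (0, 0)
sigma2 = 0.01                                      # assumed noise variance

H = np.fft.fft2(psf)
received = np.real(np.fft.ifft2(np.fft.fft2(page) * H)) \
           + np.sqrt(sigma2) * rng.standard_normal((n, n))

# LMMSE (Wiener) equalizer: W = conj(H) / (|H|^2 + sigma^2 / S_x),
# with S_x the (assumed flat) power spectral density of the data page
s_x = np.var(page)
W = np.conj(H) / (np.abs(H) ** 2 + sigma2 / s_x)
equalized = np.real(np.fft.ifft2(np.fft.fft2(received) * W))

def ber(estimate):
    # Simple threshold detection followed by comparison with the true page
    return np.mean((estimate > 0.5).astype(int) != page)

print("BER before equalization:", ber(received))
print("BER after LMMSE equalization:", ber(equalized))
```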
Recently, Kim et al. proposed new 2D nonlinear equalizers developed on the basis of a bilinear recursive polynomial (BRP) model for HDS systems [ 86]. The 2D BRP equalizer (BRPE) and the 2D BRP decision feedback equalizer (BRPDFE) were proposed. The schemes have a better BER performance in quadratic HDS channels than the conventional adaptive DFE and the linear equalizer. Therefore, the BRPE and BRPDFE are regarded as good alternative nonlinear equalizers for HDS systems.

Signal detection

Threshold detection and partial response maximum likelihood (PRML) detection

Signal detection is the next step after modeling and equalization. The goal of a signal detection algorithm is to recover information bits from the detected and equalized signal values.
In threshold detection, the input pixels take values from an M-ary alphabet corresponding to the different gray-level pixel values. Based on the probability distribution of the received intensity as a function of the gray level, optimum threshold levels are chosen for decoding the detected pixel values by minimizing the probability of bit errors. In cases where combined noise effects occur as a result of scattering and detector electronics, deciding optimum thresholds is often difficult. When spatial variations in intensity occur, threshold detection can perform poorly. In such cases, modulation codes can be beneficial to facilitate threshold detection [87]. If the noise is independent from pixel to pixel and the SNR exhibits a 1/N^2 dependence on the number of stored holograms, binary threshold detection provides better BER performance than gray-scale threshold techniques or differential techniques at the same capacity [81]. The optimal performance of the threshold technique requires accurate setting of the signal-dependent threshold. If the threshold cannot be reliably determined, either a priori or dynamically, differential techniques are preferred. At the cost of added complexity and possibly slower transfer rates, sequence-based detection schemes may enhance BER performance, particularly when nonlinear effects due to the holographic recording process or to ISI are significant. Chen et al. proposed a structural similarity method to improve on the thresholding method in HDS systems [88]. The structural similarity method exploits the high sensitivity of human vision to image structure, and this recognition method can therefore give a better BER than the thresholding method; the high sensitivity to image structure increases the chance of a successful match in the image comparison. The results show that the recognition method reduces the BER by 26% compared with the thresholding method. Chen and Chiueh developed a method to recover gray-scale data pixels of images that have undergone interpixel interference in HDS systems [89]. The adopted algorithm, called the turbo receiver using interference-aware dual-list (TRIDL) detection, enjoys the benefits of low error rate and low complexity.
In maximum likelihood detection, information bits are sequentially decoded based on a sequence of observations. 2D Viterbi detection with decision feedback equalization (DF-VA) was studied in Ref. [83]. The detection method employs Viterbi detection in one dimension and decision feedback in the other. The detector operates row by row and is therefore compatible with the manner in which data are typically output from a CCD array. The incorporation of feedback improves the BER performance and reduces the complexity of previously proposed Viterbi detection methods for 2D systems. The Viterbi detection schemes enable looser tolerances on the optical system, resulting in either less expensive designs or the ability to increase storage capacity by the use of larger data pages. This scheme works well as long as there is a constant lateral misalignment in two dimensions. However, the DF-VA technique cannot accommodate time-varying changes resulting from rotational misalignments and nonuniform material shrinkage. More powerful techniques and generalizations are needed for handling time-varying channel effects. Error propagation is another limitation of this technique: any error made in the decisions in the previous rows will affect the decisions in decoding the current row. Such error propagation can be catastrophic, leading to a large number of decoding errors, especially at low SNRs.

Kong and Choi proposed an effective 2D PRML detection scheme for HDS systems [90]. The proposed scheme adopts a simplified trellis diagram, uses a priori information, and detects the data in two directions, distinguishing it from previously proposed detection schemes. The simplified trellis diagram, which has 4 states and 8 branches, yields a dramatic complexity reduction, but a simplified 2D PRML detector on its own shows serious performance degradation in high-density HDS channels. To prevent this degradation, the proposed detector uses a priori information to give higher reliability to the branch metric. Furthermore, the proposed scheme detects the data in the vertical and horizontal directions to fully utilize the characteristics of channel detection with a 2D partial response target. By effectively combining these three techniques, the proposed scheme achieves, with a simple structure, more than 2 dB of gain compared with conventional detection schemes. The previously modified 2D soft output Viterbi algorithm (SOVA) for HDS consists of four 1D SOVAs operating in accordance with two different 1D partial response (PR) targets (vertical and horizontal directions). Recently, Koo et al. proposed a detection method in structural accordance with a 2D PR target through the use of a modified 2D SOVA [91]. The proposed method shows superior BER performance owing to the use of a 2D PR target, which contains more information than a 1D PR target. Moreover, a reduction in power consumption, computational complexity, and system-on-chip size for an HDS system can be expected as a result of using only one 2D equalizer. In a practical HDS system, the reconstruction process for a data page should account for the processing time as well as the BER performance. To improve both aspects, they introduced a 2D PRML scheme composed of a 2D PR target including diagonal elements and a 2D SOVA with a variable reliability factor [92]. The 2D SOVA performs two 1D SOVAs in structural accordance with the 2D PR target, where the extrinsic information uses the expected value calculated on a synchronization pattern. Finally, the 2D SOVA exports a weighted average using the reliability factor, which is updated for each page in a manner similar to an optimization scheme. The simulation results show that the proposed method has superior BER performance, despite using only two 1D SOVAs, as compared with the modified 2D SOVA composed of four 1D SOVAs.

Misalignment-compensation scheme

High storage densities require high-NA lenses operating near the diffraction limit over a wide field. A pixel-matched system, where each SLM pixel is imaged onto a single camera pixel, is conceptually simple but difficult to achieve in practice because of optical distortions and mechanical misalignments (see Fig. 8) [93]. Compensating for these errors with dynamic image alignment and magnification using microactuators leads to prohibitively large, expensive and slow storage systems. For all these reasons, pixel-matched systems are not commercially viable. Chen et al. presented an efficient solution for recovering data pixels of images that have undergone optical and electrical channel impairments in HDS systems [94]. The proposed misalignment-compensation scheme, consisting of realignment and rate conversion, can effectively eliminate misalignment with more than an 84% reduction in additions and a 74% reduction in multiplications. In addition, several low-complexity techniques were introduced to reduce the complexity of a 2D maximum a posteriori pixel detection method by up to 95%, with negligible degradation in detection performance. Gu et al. developed a method based on a three-pixel model, which can be used to compensate arbitrarily misaligned data pages [95]. The compensation method uses prior information about the pixels on the input SLM, and recursive solutions are carried out to recover the real values of the SLM pixels. Both simulation and experimental results show that the SNR can be approximately doubled.
Fig.8 Simulated data pixel image neighborhood (real part of complex amplitude) (After Ref. [93])


Burr proposed an alternative to pixel matching that introduces a deliberate image magnification error into a system operating near the pixel-matched imaging condition, thus implementing sub-Nyquist-rate oversampling [93]. Such a configuration has the advantages of maintaining a low detector bandwidth while shifting the burden from preventing detector misalignment to measuring detector alignment. Ayres et al. developed a sub-Nyquist oversampling methodology that can recover arbitrarily aligned and distorted megapixel data page images with pixel-matched fidelity using fewer than double the number of detector pixels [96]. The method consists of a novel alignment measurement technique that uses covariance calculations to accurately locate pseudorandom fiducial marks, and a linear resampling filter with coefficients that vary according to the local pixel alignment. A laboratory demonstration was performed with an oversampling ratio of 4/3. Stacks of holograms were written and then read out with both a pixel-matched detector and an oversampled detector. Under nominal conditions, the performance of oversampled detection was equal to that of pixel-matched detection. In increasingly marginal conditions (created by lowering the diffraction efficiency of the holograms), oversampled detection actually degraded more slowly than pixel-matched detection.
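The core of oversampled detection is that the camera grid need not coincide with the SLM grid: the page is captured at a slightly higher sampling rate and then resampled onto the logical pixel grid using the measured alignment. The sketch below demonstrates only the resampling step, with a known (assumed) shift and magnification and plain bilinear interpolation; the published method additionally measures alignment from fiducial marks and uses an optimized resampling filter.

```python
import numpy as np
from scipy.ndimage import map_coordinates

rng = np.random.default_rng(4)

# Logical SLM page and an "oversampled" camera image of it
page = rng.integers(0, 2, size=(64, 64)).astype(float)
oversample = 4.0 / 3.0                     # assumed oversampling ratio (cf. the 4/3 demonstration)
shift = (0.37, -0.62)                      # assumed sub-pixel misalignment, in SLM pixels

# Simulate the camera: sample the page on a finer, shifted grid (bilinear optics stand-in)
cam_n = int(np.ceil(page.shape[0] * oversample))
u = np.arange(cam_n) / oversample + shift[0]
v = np.arange(cam_n) / oversample + shift[1]
U, V = np.meshgrid(u, v, indexing="ij")
camera = map_coordinates(page, [U, V], order=1, mode="nearest")
camera += 0.05 * rng.standard_normal(camera.shape)       # assumed detector noise

# Resample the camera image back onto the logical pixel grid using the
# (here known, in practice measured) shift and magnification
i = np.arange(page.shape[0], dtype=float)
I, J = np.meshgrid(i, i, indexing="ij")
recovered = map_coordinates(camera,
                            [(I - shift[0]) * oversample, (J - shift[1]) * oversample],
                            order=1, mode="nearest")

print("pixel error rate after resampling:", np.mean((recovered > 0.5).astype(int) != page))
```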

Homodyne detection

Coherent detection techniques have been used in communications channels for over a century. Coherent detection enables the reception of signals modulated in phase and frequency, and generally improves the reception of signals modulated in amplitude when compared with direct detection. Coherent detection is performed by combining the signal with a reference signal, called the local oscillator, and then detecting the intensity of the mixed signal. Generally, the local oscillator may differ in frequency from the signal carrier, and the process is known as heterodyne detection [ 12]. Homodyne detection is the method of blending a coherent reference field with a signal and detecting the interference pattern between the two. This has the effect of amplifying the signal, eliminating nonlinear effects of coherent noise, and allowing the detection of phase as well as amplitude. Homodyne detection normally requires careful phase control of the reference field and the signal, requiring complex adaptive optics and/or phase servo loops. Akonia Holographics LLC has instead developed a novel algorithm that allows homodyne detection to be performed simply by combining two blended signals with phase changed by 90°. Akonia estimates this will boost SNR enough to approximately double storage density [ 78, 97].
The ability to detect the phase of a hologram presents another opportunity to increase storage density. A second hologram can be recorded with each reference beam (e.g., two holograms at each reference beam angle for angle multiplexing), and the holograms will not cross talk provided they have a 90° difference in phase. Thus, the technique of phase quadrature holographic multiplexing (PQHM) provides yet another doubling of storage density, and opens the door to other advanced channel techniques [ 12, 78]. Together, homodyne detection and PQHM will boost transfer rates by factors of 4 and 10 for writing and reading, respectively.
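The quadrature homodyne idea can be sketched as follows: the camera records the intensity of the weak reconstructed signal field mixed with a strong local oscillator at two local-oscillator phases 90° apart; removing the constant terms and scaling recovers the real and imaginary parts of the signal field, so two pages stored in phase quadrature can be separated. The model below is idealized (perfectly uniform local oscillator, no hologram physics) and all amplitudes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two binary pages stored in phase quadrature (PQHM): one on the real (0 deg)
# axis and one on the imaginary (90 deg) axis of the reconstructed signal field.
page_i = rng.integers(0, 2, size=(64, 64)).astype(float)
page_q = rng.integers(0, 2, size=(64, 64)).astype(float)
signal = 0.1 * (page_i + 1j * page_q)          # weak reconstructed field (assumed amplitude)
lo = 1.0                                       # strong local-oscillator amplitude (assumed)

# Camera frames with the local oscillator at 0 and 90 degrees (plus detector noise)
frame_0 = np.abs(signal + lo) ** 2 + 0.005 * rng.standard_normal(signal.shape)
frame_90 = np.abs(signal + 1j * lo) ** 2 + 0.005 * rng.standard_normal(signal.shape)

# |s + L|^2   = |s|^2 + L^2 + 2*L*Re(s)
# |s + i*L|^2 = |s|^2 + L^2 + 2*L*Im(s)
# Removing the (nearly constant) background and scaling recovers Re(s) and Im(s).
re_part = (frame_0 - frame_0.mean()) / (2 * lo)
im_part = (frame_90 - frame_90.mean()) / (2 * lo)

detected_i = (re_part > re_part.mean()).astype(int)
detected_q = (im_part > im_part.mean()).astype(int)

print("in-phase page errors  :", np.mean(detected_i != page_i))
print("quadrature page errors:", np.mean(detected_q != page_q))
```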

Channel codes

Channel codes can be classified into two categories, namely, modulation codes and ECCs. Modulation codes are used to combat ISI and facilitate detection. ECCs introduce controlled redundancy across data pages to identify and rectify bit errors.

Modulation codes

The role of modulation codes is to shape the coded data according to the channel characteristics so that the data are less prone to errors. Choosing an appropriate modulation code facilitates detection and decoding. Threshold detection performs poorly when there are frequent spatial variations in the data pages. An alternative to threshold decoding is to use balanced arrays (dc-free array codes) together with a simple detection scheme. Since pixel intensity does not vary much within a small local neighborhood of the detector array, coding data patterns using balanced arrays facilitates simple detection algorithms. For instance, when reading N binary coded pixels, the N/2 pixels with the highest intensity can be declared "1" and the rest zeros [87]. The real challenge is to design asymptotically unity-rate 2D balanced codes, i.e., codewords that have the least amount of redundancy. Algorithms for constructing such balanced arrays are reported in Ref. [87].
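Sorting-based detection of a balanced block is easy to state in code: within each block of N pixels, declare the N/2 brightest pixels to be ones. The sketch below applies this rule to noisy blocks with a slowly varying brightness offset, where a single fixed threshold fails; the block size, offset and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def encode_balanced_blocks(n_blocks, block_size=8):
    """Generate random balanced blocks: exactly half of the pixels are ones."""
    blocks = np.zeros((n_blocks, block_size), dtype=int)
    for b in blocks:
        b[rng.choice(block_size, size=block_size // 2, replace=False)] = 1
    return blocks

blocks = encode_balanced_blocks(1000)

# Read-out model: additive noise plus a slowly varying brightness offset across
# the page (assumed), which defeats any single fixed threshold.
offset = np.linspace(0.0, 0.8, blocks.shape[0])[:, None]
received = blocks + offset + 0.2 * rng.standard_normal(blocks.shape)

# Sorting detection: the N/2 brightest pixels in each block are declared "1"
order = np.argsort(received, axis=1)
detected = np.zeros_like(blocks)
np.put_along_axis(detected, order[:, blocks.shape[1] // 2:], 1, axis=1)

# Compare with a naive fixed threshold at 0.5
fixed = (received > 0.5).astype(int)
print("sorting detection BER :", np.mean(detected != blocks))
print("fixed threshold BER   :", np.mean(fixed != blocks))
```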
Sometimes, we need codes for shaping the power spectrum of the channel data. Such codes are called spectral shaping codes. Low-pass filtering codes [ 98] (codes that eliminate patterns with rapidly changing ones and zeros, i.e., having high spatial frequency) and spectral null codes [ 87] (codes that exhibit a null at zero frequency) are examples of spectral shaping codes. Compensating for IPI and ensuring timing recovery in 2D detectors is an important application of modulation codes like runlength-limited codes [ 99] and checker-board codes. Constructing efficient 2D run length limited (RLL) codes is a challenging problem [ 100].
There are two major concerns in HDS systems: 2D ISI and IPI. Existing PRML detection can partly mitigate ISI; however, if the channel is subject to misalignment, the performance of PRML detection degrades. This problem can be remedied by a modulation code. Park et al. proposed a 6/9 four-ary modulation code for four-level HDS that mitigates ISI [ 80]. Kim and Lee proposed a simple 2/3 modulation code for multi-level HDS [ 101]. Their simulation results confirmed that the BER and symbol error rate (SER) of the proposed 2/3 modulation code are better than those of random sequences at high SNR, and that the majority of the interference can be avoided.
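The structure of such block modulation codes is easy to convey as a lookup table. The codebook below is hypothetical and chosen only for illustration (the published 2/3 and 6/9 codebooks in Refs. [80,101] differ): every 2 input bits map to a 3-symbol codeword whose middle symbol never differs from both of its neighbors within the codeword.

# Hypothetical rate-2/3 block modulation code (illustration only):
# each 3-symbol codeword avoids a middle symbol that differs from both neighbors.
ENCODE = {
    (0, 0): (0, 0, 1),
    (0, 1): (0, 1, 1),
    (1, 0): (1, 0, 0),
    (1, 1): (1, 1, 0),
}
DECODE = {v: k for k, v in ENCODE.items()}

def encode(bits):
    assert len(bits) % 2 == 0
    out = []
    for i in range(0, len(bits), 2):
        out.extend(ENCODE[tuple(bits[i:i + 2])])
    return out

def decode(symbols):
    out = []
    for i in range(0, len(symbols), 3):
        out.extend(DECODE[tuple(symbols[i:i + 3])])
    return out

data = [1, 0, 0, 1]
coded = encode(data)
print(coded)                  # [1, 0, 0, 0, 1, 1]
print(decode(coded) == data)  # True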
Other coding schemes studied for HDS include low-density parity-check (LDPC) codes and trellis-based error-correcting modulation codes [ 102, 103]. LDPC codes, first invented in the early 1960s by Gallager [ 104], were rediscovered in the late 1990s and shown to form a class of codes that approach the Shannon limit [ 105]. Pishro-Nik et al. showed that a carefully designed irregular LDPC code performs very well in HDS systems [ 102]. They optimized high-rate LDPC codes for the nonuniform error pattern in HDS to reduce the BER substantially, using prior knowledge of the noise distribution both for code design and for decoding. They showed that this technique can increase the maximum storage capacity of HDS by more than 50% when irregular LDPC codes with soft-decision decoding are used instead of the conventionally employed Reed–Solomon (RS) codes with hard-decision decoding; the performance of these LDPC codes is close to the information-theoretic capacity. Yoon et al. developed and evaluated an effective bit-likelihood mapping method for LDPC decoding [ 106]. An additional step using feedback information was added to conventional iterative LDPC decoding: the feedback accelerates the convergence of the iterative decoding and improves the error performance for the same number of decoding iterations.
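The flavor of iterative LDPC decoding can be conveyed with a minimal hard-decision bit-flipping decoder, Gallager's simplest variant, shown here on a toy parity-check matrix rather than an optimized irregular code.

import numpy as np

# Small regular parity-check matrix: every bit participates in exactly two
# checks (real LDPC codes are much larger and sparser, but iterate the same way).
H = np.array([[1, 1, 1, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])

def bit_flip_decode(received, H, max_iters=20):
    """Hard-decision bit flipping: repeatedly flip the bit that participates
    in the largest number of unsatisfied parity checks."""
    word = received.copy()
    for _ in range(max_iters):
        syndrome = H @ word % 2
        if not syndrome.any():
            break                                   # all checks satisfied
        unsatisfied = H[syndrome == 1].sum(axis=0)  # failing checks per bit
        word[np.argmax(unsatisfied)] ^= 1
    return word

received = np.array([0, 0, 0, 0, 1, 0])  # all-zero codeword with one bit error
print(bit_flip_decode(received, H))      # recovers [0 0 0 0 0 0]

Practical HDS decoders instead use soft-decision message passing (belief propagation) over much larger sparse matrices, which is where the capacity gains reported above come from.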
Trellis modulation (also known as trellis-coded modulation, or simply TCM) is a modulation scheme that allows highly efficient transmission of information over band-limited channels. Trellis modulation was invented by Ungerboeck at IBM in the 1970s and first described in a conference paper in 1976, but it went largely unnoticed until he published a detailed exposition in 1982 that achieved sudden widespread recognition [ 107]. Kim et al. proposed 4/6 modulation codes that achieve coding gain using trellis modulation schemes [ 108]. The proposed codes not only remove the fatal 2D ISI patterns but also have error-correcting capability, and they show better performance than the conventional 6/8 and 4/6 modulation codes. Kim et al. adopted TCM to obtain good error-correcting capability without loss of data rate in HDS systems [ 109]. To overcome the data-rate loss caused by a rate-1/2 convolutional code, they extended the 3/4 modulation code set into a higher-order modulation and mapping method that maximizes the free distance on the trellis, and calculated the free distances for each constraint length. Their simulation results show that the proposed scheme achieves about 4 dB of coding gain over the uncoded 2/4 modulation code reference system at the same data rate.
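TCM couples a convolutional code with the symbol mapping and decodes both jointly on a trellis. The Viterbi search at its core can be sketched for a toy rate-1/2, constraint-length-3 convolutional code; hard-decision decoding is shown for brevity, whereas TCM would use Euclidean branch metrics over the modulation symbols.

# Rate-1/2, constraint-length-3 convolutional code with generators (7, 5) in octal.
G = [(1, 1, 1), (1, 0, 1)]

def conv_encode(bits):
    state = (0, 0)
    out = []
    for b in bits:
        regs = (b,) + state
        out += [sum(g * r for g, r in zip(gen, regs)) % 2 for gen in G]
        state = (b, state[0])
    return out

def viterbi_decode(received, nbits):
    """Hard-decision Viterbi decoding over the 4-state trellis."""
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    INF = float("inf")
    metric = {s: (0 if s == (0, 0) else INF) for s in states}
    paths = {s: [] for s in states}
    for t in range(nbits):
        r = received[2 * t: 2 * t + 2]
        new_metric = {s: INF for s in states}
        new_paths = {}
        for s in states:
            if metric[s] == INF:
                continue
            for b in (0, 1):
                regs = (b,) + s
                expect = [sum(g * x for g, x in zip(gen, regs)) % 2 for gen in G]
                dist = sum(e != v for e, v in zip(expect, r))
                nxt = (b, s[0])
                if metric[s] + dist < new_metric[nxt]:
                    new_metric[nxt] = metric[s] + dist
                    new_paths[nxt] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(states, key=lambda s: metric[s])
    return paths[best]

data = [1, 0, 1, 1, 0, 0, 0, 0]            # two trailing zeros flush the encoder
coded = conv_encode(data)
coded[3] ^= 1                              # inject one channel bit error
print(viterbi_decode(coded, len(data)) == data)  # True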

Error correction codes

In holographic channels, errors occur in the form of 2D bursts of a certain geometrical shape and are not independent and identically distributed. Furthermore, error rates can vary across a data page; for example, bits at the center of a page are less likely to be in error than those near the edges. Designing simple and efficient 2D burst-error-correcting codes for different error-rate regions is an interesting problem, and several authors have investigated the design of optimal burst-error-correcting codes on a rectangular lattice [ 110, 111]. To correct burst errors, optimal interleaving strategies must be adopted. The problem of optimal interleaving in one dimension is straightforward, whereas designing interleaving strategies for higher-dimensional constraints is challenging. Blaum et al. developed efficient two- and three-dimensional interleaving schemes requiring the smallest possible number of distinct codes without repetitions [ 112]. Etzion and Vardy investigated 2D interleaving schemes with repetitions [ 113]. Jiang and Bruck introduced the concept of multicluster interleaving (MCI), a generalization of traditional interleaving problems [ 114]; compared with traditional schemes, MCI requires diversity of integers across multiple clusters rather than within a single cluster. MCI has potential applications in HDS, and many open problems remain. Gu et al. proposed a Reed–Solomon volumetric coding with matched interleaving (RSVC-MI) scheme [ 115]. This scheme combines the advantages of the RSVC scheme and the matched interleaving scheme, makes full use of prior knowledge of the error patterns in the HDS channel, distributes errors more uniformly, and decodes data iteratively in three dimensions. It is able to eliminate the influence of the non-uniform distribution of errors within a page and across pages, overcome the effects of burst errors, correct random errors, and effectively reduce the SER of the HDS channel.
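The basic idea behind cluster-error interleaving can be sketched with a simple lattice assignment (a minimal illustration, not one of the optimal constructions of Refs. [112-114]): if pixel (i, j) is assigned to codeword t*(i mod t) + (j mod t), then any t x t error burst touches each of the t^2 codewords at most once, so every codeword sees at most one symbol error from a single burst.

import numpy as np

def interleave_map(rows, cols, t):
    """Assign pixel (i, j) to codeword index t*(i % t) + (j % t)."""
    i, j = np.indices((rows, cols))
    return t * (i % t) + (j % t)

t = 2
codeword_of = interleave_map(8, 8, t)

# Any t x t burst, wherever it falls, hits each of the t*t codewords at most once.
for top in range(8 - t + 1):
    for left in range(8 - t + 1):
        burst = codeword_of[top:top + t, left:left + t].ravel()
        assert len(set(burst)) == t * t

print(codeword_of)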

Discussion and conclusions

HDS has made great progress toward practicality with advances in multiplexing techniques, holographic recording materials, and channel coding and signal processing. It is, however, impossible to give an exhaustive review of everything developed in the last ten years: other important achievements, such as progress in drive technology, media manufacturing, writing strategies and disc formatting, servo and drive control, and read-only memories, are omitted here owing to the scarcity of open publications and the limits of the author's knowledge and time. As a promising storage technology that has been worked on for over 50 years, HDS has nevertheless not yet achieved commercial success. For commercial products, the desirable features include capacity, input and output data rates, latency, cost, system volume, and power consumption. The most recent major efforts were by InPhase Technologies between 2001 and 2009, resulting in 52 functioning prototypes capable of 300 GB/disk and 20 MB/s transfer rates [ 12]. The primary competitor to holographic archive storage is magnetic tape, which by 2013 had achieved 5 TB capacity and 240 MB/s transfer rates. This left HDS at a competitive disadvantage to magnetic tape archive solutions despite other strengths such as robustness, random access, and longer archive lifetime.
However, HDS is only now taking its first steps into the commercial arena, while traditional optical storage is reaching the end of its practical roadmap. For example, for a medium 1.5 mm thick with a bulk refractive index of 1.5 and a recording wavelength of 0.4 µm, the addressable limit of an HDS system is approximately 12 Tbit/in.2. By increasing the material thickness and the index of refraction, it is possible to build devices and media with addressable densities of 40 Tbit/in.2. In addition, using PQHM can double or triple the achievable addressable limit [ 12]. For use in big data centers, a reasonable first step is a capacity of 1 TB per disc; an optical disc library containing 1000 such discs would then reach a total capacity of 1 PB. The author therefore concludes that HDS technology is an attractive candidate for big data centers, although many challenges remain for HDS to overcome in the years to come. Looking into the future, HDS must become highly competitive with tape in three critical areas: cost/TB, capacity per footprint, and transfer rate. If this can be achieved, HDS could become the front-runner in big data center storage.

Acknowledgement

This work was supported in part by the Science and Technology Commission of Shanghai Municipality (No. 14521103303).
References

1. Ruan H, Bu C Y. Multilayer optical storage for big data center: by pre-layered scheme. In: Proceedings of the Society for Photo-Instrumentation Engineers, 2013, 8913: 891308
2. Gartner Inc. 2013, http://www.gartner.com/newsroom/id/496819
3. Poess M, Nambiar R O. Energy cost, the key challenge of today's data centers: a power consumption analysis of TPC-C results. In: Proceedings of the VLDB Endowment, 2008, 1(2): 1229–1240
4. Burr G W. Three-dimensional optical storage. In: Proceedings of the Society for Photo-Instrumentation Engineers, 2003, 5225: 78
5. van Heerden P J. Theory of optical information storage in solids. Applied Optics, 1963, 2(4): 393–400
6. Anderson L K. Holographic optical memory for bulk data storage. Bell Laboratories Record, 1968, 45(10): 319–326
7. Staebler D L, Burke W J, Phillips W, Amodei J J. Multiple storage and erasure of fixed holograms in Fe-doped LiNbO3. Applied Physics Letters, 1975, 26(4): 182–184
8. Tsunoda Y, Tatsuno K, Kataoka K, Takeda Y. Holographic video disk: an alternative approach to optical video disks. Applied Optics, 1976, 15(6): 1398–1403
9. Kubota K, Ono Y, Kondo M, Sugama S, Nishida N, Sakaguchi M. Holographic disk with high data transfer rate: its application to an audio response memory. Applied Optics, 1980, 19(6): 944–951
10. Heanue J F, Bashaw M C, Hesselink L. Volume holographic storage and retrieval of digital data. Science, 1994, 265(5173): 749–752
11. Coufal H J, Psaltis D, Sincerbox G. Holographic Data Storage. New York: Springer-Verlag, 2000
12. Curtis K, Dhar L, Hill A, Wilson W, Ayres M. Holographic Data Storage: From Theory to Practical Systems. Chichester, UK: John Wiley & Sons Ltd, 2011
13. Anderson K, Curtis K. Polytopic multiplexing. Optics Letters, 2004, 29(12): 1402–1404
14. Horimai H, Tan X. Collinear technology for a holographic versatile disk. Applied Optics, 2006, 45(5): 910–914
15. Eichler H J, Kuemmel P, Orlic S, Wappelt A. High-density disk storage by multiplexed microholograms. IEEE Journal on Selected Topics in Quantum Electronics, 1998, 4(5): 840–848
16. Yamatsu H, Ezura M, Kihara N. Study on multiplexing methods for volume holographic memory. In: Proceedings of Joint International Symposium on Optical Memories and Optical Data Storage (ISOM/ODS), 2005, ThE1
17. Shimada K, Ide T, Shimano T, Anderson K, Curtis K. New optical architecture for holographic data storage system compatible with Blu-ray Disc™ system. Optical Engineering (Redondo Beach, Calif.), 2014, 53(2): 025102
18. Li H Y S, Psaltis D. Three-dimensional holographic disks. Applied Optics, 1994, 33(17): 3764–3774
19. Anderson K, Fotheringham E, Hill A, Sissom B, Curtis K. High-speed holographic data storage at 500 Gbits/in.2. SMPTE Motion Imaging Journal, 2006, 115(5–6): 200–203
20. Hoskins A, Ihas B, Anderson K, Curtis K. Monocular architecture. Japanese Journal of Applied Physics, 2008, 47(7): 5912–5914
21. Shimada K, Ishii T, Ide T, Hughes S, Hoskins A, Curtis K. High density recording using monocular architecture for 500 GB consumer system. In: Proceedings of Optical Data Storage Conference (ODS), 2009, TuC2
22. Ishii T, Hosaka M, Hoshizawa T, Yamaguchi M, Koga S, Tanaka A. Terabyte holographic recording with monocular architecture. In: Proceedings of IEEE International Conference on Consumer Electronics (ICCE), 2012, 427–428
23. Orlov S S, Phillips W, Bjornson E, Takashima Y, Sundaram P, Hesselink L, Okas R, Kwan D, Snyder R. High-transfer-rate high-capacity holographic disk data-storage system. Applied Optics, 2004, 43(25): 4902–4914
24. Saito K, Horimai H. Holographic 3-D disk using in-line face-to-face recording. In: Proceedings of Optical Data Storage Conference (ODS), Aspen, Colorado, 1998, 162–164
25. Tan X D, Horimai H. Collinear holographic information storage technologies and system. Acta Optica Sinica, 2006, 26(6): 827–830 (in Chinese)
26. Horimai H, Tan X D. Holographic information storage system: today and future. IEEE Transactions on Magnetics, 2007, 43(2): 943–947
27. Shimura T, Ichimura S, Fujimura R, Kuroda K, Tan X, Horimai H. Analysis of a collinear holographic storage system: introduction of pixel spread function. Optics Letters, 2006, 31(9): 1208–1210
28. Jia W, Chen Z, Wen F J, Zhou C, Chow Y T, Chung P S. Coaxial holographic encoding based on pure phase modulation. Applied Optics, 2011, 50(34): H10–H15
29. Jia W, Chen Z, Wen F J, Zhou C, Chow Y T, Chung P S. Single-beam data encoding using a holographic angular multiplexing technique. Applied Optics, 2011, 50(34): H30–H35
30. Nobukawa T, Nomura T. Coaxial holographic memory with designed reference pattern on the basis of Nyquist aperture for high density recording. Japanese Journal of Applied Physics, 2013, 52(9S2): 09LD09
31. Liu J Q, Cao L C, Li C M Y, Li J H, He Q S, Jin G F. Crosstalk analysis of multilayer collinear volume holographic data storage. In: Proceedings of the Society for Photo-Instrumentation Engineers, 2013, 8847: 88470D
32. Yu Y W, Chen C Y, Sun C C. Increase of signal-to-noise ratio of a collinear holographic storage system with reference modulated by a ring lens array. Optics Letters, 2010, 35(8): 1130–1132
33. Horimai H, Tan X, Li J. Collinear holography. Applied Optics, 2005, 44(13): 2575–2579
34. O'Callaghan M J, McNeil J R, Walker C, Handschy M. Spatial light modulators with integrated phase masks for holographic data storage. In: Proceedings of Optical Data Storage Conference (ODS), Montreal, Canada, 2006, 23–25
35. Ishioka K, Tanaka K, Kojima N, Fukumoto A, Sugiki M. Optical collinear holographic recording system using a blue laser and a random phase mask. In: Proceedings of Joint International Symposium on Optical Memories and Optical Data Storage (ISOM/ODS), Honolulu, Hawaii, 2005, ThD3
36. Lin X, Ke J, Wu A A, Xiao X, Tan X D. An effective phase modulation in the collinear holographic storage. In: Proceedings of the Society for Photo-Instrumentation Engineers, 2014, 9006: 900607
37. Tanaka K, Mori H, Hara M, Hirooka K, Fukumoto A, Watanabe K. High density recording of 270 Gbit/in.2 in a coaxial holographic recording system. Japanese Journal of Applied Physics, 2008, 47(7): 5891–5894
38. Tanabe N, Yamatsu H, Kihara N. Experimental research on hologram number criterion for evaluating bit error rates of shift multiplexed holograms. In: Proceedings of Technical Digest of International Symposium on Optical Memories, 2004, 216–217
39. Tanaka K, Hara M, Tokuyama K, Hirooka K, Okamoto Y, Mori H, Fukumoto A, Okada K. 415 Gbit/in.2 recording in coaxial holographic storage using low-density parity-check codes. In: Proceedings of Optical Data Storage Conference, Lake Buena Vista, Florida, 2009, 64–66
40. Kimura K. Improvement of the optical signal-to-noise ratio in common-path holographic storage by use of a polarization-controlling media structure. Optics Letters, 2005, 30(8): 878–880
41. Orlic S, Rass J, Dietz E, Frohmann S. Multilayer recording in microholographic data storage. Journal of Optics, 2012, 14(7): 072401
42. McLeod R R, Daiber A J, McDonald M E, Robertson T L, Slagle T, Sochava S L, Hesselink L. Microholographic multilayer optical disk data storage. Applied Optics, 2005, 44(16): 3197–3207
43. Orlic S, Dietz E, Feid T, Frohmann S, Markoetter H, Rass J. Volumetric optical storage with microholograms. In: Proceedings of Optical Data Storage Topical Meeting, Lake Buena Vista, Florida, 2009, 1–3
44. Orlic S, Dietz E, Frohmann S, Rass J. Resolution-limited optical recording in 3D. Optics Express, 2011, 19(17): 16096–16105
45. Min C K, Kim D H, Jeon S, Park K S, Park Y P, Yang H, Park N C, Kim J. Analysis of inter-symbol-interference caused by shift misalignment of two objective lenses in high-NA micro holographic storage. Microsystem Technologies, 2010, 18(9–10): 1623–1631
46. Mikami H, Osawa K, Watanabe K. Optical phase multi-level recording in microhologram. In: Proceedings of the Society for Photo-Instrumentation Engineers, 2010, 7730: 77301D
47. Mikami H, Osawa K, Tatsu E, Watanabe K. Experimental demonstration of optical phase multilevel recording in microhologram. Japanese Journal of Applied Physics, 2012, 51(8S2): 08JD01
48. Mikami H, Watanabe K. Microholographic optical data storage with spatial mode multiplexing. Japanese Journal of Applied Physics, 2013, 52(9S2): 09LD02
49. Katayama R. Proposal for angular momentum multiplexing in microholographic recording. Japanese Journal of Applied Physics, 2013, 52(9S2): 09LD11
50. Orlic S, Dietz E, Frohmann S, Gortner J, Mueller C. Microholographic multilayer recording at DVD density. In: Proceedings of Optical Data Storage Conference (ODS), 2007, MB4
51. Horigome T, Saito K, Miyamoto H, Hayashi K, Fujita G, Yamatsu H, Tanabe N, Kobayashi S, Uchiyama H. Recording capacity enhancement of micro-reflector recording. Japanese Journal of Applied Physics, 2008, 47(7): 5881–5884
52. Saito K, Kobayashi S. Analysis of micro-reflector 3-D optical disc recording. In: Proceedings of the Society for Photo-Instrumentation Engineers, 2006, 6282: 628213
53. Boden E P, Chan K P, Dylov D V, Kim E M, Lorraine P W, McCloskey P J, Misner M J, Natarajan A, Ostroverkhov V, Pickett J E, Shi X, Takashima Y, Watkins V H. Recent progress in micro-holographic storage. In: Proceedings of Joint International Symposium on Optical Memory and Optical Data Storage (ISOM/ODS), 2011, OWA1
54. Sutter K, Hulliger J, Günter P. Photorefractive effects observed in the organic crystal 2-cyclooctylamino-5-nitropyridine doped with 7,7,8,8-tetracyanoquinodimethane. Solid State Communications, 1990, 74(8): 867–870
55. Bässler H. Charge transport in disordered organic photoconductors: a Monte Carlo simulation study. Physica Status Solidi B, 1993, 175(1): 15–56
56. Eickmans J, Bieringer T, Kostromine S, Berneth H, Thoma R. Photoaddressable polymers: a new class of materials for optical data storage and holographic memories. Japanese Journal of Applied Physics, 1999, 38(Part 1, No. 3B): 1835–1836
57. Loerincz E, Ujhelyi F, Sueto A, Szarvas G, Koppa P, Erdei G, Hvilsted S, Ramanujam P S, Richter P I. Rewritable holographic memory card system. In: Proceedings of the Society for Photo-Instrumentation Engineers, 2000, 4090: 185–190
58. Lawrence B, Ostroverkhov V, Shi X, Longley K, Boden E P. Micro-holographic storage and threshold holographic materials. In: Proceedings of Joint International Symposium on Optical Memories and Optical Data Storage (ISOM/ODS), 2008, TD05–06
59. Lohr S. GE's Breakthrough Can Put 100 DVDs on a Disc. The New York Times, 26 April 2009
60. Close D H, Jacobson A D, Margerum J D, Brault R G, McClung F J. Hologram recording on photopolymer materials. Applied Physics Letters, 1969, 14(5): 159–160
61. Bruder F K, Hagen R, Rölle T, Weiser M S, Fäcke T. From the surface to volume: concepts for the next generation of optical-holographic data-storage materials. Angewandte Chemie International Edition, 2011, 50(20): 4552–4573
62. Guo J X, Gleeson M R, Sheridan J T. A review of the optimisation of photopolymer materials for holographic data storage. Physics Research International, 2012, 803439
63. Li X, Bullen C, Chon J W M, Evans R A, Gu M. Two-photon-induced three-dimensional optical data storage in CdS quantum-dot doped photopolymer. Applied Physics Letters, 2007, 90(16): 161116
64. Suzuki N, Tomita Y, Kojima T. Holographic recording in TiO2 nanoparticle-dispersed methacrylate photopolymer films. Applied Physics Letters, 2002, 81(22): 4121–4123
65. Trentler T J, Boyd J E, Colvin V L. Epoxy resin photopolymer composites for volume holography. Chemistry of Materials, 2000, 12(5): 1431–1438
66. Gleeson M R, Sheridan J T, Bruder F K, Rölle T, Berneth H, Weiser M S, Fäcke T. Comparison of a new self developing photopolymer with AA/PVA based photopolymer utilizing the NPDD model. Optics Express, 2011, 19(27): 26325–26342
67. Gleeson M R, Sabol D, Liu S, Close C E, Kelly J V, Sheridan J T. Improvement of the spatial frequency response of photopolymer materials by modifying polymer chain length. Journal of the Optical Society of America B: Optical Physics, 2008, 25(3): 396–406
68. Guo J, Gleeson M R, Liu S, Sheridan J T. Non-local spatial frequency response of photopolymer materials containing chain transfer agents: part II. experimental results. Journal of Optics, 2011, 13(9): 095602
69. Liu X, Tomita Y, Oshima J, Chikama K, Matsubara K, Nakashima T, Kawai T. Holographic assembly of semiconductor CdSe quantum dots in polymer for volume Bragg grating structures with diffraction efficiency near 100%. Applied Physics Letters, 2009, 95(26): 261109
70. Krul L P, Matusevich V, Hoff D, Kowarschik R, Matusevich Y I, Butovskaya G V, Murashko E A. Modified polymethylmethacrylate as a base for thermostable optical recording media. Optics Express, 2007, 15(14): 8543–8549
71. Waldman D A, Li H Y S, Horner M G. Volume shrinkage in slant fringe gratings of a cationic ring-opening holographic recording material. Journal of Imaging Science and Technology, 1997, 41(5): 497–514
72. Waldman D A, Butler C J, Raguin D H. CROP holographic storage media for optical data storage at greater than 100 bits/µm2. In: Proceedings of the Society for Photo-Instrumentation Engineers, 2003, 5216: 10
73. Dhar L, Hale A, Katz H E, Schilling M, Schnoes M G, Schilling F C. Recording media that exhibit high dynamic range for digital holographic data storage. Optics Letters, 1999, 24(7): 487–489
74. Suzuki N, Tomita Y, Ohmori K, Hidaka M, Chikama K. Highly transparent ZrO2 nanoparticle-dispersed acrylate photopolymers for volume holographic recording. Optics Express, 2006, 14(26): 12712–12719
75. Shelby R M, Waldman D A, Ingwall R T. Distortions in pixel-matched holographic data storage due to lateral dimensional change of photopolymer storage media. Optics Letters, 2000, 25(10): 713–715
76. Dhar L, Curtis K, Tackitt M, Schilling M, Campbell S, Wilson W, Hill A, Boyd C, Levinos N, Harris A. Holographic storage of multiple high-capacity digital data pages in thick photopolymer systems. Optics Letters, 1998, 23(21): 1710–1712
77. Aprilis Inc.
78. Anderson K, Ayres M, Sissom B, Askham F. Holographic data storage: rebirthing a commercialization effort. In: Proceedings of the Society for Photo-Instrumentation Engineers, 2014, 9006: 90060C
79. Askham F. US Patent, 8323854, 2012
80. Park K, Kim B S, Lee J. A 6/9 four-ary modulation code for four-level holographic data storage. Japanese Journal of Applied Physics, 2013, 52(9S2): 09LE05
81. Heanue J F, Bashaw M C, Hesselink L. Channel codes for digital holographic data storage. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 1995, 12(11): 2432–2439
82. Vadde V, Kumar B V K V. Channel modeling and estimation for intrapage equalization in pixel-matched volume holographic data storage. Applied Optics, 1999, 38(20): 4374–4386
83. Heanue J F, Gürkan K, Hesselink L. Signal detection for page-access optical memories with intersymbol interference. Applied Optics, 1996, 35(14): 2431–2438
84. Chugg K M, Chen X P, Neifeld M A. Two-dimensional equalization in coherent and incoherent page-oriented optical memory. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 1999, 16(3): 549–562
85. Keskinoz M, Kumar B V K V. Discrete magnitude-squared channel modeling, equalization, and detection for volume holographic storage channels. Applied Optics, 2004, 43(6): 1368–1378
86. Kim T, Kong G, Choi S. Two-dimensional equalization using bilinear recursive polynomial model for holographic data storage systems. Japanese Journal of Applied Physics, 2012, 51(8S2): 08JD05
87. Srinivasa S G. Constrained Coding and Signal Processing for Holography. PhD Thesis, Georgia Institute of Technology, 2006
88. Chen Y T, Ou-Yang M, Lee C C. A recognition method in holographic data storage system by using structural similarity. In: Proceedings of the Society for Photo-Instrumentation Engineers, 2013, 8855: 88550J
89. Chen C Y, Chiueh T D. Hardware implementation of pixel detection in gray-scale holographic data storage systems. Applied Optics, 2012, 51(34): 8228–8235
90. Kong G, Choi S. Effective two-dimensional partial response maximum likelihood detection scheme for holographic data storage systems. Japanese Journal of Applied Physics, 2012, 51(8S2): 08JB06
91. Koo K, Kim S Y, Kim S W. Modified two-dimensional soft output Viterbi algorithm with two-dimensional partial response target for holographic data storage. Japanese Journal of Applied Physics, 2012, 51(8): 08JB03
92. Koo K, Kim S Y, Jeong J J, Kim S W. Two-dimensional soft output Viterbi algorithm with a variable reliability factor for holographic data storage. Japanese Journal of Applied Physics, 2013, 52(9S2): 09LE03
93. Burr G W. Holographic data storage with arbitrarily misaligned data pages. Optics Letters, 2002, 27(7): 542–544
94. Chen C Y, Fu C C, Chiueh T D. Low-complexity pixel detection for images with misalignment and interpixel interference in holographic data storage. Applied Optics, 2008, 47(36): 6784–6795
95. Gu H R, Cao L C, He Q S, Jin G F. Compensation for pixel mismatch based on a three-pixel model in volume holographic data storage. In: Proceedings of the Society for Photo-Instrumentation Engineers, 2010, 7848: 78480
96. Ayres M, Hoskins A, Curtis K. Image oversampling for page-oriented optical data storage. Applied Optics, 2006, 45(11): 2459–2464
97. Ayres M R. US Patent, 7623279, 2009
98. Ashley J J, Marcus B H. Two-dimensional low-pass filtering codes. IEEE Transactions on Communications, 1998, 46(6): 724–727
99. Immink K A S, Siegel P H, Wolf J K. Codes for digital recorders. IEEE Transactions on Information Theory, 1998, 44(6): 2260–2299
100. Srinivasa S G, McLaughlin S W. Enumeration algorithms for constructing (d(1), infinity, d(2), infinity) run length limited arrays: capacity estimates and coding schemes. In: Proceedings of IEEE Information Theory Workshop, 2004, 141–146
101. Kim S Y, Lee J. A simple 2/3 modulation code for multi-level holographic data storage. Japanese Journal of Applied Physics, 2013, 52(9S2): 09LE04
102. Pishro-Nik H, Rahnavard N, Ha J, Fekri F, Adibi A. Low-density parity-check codes for volume holographic memory systems. Applied Optics, 2003, 42(5): 861–870
103. Kim J, Lee J. Simplified decoding of trellis-based error-correcting modulation codes using the M-algorithm for holographic data storage. Japanese Journal of Applied Physics, 2012, 51(8S2): 08JD02
104. Gallager R G. Low-density parity-check codes. I.R.E. Transactions on Information Theory, 1962, 8(1): 21–28
105. MacKay D J C, Neal R M. Near Shannon limit performance of low density parity check codes. Electronics Letters, 1996, 32(18): 1645–1646
106. Yoon P, Chung B, Kim H, Park J, Park G. Low-density parity-check code for holographic data storage system with balanced modulation code. Japanese Journal of Applied Physics, 2008, 47(7): 5981–5988
107. Ungerboeck G. Channel coding with multilevel/phase signals. IEEE Transactions on Information Theory, 1982, 28(1): 55–67
108. Kim J, Wee J K, Lee J. Error correcting 4/6 modulation codes for holographic data storage. Japanese Journal of Applied Physics, 2010, 49(8): 08KB04
109. Kim Y, Kong G, Choi S. Error correcting capable 2/4 modulation code using trellis coded modulation in holographic data storage. Japanese Journal of Applied Physics, 2012, 51(8S2): 08JD08
110. Imai H. Two-dimensional fire codes. IEEE Transactions on Information Theory, 1973, 19(6): 796–806
111. Abdel-Ghaffar K A S, McEliece R J, van Tilborg H C K. Two-dimensional burst identification codes and their use in burst correction. IEEE Transactions on Information Theory, 1988, 34(3): 494–504
112. Blaum M, Bruck J, Vardy A. Interleaving schemes for multidimensional cluster errors. IEEE Transactions on Information Theory, 1998, 44(2): 730–743
113. Etzion T, Vardy A. Two-dimensional interleaving schemes with repetitions: constructions and bounds. IEEE Transactions on Information Theory, 2002, 48(2): 428–457
114. Jiang A A, Bruck J. Multicluster interleaving on paths and cycles. IEEE Transactions on Information Theory, 2005, 51(2): 597–611
115. Gu H R, Cao L C, He Q S, Jin G F. Reed–Solomon volumetric coding with matched interleaving for holographic data storage. Japanese Journal of Applied Physics, 2012, 51(8R): 082502