IMRSim: A Disk Simulator for Interlaced Magnetic Recording Technology
Abstract.
The emerging interlaced magnetic recording (IMR) technology achieves a higher areal density for hard disk drives (HDDs) than the conventional magnetic recording (CMR) technology. An IMR-based HDD interlaces top tracks and bottom tracks, where each bottom track overlaps with its two neighboring top tracks. Thus, top tracks can be updated without restriction, whereas bottom tracks must be updated through the time-consuming read-modify-write (RMW) process or other novel update strategies. The track layout of an IMR-based HDD therefore differs substantially from that of a CMR-based HDD. Unfortunately, no IMR disk simulator or product has been available to the public, which motivates us to develop an open-source IMR disk simulator as a platform for further research.
We implement the first public IMR disk simulator, called IMRSim, as a block device driver in the Linux kernel. It simulates the interlaced track layout and implements several state-of-the-art data placement strategies. IMRSim is built on top of an actual CMR-based HDD to accurately simulate the I/O performance of IMR drives. While I/O operations on a CMR-based HDD are easy to visualize, the update and multi-stage allocation strategies in IMR are inherently dynamic. Therefore, we further demonstrate graphically how IMRSim processes I/O requests in its visualization mode. We release IMRSim as an open-source IMR disk simulation tool and hope to attract more researchers to work on IMR technology.
1. Introduction
With the advent of the big data era, enterprises and industries need storage systems with larger capacity and lower cost. Constrained by the superparamagnetic effect (Thompson and Best, 2000), the areal data density of CMR (see Fig. 1(a)) has reached its limit (Marchon et al., 2013; Wood, 2000). To further expand disk storage capacity and reduce cost, academia and storage manufacturers are exploring new technologies and methodologies. Among the innovations in track layout, shingled magnetic recording (SMR) (Todd et al., 2012; Keng Teo et al., 2012; Salo et al., 2014) and interlaced magnetic recording (IMR) (Gao et al., 2016; Hwang et al., 2016; Granz et al., 2019) have gained tremendous popularity in disk devices (Granz et al., 2018). Furthermore, combined with energy-assisted technologies for the recording head or media, such as heat-assisted magnetic recording (HAMR) (Kryder et al., 2008; Rottmayer et al., 2006) and microwave-assisted magnetic recording (MAMR) (Zhu et al., 2007; Zhu and Wang, 2010), hard disk drives can further improve their areal data density.

In SMR technology, to maximize the data storage density (Greaves et al., 2009), the tracks overlap like roof tiles to shorten the track gap, as shown in Fig. 1(b). Because each track is partially overlapped by its subsequent track, a random write may corrupt data on the subsequent tracks. To protect that data, all affected subsequent tracks must be backed up before the random write and rewritten afterwards. This time-consuming procedure, known as read-modify-write (RMW), seriously degrades the random write performance of SMR drives.
Compared to SMR, the alternative IMR technique (Fig. 1(c)) achieves better random write performance. IMR organizes the tracks in an interlaced fashion rather than a tiled one, so a random write affects at most two adjacent tracks. As shown in Fig. 1(c), IMR divides tracks into top and bottom tracks, where each bottom track is overlapped by its two neighboring top tracks, and each top track partially covers two neighboring bottom tracks.
The essence of the IMR technique is energy-assisted recording combined with track overlapping. In IMR, the bottom track is wider than the top track (Gao et al., 2016). However, traditional perpendicular magnetic recording (PMR) cannot control the track width, so such track characteristics can currently only be achieved with energy-assisted technology. The wider bottom track requires a higher energy intensity than the narrower top track, which results in a higher data storage density on the bottom track than on the top track (Wang et al., 2018). Consequently, updating data on a top track requires a lower energy intensity that is not sufficient to destroy the adjacent bottom tracks. As a result, IMR can perform in-place updates of top-track data without additional rewriting overhead. Unfortunately, a higher energy intensity is required to update data on a bottom track, which would destroy data on the adjacent top tracks. To avoid corrupting the top tracks, a time-consuming update mechanism (e.g., read-modify-write) is required (Gao et al., 2016). Nevertheless, compared with SMR, the theoretical performance of IMR is outstanding. However, IMR-specific technology is still at the research stage in both academia and industry, and no publicly available IMR simulator exists.
To fill this gap, we propose the first open-source IMR disk simulator, called IMRSim (ref, [n.d.]), implemented as a block device driver in the Linux kernel. IMRSim effectively simulates the interlaced track layout and supports several state-of-the-art data placement schemes, such as the two-stage and three-stage allocation strategies. Furthermore, IMRSim provides an extensible user interface to flexibly adjust device-specific parameters. In addition, IMRSim can process I/O requests in a visualization mode, greatly improving the understanding of the emerging IMR technology. Finally, we test IMRSim against a CMR-based HDD on several real workloads to evaluate the I/O performance of IMR-based HDDs. The results show that IMR's update strategy brings significant performance loss, which matches our expectations. To the best of our knowledge, this is the first public IMR disk simulator. The main contributions are summarized as follows:
• We implement the first public IMR disk simulator, IMRSim. IMRSim effectively captures the key characteristics of IMR, including simulating interlaced tracks with different data densities and redirecting and issuing bio requests to realistically simulate update strategies. Note that, since there are currently no IMR-based HDD products on the market, we have to ignore some physical factors that may affect performance (e.g., the impact of different write energy intensities).
• IMRSim is scalable. It provides an extensible user interface for flexibly adjusting the simulator parameters, and users can modify the I/O path and configure their own IMR optimizations according to their needs, which offers good extensibility for future research.
• IMRSim is visual. It provides a visualization component for the logical data layout and the data movement of I/O requests, enabling users to gain a clearer understanding of IMR technology. We hope visualization can expose new opportunities for future IMR optimization.
In the remainder of this paper, Section 2 presents the design details of IMRSim, Section 3 describes the experimental settings and analyzes the results, Section 4 discusses related work, and Section 5 concludes the paper.
2. IMR Simulation
The design goal of the proposed IMR disk simulator, IMRSim, is to simulate a relatively accurate, scalable, and visual IMR-based HDD. To achieve this goal, we use the device-mapper (DM) framework to create a Linux kernel module, which exports a pseudo block device into user space that behaves like an IMR drive. Because we build the device-mapper target directly on top of a CMR-based HDD, the module ultimately executes all incoming requests on the underlying CMR drive. Therefore, IMRSim has the same rotational speed and head switch time as the CMR-based HDD (Liang et al., 2022), but the logical arrangement of its tracks is different: IMRSim simulates the interlaced track layout of an IMR-based HDD in software. At the same time, IMRSim provides a set of user interfaces through which users can query status information and adjust the simulator's behavior, as well as a visualization component for the logical data layout and the data movement of I/O requests, enabling users to gain a clearer understanding of IMR technology.
2.1. IMRSim Kernel Module
Fig. 2 depicts the architectural overview of IMRSim. The read/write requests initiated by upper-layer applications are routed from top to bottom through Linux's I/O subsystem. The IMRSim kernel module can capture and modify the original request (e.g., redirect it); the resulting mapped request is then added to the request queue until it is processed on disk.
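To make this request interception concrete, the following is a minimal sketch of a device-mapper target in the style IMRSim uses on kernel 3.16. The identifiers (imrsim_map, imrsim_ctx, and so on) are illustrative rather than IMRSim's actual code, and the constructor/destructor that open the underlying CMR device and fill ti->private are omitted; the map callback is where a bio can be redirected before being handed back to the block layer.

```c
/* Minimal device-mapper target sketch (illustrative names, not IMRSim's code). */
#include <linux/module.h>
#include <linux/device-mapper.h>

struct imrsim_ctx {
	struct dm_dev *dev;     /* underlying CMR block device           */
	sector_t       start;   /* start sector on that device           */
};

static int imrsim_map(struct dm_target *ti, struct bio *bio)
{
	struct imrsim_ctx *ctx = ti->private;

	/* Here IMRSim would translate the logical sector to a physical one
	 * and, for bottom-track updates, issue the extra RMW bios. */
	bio->bi_bdev = ctx->dev->bdev;
	bio->bi_iter.bi_sector = ctx->start +
	                         dm_target_offset(ti, bio->bi_iter.bi_sector);

	return DM_MAPIO_REMAPPED;   /* let the block layer dispatch the remapped bio */
}

static struct target_type imrsim_target = {
	.name    = "imrsim",
	.version = {1, 0, 0},
	.module  = THIS_MODULE,
	.map     = imrsim_map,
	/* .ctr/.dtr omitted for brevity: they would parse the table line,
	 * open the underlying device, and set ti->private = ctx. */
};

static int __init imrsim_init(void)
{
	return dm_register_target(&imrsim_target);
}

static void __exit imrsim_exit(void)
{
	dm_unregister_target(&imrsim_target);
}

module_init(imrsim_init);
module_exit(imrsim_exit);
MODULE_LICENSE("GPL");
```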

The IMRSim kernel module contains the following sub-modules: (1) the data layout and management sub-module, which simulates the layout of interlaced tracks and uses "zones" to manage the track space (a "zone" is similar to a "TG" (Amer et al., 2011; Wu et al., 2020)); (2) the data placement sub-module, which handles the different types of requests (read, write, and update) for the simulated IMR disk; (3) the address translation and I/O mapping sub-module, which performs I/O redirection; and (4) the availability and statistics collection sub-module, which launches a persistent thread to ensure availability, synchronize metadata, and collect status information of the simulator. We introduce them in detail below.
Data Layout and Management. The data layout and management sub-module determines how the disk tracks and sectors are logically grouped together and managed. The data layout is organized in zones, which are clusters of several contiguous interlaced tracks, as illustrated in Fig. 3. An IMR disk can consist of one or more zones. We assume that an application can be assigned to a zone and focus on data management within a zone; note that the original intention of our zone design differs from that of the band in SMR. Such a partitioning maintains the spatial locality of application data inside the zone and increases data access efficiency. Since IMRSim is a block device, a block (typically 4 KB) is the smallest unit of management in the simulation, and each block is assigned a unique logical/physical block address.
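As an illustration (not IMRSim's actual definitions), the per-zone bookkeeping described above might look like the following; the field names are ours, and the block counts come from the IMRSim parameters in Table 1.

```c
/* Illustrative zone metadata (field names are ours, not IMRSim's). */
#include <linux/types.h>

#define TOP_TRACK_BLOCKS    456   /* blocks per top track (Table 1)    */
#define BOTTOM_TRACK_BLOCKS 568   /* blocks per bottom track (Table 1) */

enum track_kind { TRACK_BOTTOM = 0, TRACK_TOP = 1 };

struct imrsim_track {
	enum track_kind kind;      /* bottom tracks hold ~1.25x more blocks     */
	__u32  nr_blocks;          /* 568 for bottom, 456 for top               */
	__u32  nr_allocated;       /* blocks handed out so far                  */
};

struct imrsim_zone {
	__u32  zone_id;
	__u32  nr_tracks;          /* interlaced: even index = bottom, odd = top */
	struct imrsim_track *tracks;
	/* per-zone mapping table: logical block offset -> physical block offset */
	__s64 *map;                /* -1 means "not yet written"                 */
};
```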

It is worth noting that the bottom track is wider and has a higher data storage density than the top track. Thus, we design the top and bottom tracks with different data densities. Following previous research (Rottmayer et al., 2006), the data density of the bottom track in IMRSim is about 1.25 times that of the top track (in our default configuration, a bottom track holds 568 blocks versus 456 blocks for a top track; see Table 1).
Data Placement. The data placement sub-module handles the incoming requests (e.g., read, write, or update). As discussed above, to avoid data loss or corruption, IMR must protect the data on the affected top tracks before updating a bottom track. Recently, several alternative update strategies have emerged, such as read-swap-write (RSW) (Liang et al., 2022) and move-on-modify (MOM) (Narayanan et al., 2008). IMRSim can be easily extended with these strategies; for simplicity, it currently applies the traditional read-modify-write (RMW) update strategy.
The RMW process for data read, write, and update operations is shown in Algorithm 1. To update a data block, we first determine which kind of track the block resides on. If the block is on a top track, the update is free; if it is on a bottom track, the RMW process is required. It is worth noting that whether the affected top-track data is backed up in memory or on disk has a considerable impact on performance, but such an analysis is beyond the scope of this article. In our implementation, we use in-memory pages to hold the data that needs to be backed up.
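The following is a simplified sketch of the update path of Algorithm 1; all helper functions are hypothetical, and error handling is omitted.

```c
/* Simplified RMW handling for an update to physical block 'pba'
 * (helper names are hypothetical; error handling omitted). */
static void imrsim_update_block(struct imrsim_zone *z, u64 pba,
                                const void *new_data)
{
	struct page *left, *right;

	if (block_on_top_track(z, pba)) {
		/* Top tracks tolerate the low write energy: update in place. */
		write_block(pba, new_data);
		return;
	}

	/* Bottom track: the two neighbouring top tracks would be damaged,
	 * so their data is first read into memory pages (the "read" step). */
	left  = read_track_to_pages(top_track_left_of(z, pba));
	right = read_track_to_pages(top_track_right_of(z, pba));

	write_block(pba, new_data);            /* the "modify/write" step */

	/* Finally the backed-up top-track data is written back. */
	write_track_from_pages(top_track_left_of(z, pba), left);
	write_track_from_pages(top_track_right_of(z, pba), right);
	/* A full implementation would skip top tracks holding no valid data. */
}
```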
Address Translation and I/O Mapping. In a traditional CMR drive, the LBA (logical block address) equals the PBA (physical block address) and random writes are unrestricted. If IMR adopted the same static mapping, however, severe write amplification could occur even in the low-utilization allocation phase. Therefore, we let IMR allocate the bottom tracks first and the top tracks afterwards, so that no write amplification occurs while the bottom tracks are being allocated. In this way, IMR must keep a mapping table (MT) that maps each LBA to a PBA, and this sub-module focuses on how to convert an LBA into a PBA.
Here, we define two helpful functions. The first one, the address translation function (denoted as T, e.g., T(A) = (z, t, b)), converts a request address into a triple consisting of the logical zone ID, the track offset, and the block offset within the zone. The second one, its inverse function (denoted as T⁻¹), performs the reverse conversion. Here A represents an LBA or PBA, and the relative block address is represented by the triple (z, t, b), i.e., the zone ID, track offset, and block offset.

As illustrated in Fig. 4, to obtain the PBA for a given LBA, IMRSim first uses T to convert the LBA into the managed triple and obtain the zone ID z and the logical block offset b_L. Then there are two cases:
1) If there is no record corresponding to b_L in the MT, the request is a new write. In this case, IMRSim allocates a new block (with a new physical block offset b_P) according to the configured track allocation scheme, and then records the mapping from b_L to b_P in the MT.
2) If there is a record corresponding to b_L in the MT, the request is an update, and b_P can be obtained directly by querying the MT.
Finally, the PBA is obtained by applying T⁻¹ to (z, b_P).
Note that the address translation triples in Fig. 4 omit the track offset, because the zone ID z together with the block offset b_P already uniquely identifies the PBA. Currently, we have implemented the classical two-stage allocation strategy and the three-stage allocation strategy in IMRSim; other multi-stage allocation strategies can be designed as needed. Our mapping table can also be used to implement strategies such as track caching, hot/cold data swapping, and track flipping (Wu et al., 2018; Wu et al., 2020).
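A sketch of the LBA-to-PBA path of Fig. 4, using the T/T⁻¹ notation introduced above, might look as follows; the helper names (T, T_inv, allocate_block) and the in-memory layout of the mapping table are our assumptions, not IMRSim's actual interfaces.

```c
/* LBA -> PBA lookup following Fig. 4 (illustrative, not IMRSim's code).
 * T() splits an address into (zone id, track offset, block offset);
 * T_inv() recombines a zone id and physical block offset into a PBA. */
static u64 imrsim_lba_to_pba(struct imrsim_zone *zones, u64 lba)
{
	u32 z, t, b_l;
	s64 b_p;

	T(lba, &z, &t, &b_l);       /* decompose the logical address */
	(void)t;                    /* track offset not needed: z and b_p fix the PBA */

	b_p = zones[z].map[b_l];
	if (b_p < 0) {              /* no mapping yet: a new write            */
		b_p = allocate_block(&zones[z]);   /* two- or three-stage policy */
		zones[z].map[b_l] = b_p;
	}
	/* otherwise it is an update of an already-mapped block */

	return T_inv(z, b_p);       /* recombine into the physical address */
}
```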
Availability and Statistics Collection. This sub-module provides a certain level of availability for the disk and collects simulator status information.
We add a metadata area after the data storage area to record the mapping table and disk statistics. Users can easily add new statistical indicators to IMRSim. In the current version, we record the simulator's behavior (e.g., the number of additional writes) in memory and launch a persistent thread that periodically flushes the collected statistics and any changed metadata to disk.
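A minimal sketch of such a persistent flush thread, using the standard kthread interface, is shown below; the flush helpers and the 5-second interval are illustrative assumptions.

```c
/* Sketch of the periodic flush thread (interval and helpers are ours). */
#include <linux/kthread.h>
#include <linux/delay.h>

static struct task_struct *flush_task;

static int imrsim_flush_thread(void *data)
{
	while (!kthread_should_stop()) {
		flush_mapping_table_to_disk();   /* hypothetical helpers that   */
		flush_statistics_to_disk();      /* write out the metadata area */
		msleep_interruptible(5000);      /* e.g. every 5 seconds        */
	}
	return 0;
}

/* started from the constructor path, e.g.:
 * flush_task = kthread_run(imrsim_flush_thread, NULL, "imrsim_flush"); */
```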
2.2. User Interface
To facilitate user interaction with the simulator, we design a command-line user interface. It is a standard C program built on the ioctl system call. Because ioctl commands can be easily extended in the kernel, IMRSim remains highly extensible: users can issue such calls to flexibly adjust the design parameters and control the simulator's behavior. For example, running "./imrsim_util /dev/mapper/imrsim l 5 3" sets the allocation strategy to three-stage allocation.
The interface takes command parameters that select the desired operation. If the input format is wrong, it notifies the user and prints a help message with a correct input example.
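From user space, an ioctl-based utility call might look like the sketch below; the command code IMRSIM_SET_ALLOC_POLICY and its integer argument are hypothetical stand-ins for IMRSim's real command set.

```c
/* User-space sketch: sending a configuration ioctl to the simulator
 * device (the command code and argument are illustrative). */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define IMRSIM_SET_ALLOC_POLICY  _IOW('i', 5, int)   /* hypothetical code */

int main(void)
{
	int policy = 3;   /* e.g. 3 = three-stage allocation */
	int fd = open("/dev/mapper/imrsim", O_RDWR);

	if (fd < 0) { perror("open"); return 1; }
	if (ioctl(fd, IMRSIM_SET_ALLOC_POLICY, &policy) < 0)
		perror("ioctl");
	close(fd);
	return 0;
}
```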


2.3. Visualization
In addition, we implement a quick visualization of IMRSim that briefly demonstrates the dynamic process of IMR handling requests. Such visualization helps readers further understand the track characteristics of IMR. We collected the I/O traces of a zone during an fio test to drive the visualization. As shown in Fig. 5(a), an empty two-stage IMR zone always allocates bottom tracks before top tracks when processing write requests (whether sequential or random), so there is always one track of spacing during the first allocation stage (allocated tracks turn dark gray), as shown in Fig. 5(b).
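The allocation order visualized in Fig. 5 can be summarized by a small helper that maps the n-th allocated track to a physical track index; this is our simplified reading of the two-stage scheme (bottom tracks at even indices first, then the interleaved top tracks), not IMRSim's actual code.

```c
/* Physical track index of the n-th allocated track in a zone of
 * 'nr_tracks' interlaced tracks under two-stage allocation
 * (a simplified sketch of the scheme described in the text). */
static unsigned int two_stage_track(unsigned int n, unsigned int nr_tracks)
{
	unsigned int nr_bottom = (nr_tracks + 1) / 2;   /* even indices 0, 2, 4, ... */

	if (n < nr_bottom)
		return 2 * n;                  /* stage 1: bottom tracks, one track apart */
	return 2 * (n - nr_bottom) + 1;    /* stage 2: the interleaved top tracks     */
}
```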
3. Simulation Analysis
3.1. Experimental Methodology
This section evaluates the performance of two different allocation strategies. We create a 128 GB IMR-based HDD and a 128 GB CMR-based HDD on a Western Digital pure CMR drive (see Table 1). Note that we use the same partition on the CMR drive to eliminate the impact of OD/ID differences on performance. In our experiments, we evaluate the two-stage allocation strategy against the three-stage allocation strategy, both combined with the typical RMW update strategy.
| HDD Details | |
|---|---|
| HDD model | WD20EJRx |
| drive cache size | 64 MB |
| rotational speed | 5400 rpm |
| IMRSim Parameters | |
| block size | 4096 B |
| top track size | 456 blocks |
| bottom track size | 568 blocks |
| capacity | 128 GB |
| Workload | Number of Read / Write Requests | Total Read / Write Size (GB) | Write Ratio |
|---|---|---|---|
| hm_0 | 1,417,748 / 2,575,568 | 9.96 / 20.48 | 64.50% |
| proj_0 | 527,381 / 3,697,143 | 8.97 / 144.27 | 87.52% |
| src1_2 | 484,079 / 1,423,694 | 8.82 / 44.14 | 74.63% |
| src2_2 | 350,930 / 805,955 | 22.79 / 32.28 | 69.67% |
We conduct all experiments on a desktop PC equipped with an Intel(R) Core(TM) i5-10400 CPU @ 2.90 GHz (twelve logical processors) and 16 GB of DDR4 memory, running 64-bit Ubuntu 14.10 with Linux kernel 3.16.0. For performance testing, we use fio-3.30 to replay four real write-intensive workloads, i.e., hm_0, proj_0, src1_2, and src2_2, collected by Microsoft Research Cambridge (Narayanan et al., 2008). Table 2 provides more details about these workloads, including the number of read/write requests, the total read/write sizes (GB), and the write ratio. In particular, to better evaluate request-handling performance, we choose four write-intensive workloads whose accessed LBAs fit within the tested disk space (i.e., 128 GB).
In the experimental configuration, the number of threads is set to 1, the I/O queue depth is set to 32, and the I/O engine is set to libaio. To avoid the impact of the kernel cache on the simulator's performance, all I/O requests generated by the tests use direct I/O, bypassing the kernel buffer. Moreover, we execute the command hdparm -W0 -A0 -a0 to disable the HDD cache. Note that no rewrite operations are incurred while the space usage is below 55.47% of the disk space (at which point the first allocation stage has just been fully allocated). Thus, we use 32 KB random write requests (close to the average write size of the four evaluated workloads) to initialize the simulated IMR-based HDD.
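A minimal user-space sketch of this preconditioning step is shown below; it simply issues a fixed volume of 32 KB random direct writes to the simulated device and ignores the fact that repeated offsets do not increase space usage (the real setup would track usage through the simulator's statistics). The device path and the 80% target are taken from the experiment description.

```c
/* Sketch of 32 KB random-write preconditioning with O_DIRECT
 * (simplified; the written volume only approximates the space usage). */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define WRITE_SIZE   (32 * 1024)                 /* 32 KB requests        */
#define DEV_SIZE     (128ULL << 30)              /* 128 GB simulated disk */

int main(void)
{
	int fd = open("/dev/mapper/imrsim", O_WRONLY | O_DIRECT);
	void *buf;
	unsigned long long written = 0, target = DEV_SIZE / 100 * 80;  /* ~80% usage */

	if (fd < 0) { perror("open"); return 1; }
	if (posix_memalign(&buf, 4096, WRITE_SIZE))  /* O_DIRECT needs alignment */
		return 1;

	while (written < target) {
		/* pick a random 32 KB-aligned offset inside the device */
		unsigned long long off =
		    ((unsigned long long)rand() % (DEV_SIZE / WRITE_SIZE)) * WRITE_SIZE;
		if (pwrite(fd, buf, WRITE_SIZE, off) != WRITE_SIZE) {
			perror("pwrite");
			break;
		}
		written += WRITE_SIZE;
	}
	free(buf);
	close(fd);
	return 0;
}
```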
3.2. Experimental Results
1) Write Amplification under Different Space Usages: Fig. 6 shows the write amplification factor at different space usages under the src1_2 workload, where the x-axis denotes the space usage of each tested device and the y-axis represents the write amplification factor (i.e., the ratio of the total number of writes actually performed to the number of writes requested). Since the CMR-based HDD (denoted as CMR) uses a cost-free in-place update strategy and causes no write amplification, its write amplification factor is always 1. We use IMRSim to simulate an IMR-based HDD with the two-stage allocation strategy (denoted as two-stage IMR) and one with the three-stage allocation strategy (denoted as three-stage IMR).
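For clarity, the write amplification factor used throughout this section is just the ratio of two counters maintained by the simulator, as in the trivial sketch below (the parameter names are ours).

```c
/* Write amplification factor as defined in the text: all blocks physically
 * written (including RMW rewrites) divided by the blocks the workload
 * actually requested to write. */
static double write_amplification(unsigned long long physical_blocks_written,
                                  unsigned long long requested_blocks_written)
{
	return (double)physical_blocks_written / (double)requested_blocks_written;
}
```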
It can be clearly observed that, as the space utilization increases, the write amplification factor gradually increases. The write amplification of two-stage IMR is slightly larger than that of three-stage IMR; however, as the space utilization grows, the difference between the two shrinks.
2) Performance Results under Different Workloads: As shown in Fig. 7, we evaluate the total write latency, the total read latency, and the write amplification under different workloads at 80% space usage. Specifically, we replay the four real write-intensive workloads (i.e., hm_0, proj_0, src1_2, and src2_2) to collect the experimental results and verify the correctness of our simulator.
Write Performance. Fig. 7(a) shows the write performance of the three simulated devices under 80% space usage, where the x-axis denotes the workload and the y-axis represents the write performance in terms of total write latency (i.e., the total time for executing write requests).
It can first be observed that, under the four replayed workloads, the simulated IMR-based HDDs show a relatively large write performance gap compared with CMR, while two-stage IMR and three-stage IMR achieve similar write performance. Specifically, CMR outperforms IMR by 12.5x on the hm_0 workload, by 35.6x on the proj_0 workload, and by an average of 28.7x across the four workloads. Comparing the IMR-based HDDs with different allocation schemes, two-stage IMR outperforms three-stage IMR in write performance on the hm_0 workload, but performs worse on the other three workloads, with an average write performance loss of 0.55x across the four workloads.
The experimental results show that the performance loss caused by the RMW update strategy is very large, even though each update incurs at most two additional track reads and writes. To analyze the reason for this large loss, we calculate the write amplification factor of each workload at 80% space usage: we count the number of extra writes in blocks and compute the write amplification factor, with the result shown in Fig. 7(c). It can be observed that, across the four replayed workloads, the additional writes of IMR are at most 1.8x those of CMR, yet this mere 1.8x difference in the number of writes results in a roughly 28x difference in write performance. This is mainly because mechanical disks spend most of their time on seek and rotational delays, and the RMW process requires extra positioning operations whose cost far exceeds the data transfer delay. For the IMR-based HDDs with different allocation schemes, the average number of extra writes of two-stage IMR is slightly larger than that of three-stage IMR, which results in slightly worse write performance.
Read Performance. Fig. 7(b) shows the read performance of the three simulated devices under 80% space usage, where the x-axis denotes the workload and the y-axis represents the read performance in terms of total read latency (i.e., the total time for executing read requests).
It can be clearly observed that, across the four replayed workloads, two-stage IMR and three-stage IMR exhibit read performance close to that of CMR, as expected. Specifically, the read performance of two-stage IMR is on average 9.55% slower than CMR at 80% space usage, and that of three-stage IMR is only 7.98% slower on average. The slight difference between IMR and CMR arises mainly because IMR must maintain a dynamic mapping table, and different allocation schemes lead to different physical block addresses, which in turn affects delays such as seek time.
4. Related Works
We have collected some studies related to disk simulators. DiskSim (Bucy J and S, 2008) is a disk simulator developed at Carnegie Mellon University. It can accurately simulate the performance of various types of traditional hard disks, but it originally ran only on 32-bit systems; although 64-bit systems were supported later, it remains complicated to use. To support solid state drives (SSDs), Microsoft created the SSDmodel module (Agrawal et al., 2008) to simulate SSD performance, and it has been widely used. FlashSim (Kim et al., 2009) is another simulator that evaluates SSD performance and studies flash translation layers used in SSDs. Tan et al. (Tan et al., 2013) designed an SMR simulation platform for testing and analyzing shingled translation layer designs. Pitchumani et al. (Pitchumani et al., 2012) developed a tool that emulates an SMR disk by implementing a shingled device-mapper target on top of the Linux kernel's block devices; their goal was to allow STLs to be evaluated on actual hard drives. So far, no publicly available simulation tool can simulate IMR disks and analyze their performance under emerging allocation and update strategies.
5. Conclusion
In this paper, we implement the first open-source IMR disk simulator, IMRSim, which simulates several different interlaced track layouts and provides accurate I/O performance. The performance of IMR is excellent in theory, but in practice the time-consuming RMW process costs more than expected. IMRSim is built in Linux kernel space using the device-mapper framework and exposes a scalable user interface for interaction between the user and the simulator. Finally, we provide a visualization component that vividly demonstrates how IMR handles I/O requests.
References
- ref ([n.d.]). IMRSim GitHub Repository. https://github.com/AlieZ22/IMRSim.
- Agrawal et al. (2008) Nitin Agrawal, Vijayan Prabhakaran, Ted Wobber, John D Davis, Mark Manasse, and Rina Panigrahy. 2008. Design Tradeoffs for SSD Performance. In 2008 USENIX Annual Technical Conference (USENIX ATC 08).
- Amer et al. (2011) Ahmed Amer, JoAnne Holliday, Darrell D. E. Long, Ethan L. Miller, Jehan-François Paris, and Thomas Schwarz. 2011. Data Management and Layout for Shingled Magnetic Recording. IEEE Transactions on Magnetics 47, 10 (2011), 3691–3697. https://doi.org/10.1109/TMAG.2011.2157115
- Bucy J and S (2008) John Bucy, Jiri Schindler, and Steven Schlosser. 2008. The DiskSim Simulation Environment Version 4.0 Reference Manual (CMU-PDL-08-101). Carnegie Mellon University: Parallel Data Laboratory.
- Gao et al. (2016) Kaizhong Gao, Wenzhong Zhu, and Edward Gage. 2016. Write management for interlaced magnetic recording devices. US Patent 9,508,362.
- Granz et al. (2019) Steven Granz, Michael Conover, Javier Guzman, William Cross, Pete Harllee, and Tim Rausch. 2019. Perpendicular Interlaced Magnetic Recording. IEEE Transactions on Magnetics 55, 12 (2019), 1–5. https://doi.org/10.1109/TMAG.2019.2936658
- Granz et al. (2018) Steven Granz, Jason Jury, Chris Rea, Ganping Ju, Jan-Ulrich Thiele, Tim Rausch, and Edward C Gage. 2018. Areal density comparison between conventional, shingled, and interlaced heat-assisted magnetic recording with multiple sensor magnetic recording. IEEE Transactions on Magnetics 55, 3 (2018), 1–3.
- Greaves et al. (2009) Simon Greaves, Yasushi Kanai, and Hiroaki Muraoka. 2009. Shingled Recording for 2–3 Tbit/in2. IEEE Transactions on Magnetics 45, 10 (2009), 3823–3829. https://doi.org/10.1109/TMAG.2009.2021663
- Hwang et al. (2016) Euiseok Hwang, Jongseung Park, Richard Rauschmayer, and Bruce Wilson. 2016. Interlaced magnetic recording. IEEE Transactions on Magnetics 53, 4 (2016), 1–7.
- Keng Teo et al. (2012) Kim Keng Teo, Moulay Rachid Elidrissi, Kheong Sann Chan, and Yasushi Kanai. 2012. Analysis and design of shingled magnetic recording systems. Journal of Applied Physics 111, 7 (2012), 07B716.
- Kim et al. (2009) Youngjae Kim, Brendan Tauras, Aayush Gupta, and Bhuvan Urgaonkar. 2009. Flashsim: A simulator for nand flash-based solid-state drives. In 2009 First International Conference on Advances in System Simulation. IEEE, 125–131.
- Kryder et al. (2008) Mark H Kryder, Edward C Gage, Terry W McDaniel, William A Challener, Robert E Rottmayer, Ganping Ju, Yiao-Tee Hsia, and M Fatih Erden. 2008. Heat assisted magnetic recording. Proc. IEEE 96, 11 (2008), 1810–1835.
- Liang et al. (2022) Yuhong Liang, Ming-Chang Yang, and Shuo-Han Chen. 2022. MAGIC: Making IMR-Based HDD Perform Like CMR-Based HDD. IEEE Trans. Comput. 71, 3 (2022), 643–657. https://doi.org/10.1109/TC.2021.3059770
- Marchon et al. (2013) Bruno Marchon, Thomas Pitchford, Yiao-Tee Hsia, and Sunita Gangopadhyay. 2013. The head-disk interface roadmap to an areal density of Tbit/in2. Advances in Tribology 2013 (2013).
- Narayanan et al. (2008) Dushyanth Narayanan, Austin Donnelly, and Antony Rowstron. 2008. Write Off-Loading: Practical Power Management for Enterprise Storage. ACM Trans. Storage 4, 3, Article 10 (nov 2008), 23 pages. https://doi.org/10.1145/1416944.1416949
- Pitchumani et al. (2012) Rekha Pitchumani, Andy Hospodor, Ahmed Amer, Yangwook Kang, Ethan L Miller, and Darrell DE Long. 2012. Emulating a shingled write disk. In 2012 IEEE 20th International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems. IEEE, 339–346.
- Rottmayer et al. (2006) Robert E Rottmayer, Sharat Batra, Dorothea Buechel, William A Challener, Julius Hohlfeld, Yukiko Kubota, Lei Li, Bin Lu, Christophe Mihalcea, Keith Mountfield, et al. 2006. Heat-assisted magnetic recording. IEEE Transactions on Magnetics 42, 10 (2006), 2417–2421.
- Salo et al. (2014) Michael Salo, Terry Olson, Richard Galbraith, Richard Brockie, Byron Lengsfield, Hiroyuki Katada, and Yasutaka Nishida. 2014. The structure of shingled magnetic recording tracks. IEEE transactions on magnetics 50, 3 (2014), 18–23.
- Tan et al. (2013) Sophia Tan, Weiya Xi, Zhi Yong Ching, Chao Jin, and Chun Teck Lim. 2013. Simulation for a shingled magnetic recording disk. IEEE transactions on magnetics 49, 6 (2013), 2677–2681.
- Thompson and Best (2000) David A Thompson and John S Best. 2000. The future of magnetic data storage technology. IBM Journal of Research and Development 44, 3 (2000), 311–322.
- Todd et al. (2012) Richard M Todd, Enfeng Jiang, Richard L Galbraith, JR Cruz, and Roger W Wood. 2012. Two-dimensional Voronoi-based model and detection for shingled magnetic recording. IEEE transactions on magnetics 48, 11 (2012), 4594–4597.
- Wang et al. (2018) Guohua Wang, David Hung-Chang Du, Fenggang Wu, and Shiyong Liu. 2018. Survey on High Density Magnetic Recording Technology. Journal of Computer Research and Development 55, 9 (2018), 2016.
- Wood (2000) R. Wood. 2000. The feasibility of magnetic recording at 1 Terabit per square inch. IEEE Transactions on Magnetics 36, 1 (2000), 36–42. https://doi.org/10.1109/20.824422
- Wu et al. (2020) Fenggang Wu, Bingzhe Li, Baoquan Zhang, Zhichao Cao, Jim Diehl, Hao Wen, and David HC Du. 2020. Tracklace: Data management for interlaced magnetic recording. IEEE Trans. Comput. 70, 3 (2020), 347–358.
- Wu et al. (2018) Fenggang Wu, Baoquan Zhang, Zhichao Cao, Hao Wen, Bingzhe Li, Jim Diehl, Guohua Wang, and David HC Du. 2018. Data management design for interlaced magnetic recording. In 10th USENIX Workshop on Hot Topics in Storage and File Systems (HotStorage 18).
- Zhu and Wang (2010) Jian-Gang Zhu and Yiming Wang. 2010. Microwave assisted magnetic recording utilizing perpendicular spin torque oscillator with switchable perpendicular electrodes. IEEE Transactions on Magnetics 46, 3 (2010), 751–757.
- Zhu et al. (2007) Jian-Gang Zhu, Xiaochun Zhu, and Yuhui Tang. 2007. Microwave assisted magnetic recording. IEEE Transactions on Magnetics 44, 1 (2007), 125–131.