This study examines the effectiveness of data overwriting as a method of information destruction and its resistance to subsequent data recovery or forensic analysis. Given the need to protect confidential and sensitive data from access by unauthorized persons, correct and irreversible deletion of information is an important element of cybersecurity. At the same time, among the effective methods of data destruction, one may choose the method that is most economically justified or least burdensome for the environment. The aim of this work is to consider in what situations and under what conditions data overwriting can be such a method.
The terms data recovery and data destruction are defined below, as are the methods of data destruction based on physical damage to, or destruction of, the media. Next, the basic principles of operation of hard drives and semiconductor media using Flash-NAND memory chips are described, with particular emphasis on the physics of writing and reading information. The methods proposed in the literature and the potential possibilities of recovering overwritten data are discussed, taking into account their physical and technical limitations and the risk of interference with the correct operation of data overwriting programs. Achievements to date in recovering data independently of the drive, using specialized devices, are described, and the directions of development of hard disk technology and electronic data carriers are indicated.
A data carrier is any object that allows information to be stored in a way that permits its subsequent faithful reproduction. The first data carriers were notched pieces of wood, clay tablets, papyrus rolls, knotted strings, and the like. The possibility of recording information other than by memorizing it greatly facilitated and accelerated the development of civilization. An important breakthrough in the history of data carriers was the invention of printing, which allowed easy and cheap copying and dissemination of information. However, both books written by hand by monastic scribes and mass-printed books are analog media that can be understood only by humans. (...)
Nowadays, digital data is stored in files organized into logical structures. These structures are usually presented to the user in the form of a directory tree, the directories sometimes being called folders. Logical structures are managed by file systems. However, this was not always the case. In the early days of computing, data was recorded and addressed directly in the physical addressing of the media, and the burden of finding the right data rested with the operator. This is why tapes and punched cards were often labeled in a human-readable way, sometimes even carrying a full repetition of the information encoded on them.
One of the basic tasks of a file system is to protect already written data from accidental corruption by the writing of other information in the same place. It is the file system that knows which fragments of the media area it manages are occupied and which are free and can accommodate further information. If the free space is too small to accommodate another portion of information, the system returns a message about insufficient free space on the disk (partition). The amount of free space can be increased by deleting (erasing) information already on the media.
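The way an operating system exposes this bookkeeping can be illustrated with a short sketch using Python's standard `shutil` module (the function name and threshold are arbitrary examples):

```python
import shutil

def has_room(path, needed_bytes):
    """Check whether the file system holding `path` reports enough free space."""
    usage = shutil.disk_usage(path)  # named tuple: (total, used, free), in bytes
    return usage.free >= needed_bytes
```

Note that this queries only the file system's accounting; it says nothing about what data physically remains in the "free" area, which is precisely why deleted information can often still be recovered.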
Data recovery is a set of activities aimed at regaining access to information that can no longer be reached by the usual means of the operating system, in a situation where no usable backup exists. Restoring data from an existing backup is not data recovery, but a routine procedure performed by system administrators. Conversely, if an efficient backup copy exists and remains at the disposal of a foreign administrator, no data destruction procedure can be considered effective. Only when all existing copies of the information have been properly destroyed can the information be considered destroyed.
Data destruction consists of actions that make future recovery of the destroyed information impossible by any means. Taking action to irreversibly destroy information need not stem from an intention to destroy evidence of illegal or unethical activities. It may be dictated by the protection of the interests of the data's owner, as well as by organizational or legal requirements related, for example, to the protection of personal data.
The hard disk is a non-volatile data carrier (one that does not require constant power to maintain its logical states) that stores information by magnetic recording on one or more platters sealed in an enclosure. In the early days of this type of mass storage, the enclosures could be opened and the platters removed and moved to other drives. Today, opening the enclosure, also known as the hermoblock, and interfering with the mechanical subsystem of the hard drive renders it unusable for further operation. Operations of this type are usually performed only by specialized laboratories in order to recover data from a faulty medium.
Data in hard drives is stored in a thin magnetic layer covering both surfaces of all platters. The physical properties of the materials from which this layer is made determine the parameters of writing and reading the signal, the achievable density of information recording, and the durability and resistance of the stored record to external factors such as external magnetic fields or temperature. The properties of the magnetic layer also affect the likelihood of errors when reading or writing data.(...)
The hard disk stores information on the surfaces of magnetic platters, on which concentric tracks divided into sectors are formed. To avoid eccentricity, tracks and sectors are created at the end of the production process, after the mechanical assembly of the disk. The process of creating the structure of tracks and sectors on the surface of the media is called low-level formatting.(...)
Since the turn of the 1980s and 1990s, the block of magnetic heads in hard drives has commonly been positioned by voice-coil motors consisting of a coil placed between two permanent magnets. On the opposite side of the block there is a set of magnetic heads. The whole assembly is placed on an axis that allows the block to rotate. The magnets are permanently attached to the hard drive housing, usually with screws or rivets.(...)
To transmit or store any information, there must be a defined mapping between logical states and physical states of the medium. The methods of representing logical states can be divided into two groups: RZ (Return-to-Zero) and NRZ (Non-Return-to-Zero). (...)
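As a rough illustration of the difference, the sketch below models both schemes as sampled waveforms. A polar variant is assumed here (levels +1/-1); real disk channel codes such as MFM or RLL are considerably more involved:

```python
def encode_nrz(bits, samples_per_bit=4):
    """Non-Return-to-Zero: the line holds the bit's level for the full period."""
    out = []
    for b in bits:
        out += [1 if b else -1] * samples_per_bit
    return out

def encode_rz(bits, samples_per_bit=4):
    """Return-to-Zero: the level is held for half the period, then returns to 0."""
    out = []
    half = samples_per_bit // 2
    for b in bits:
        level = 1 if b else -1
        out += [level] * half + [0] * (samples_per_bit - half)
    return out
```

The RZ waveform carries a transition in every bit cell (which eases clock recovery) at the cost of bandwidth, while NRZ packs more data into the same number of transitions.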
The magnetic signal read by the heads is analog. The analog nature of the signal is the basis for the belief that careful analysis could determine the previous logical state of a domain. An often-cited example from Peter Gutmann's work states that when a logical 0 is remagnetized to a logical 1, the resulting level is in effect 0.95, while remagnetizing a 1 to a 1 amplifies the signal to 1.05. Such an analysis would require capturing the signal directly from the block of magnetic heads using an oscilloscope, a magnetic force microscope, or similar apparatus.(...)
In the oldest hard drives, the data density was low, and changes in surface polarization produced clear and precise signal pulses. When reading the signal, it was enough to detect the peaks of these pulses (peak detection). Using this method requires that the amplitude of the signal be significantly higher than the noise level. As the recording density increases, interference between successive pulses - so-called Inter-Symbol Interference (ISI) - grows in importance and the signal-to-noise ratio falls. It becomes necessary to use ever more sensitive detectors or appropriate signal amplification. (...)
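A minimal model of such peak detection might look like this (the sample values and threshold are synthetic; real read channels operate on amplified, filtered analog signals):

```python
def detect_peaks(signal, threshold):
    """Flag sample i as a peak if its magnitude exceeds the threshold
    and is not smaller than either neighbouring sample."""
    peaks = []
    for i in range(1, len(signal) - 1):
        s = abs(signal[i])
        if s > threshold and s >= abs(signal[i - 1]) and s >= abs(signal[i + 1]):
            peaks.append(i)
    return peaks
```

The dependence on `threshold` shows why the method breaks down as density grows: once ISI and noise push pulse amplitudes toward the threshold, peaks are missed or invented, and more sophisticated detection (e.g. PRML) becomes necessary.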
Shortly after 2000, hard disk drive manufacturers reached the limit of their ability to further increase storage density with the longitudinal recording method then in use. Interactions between individual magnetic domains meant that further reduction of domain sizes and increases in their packing density on the platter surface would degrade data quality and increase the risk of data damage or loss. One of the most serious obstacles preventing further increases in longitudinal recording density is the phenomenon of superparamagnetism.(...)
Shingled Magnetic Recording (SMR) is a technology that exploits the fact that the read head can read a much narrower track than the write head writes. Increased recording density is achieved by partially overwriting the adjacent track: successive tracks are written with an offset from the previous ones corresponding to the width of the read head.(...)
Modern semiconductor media such as memory cards, pen drives and SSDs use Flash-NAND chips to store user information. NAND is an abbreviation of "not-and", the negation of the logical conjunction (AND) function.(...)
The oldest Flash-NAND chips could distinguish only two states of a memory cell: charged (usually interpreted as logical zero) and uncharged (logical one). The analog nature of the memory cells, together with increased accuracy in measuring the current flowing between the transistor electrodes during reading, made it possible to distinguish more than two logical states. This makes it possible to store more than one bit of information in a single memory cell. (...)
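The idea can be sketched as a mapping from a sensed voltage to a multi-bit value. The threshold voltages and the Gray-coded state order below are illustrative assumptions, not values from any datasheet; real devices calibrate these references per chip and per wear level:

```python
# Hypothetical read reference voltages for a 2-bit-per-cell (MLC) device.
THRESHOLDS = [1.0, 2.0, 3.0]
# A common Gray-coded ordering: adjacent states differ by a single bit,
# so a small voltage error corrupts at most one of the two stored bits.
LEVELS = ["11", "10", "00", "01"]

def read_cell(voltage):
    """Map a sensed cell voltage to a 2-bit logical value."""
    for i, t in enumerate(THRESHOLDS):
        if voltage < t:
            return LEVELS[i]
    return LEVELS[-1]
```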
Flash-NAND memories do not allow direct addressing of and access to individual memory cells. Transistors are grouped into pages, which are the minimum addressing unit for reading. The page size usually corresponds to the size of the memory chip's data register. The oldest Flash-NAND chips used two page types: small and large. A small page was 528 bytes - 512 bytes corresponding to a sector of user data plus 16 bytes of redundant information. A large page was 2112 B - 2048 B corresponding to four sectors of user data plus 64 B of redundant information.(...)
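Under the simple layout described above (user sectors first, redundant bytes at the end), splitting a raw page could be sketched as follows; note that real controllers often interleave the spare bytes in vendor-specific ways, so this is only a toy model:

```python
SECTOR = 512            # user-data bytes per sector
SPARE_PER_SECTOR = 16   # redundant bytes accompanying each sector

def split_page(page):
    """Split a raw NAND page into (user data, spare/redundant bytes),
    assuming sectors are stored first and all spare bytes follow."""
    n_sectors = len(page) // (SECTOR + SPARE_PER_SECTOR)
    data_len = n_sectors * SECTOR
    return page[:data_len], page[data_len:]
```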
Semiconductor media controllers perform a number of operations during writing, focused mainly on ensuring the highest possible speed and performance of the device, as well as on extending its life and reducing the risk of failure. The first of these goals is achieved primarily by dispersing data across all the chips that make up the medium. Such dispersion allows information to be processed in these chips in parallel, which can yield faster transfers both when writing and when reading. Ensuring the longest possible trouble-free operation of the medium, in turn, is achieved primarily by controlling wear and eliminating damaged blocks from use.(...)
Natural wear of memory cells is the most important cause of damage to information carriers using Flash-NAND memories. Failures caused by damage to transistor floating gates may produce errors in user files and in the logical structure of the file system, but also, frequently, errors in the content or structure of the service data used to ensure proper operation of the device - especially the subsystem that translates the physical addressing, based on block and page numbers, into logical LBA addressing. If the information necessary for the correct translation of physical addresses into logical ones is damaged, the basic microcode of the device does not allow access to the contents of the Flash-NAND chips. (...)
The occurrence of single bit errors while reading pages is not a sufficient reason for decommissioning a block. In most cases, single read errors are corrected using ECC (Error Correction Code). The errors that determine whether a block is considered damaged are usually those arising during erase and program operations in numbers exceeding the correction capability of the ECC codes.
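To illustrate how a single flipped bit can be corrected, here is the classic Hamming(7,4) code. Actual NAND controllers use far stronger codes (BCH, LDPC) over whole pages, so this is only a toy model of the principle:

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword with
    parity bits at positions 1, 2 and 4 (1-based)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the parities; a non-zero syndrome is the 1-based position
    of the single flipped bit. Returns the corrected 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 * 1 + s2 * 2 + s3 * 4
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

Any single-bit error in the 7-bit word is located and repaired; two or more errors exceed the code's capability, mirroring the situation in which a NAND block is finally retired.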
One of the basic factors affecting both the durability and the efficiency of semiconductor media using Flash-NAND chips is the number of free blocks ready to accept information. The most expensive way to obtain a large number of spare blocks is to produce media with a physical capacity significantly higher than the nominal one. For economic reasons, manufacturers look for solutions that achieve a similar effect without increasing production costs. They take advantage of the fact that, for the proper functioning of the medium and of the information contained on it, not all data actually has to be stored on the medium.
Physical methods of data destruction aim to damage the medium in such a way that recovering data from it becomes impossible. With these methods, the medium is inherently rendered unusable and must be disposed of. Physical methods of data destruction are widely used in many institutions obliged by law or internal regulations to destroy carriers in accordance with specific procedures. Physically inoperable carriers, for which non-destructive methods of irretrievable information removal cannot be used effectively, should also be destroyed this way.(...)
Chemical methods of information destruction come down to dissolving the carrier in various types of solution. The choice of substances used to dissolve the carriers is limited by the materials from which the carriers are made. While carriers made of plastic, such as CDs and DVDs, can be destroyed relatively easily, hard drives, made of many different materials including precious metals, require a more careful selection of the appropriate solution.(...)
Thermal methods of data destruction consist of subjecting the media to a sufficiently high temperature, whose required value differs from carrier to carrier. While media made of plastics can melt at temperatures of several dozen degrees Celsius, hard drives and semiconductor media using Flash-NAND chips can withstand temperatures of several hundred degrees Celsius.(...)
Mechanical methods of destroying data carriers are generally considered the most effective way of destroying information. At the same time, these methods are relatively cheap, especially the simplest of them, such as hammering or drilling through a disk. Professional devices, such as shredding machines or disk grinders, are also often used. However, the effectiveness of some of these methods is highly questionable.(...)
Demagnetization (degaussing) is used to destroy information on magnetic data carriers such as hard drives, floppy disks, and magnetic tapes. It is usually carried out using special devices - demagnetizers, also called degaussers. The most important element of such a device is the carrier chamber located inside a coil.(...)
Methods using ultraviolet or ionizing radiation are often mentioned, although not used in practice. These methods refer to the old way of erasing EPROM (Erasable Programmable Read-Only Memory) chips by irradiating them with ultraviolet light with a wavelength of 253.7 nm. The energy supplied by the ultraviolet light freed electrons from the floating gates of the transistors forming the memory bit cells.(...)
One proposed method of data destruction is the use of explosives or pyrotechnics. In practice, it can be regarded as intermediate between mechanical and thermal damage to the carrier. It is a very unreliable method of data destruction, and its effectiveness is random and difficult to verify.(...)
Court case files and criminal records are a rich source of ideas for destroying data by physically damaging the medium. In addition to the mechanical methods described above, the most popular actions aimed at destroying data include flooding the carrier with water or another liquid and causing short circuits or other electrical damage.(...)
On the Internet and in the literature, there are various proposed methods for recovering data that has been overwritten. They usually focus on recovering data from magnetic media, especially hard drives. In the case of semiconductor media, the situation is quite clear-cut: writing any information to a Flash-NAND memory block requires its prior erasure in full - emptying the floating gates of all the block's memory cells of the electric charges they contain. This technology therefore provides no grounds for claims about the potential recoverability of overwritten data.(...)
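The erase-before-write constraint can be modeled with a toy block in which programming can only clear bits (1 to 0), never set them; restoring any bit to 1 forces an erase of the whole block, which is what removes every trace of the previous contents:

```python
class NandBlock:
    """Toy model of a NAND block. Programming may only clear bits (1 -> 0);
    raising any bit back to 1 requires erasing the entire block."""

    def __init__(self, pages=4, page_size=8):
        self.pages = [[1] * page_size for _ in range(pages)]

    def erase(self):
        """Erase the whole block: every cell returns to the empty (1) state."""
        for p in self.pages:
            p[:] = [1] * len(p)

    def program(self, page_no, data):
        """Program one page; refuse writes that would need a 0 -> 1 change."""
        page = self.pages[page_no]
        if any(b == 0 and d == 1 for b, d in zip(page, data)):
            raise ValueError("cannot raise a bit to 1 without erasing the block")
        page[:] = [p & d for p, d in zip(page, data)]
```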
The phenomenon of hysteresis involves a delay in the response to external factors: the current state of a system may depend not only on the conditions in which the system finds itself, but also on its previous state. Magnetic materials, too, exhibit hysteresis, to a degree depending on the coercivity of the material. This fact underlies the concept of using magnetic hysteresis to determine the previous state of magnetization of the magnetic layer of a hard disk.(...)
Residual magnetism (remanence) is the trace of previous magnetization remaining in a demagnetized magnetic material. It can be found in various magnetic materials, including analog information carriers. In the case of digital media, including hard drives, residual magnetism is practically non-existent.(...)
One of the most frequently raised arguments for the possibility of recovering overwritten data is that the head may overwrite a track inaccurately, failing for many reasons to hit the track it is supposed to write precisely enough. Among the causes of such behavior, those most often indicated are insufficient precision of the disk's mechanical elements, eccentricity of the axis on which the platters are mounted during disk operation, and differences in the thermal expansion of the materials from which individual disk components are made. Problems of this type are evidenced by experience with audio tapes, where it is not uncommon to hear short fragments of the previous recording after a new track has been recorded.(...)
Described by Francis Bitter in 1931, the method of observing magnetic domains is based on a colloidal solution containing magnetic particles (usually iron oxides). Surfactants in the solution prevent these particles from agglomerating. The Bitter method is invasive: the solution is spread on a cleaned and polished surface, where the magnetic particles are attracted by the magnetic field emanating from the boundaries of the magnetic domains, creating dark lines along these boundaries. The arrangement of these lines corresponds to the pulses generated by the domain walls in the electrical waveform read by the heads.(...)
The presence of a magnetic field can be observed using the magneto-optical effects discovered by Michael Faraday and John Kerr. The Faraday effect is the rotation of the plane of polarization of linearly polarized light as it passes through a medium under the influence of a magnetic field. The angle of rotation depends on the magnetic induction and on the length of the path over which it acts on the light. A very similar phenomenon is the Kerr effect, the rotation of the plane of polarization of light reflected from a magnetized surface. The Kerr effect is used, for example, in magneto-optical media and can be used to observe surface magnetization.(...)
The magnetic force microscope is a device that registers the magnetization of the examined surface through the frequency modulation of the vibrations of a tip covered with a thin layer of ferromagnetic material. This device makes it possible to scan the entire surface of a magnetic platter, including the areas between the tracks, and to obtain a high-resolution image of the magnetic forces. However, the image obtained in this way is only a reflection of the surface magnetization, and to extract useful information from it a number of problems must be overcome. (...)
Spin-stands are a group of devices used mainly by hard drive manufacturers to test hard drive components in factory laboratories. These devices can write and read a magnetic platter using a magnetoresistive head in much the same way as a hard drive does, except that interpreting the magnetic signal they read in order to obtain a logical image of the data stored on the disk runs into problems similar to those of scanning the platter surface with a magnetic force microscope. For this reason, factory tests do not use real data, but write reference tracks using specially prepared patterns. (...)
In the case of a technically sound hard drive, it is possible to capture and acquire the analog signal from the hard drive heads by connecting oscilloscope probes to the read-channel leads in the connector of the magnetic head block. The analog signal captured in this way, unprocessed by the signal processor, can be analyzed in search of traces of a previous recording. Obtaining the signal in this way is faster and easier than with spin-stand devices, because it does not involve interfering with the disk's mechanical subsystem, which would introduce eccentricity; but it does not allow analysis of the signal lying at the edge of the track or beyond it. Controlled by the original drive firmware and using the original servo, the head will follow the center of the track.(...)
The hard disk, in addition to user data, contains a good deal of information that most users are unaware of. First of all, this is the information contained in the negative tracks of individual or all platter surfaces - the service zone. This information is necessary for the correct operation of the drive and includes unique information critical for access to user data, such as defect lists, zone tables, information necessary for the correct translation of physical addresses into LBA addresses, a copy of the ROM content, SMART and test logs, as well as information of unknown or irrelevant significance. The service area often also contains tracks for testing heads. In addition, the tracks in the LBA-addressed area carry servo fields and sector address marks, as well as the codes and checksums used by internal correction mechanisms.(...)
All manufactured data carriers comply with specific standards ensuring their compatibility with other devices and software.
In particular, hard drives and SSDs comply with the ATA or SCSI standards. The theoretical scope for modifying a drive's internal software for unauthorized data interception is considerable, but such modifications run into a number of technical and logistical problems.(...)
The fourth version of the ATA standard introduced the SET MAX ADDRESS command, which allows the number of available LBA sectors to be reduced below the factory maximum. This creates an area of the disk called the HPA - Host Protected Area or, less often, Hidden Protected Area - which is inaccessible to operating systems and most programs. The command is quite widely used in hard drive repairs to hide degraded areas at the end of the drive, but it can also be used to hide areas containing information. The HPA may contain information necessary to restore correct operation of the operating system or used by anti-theft security systems, but it can also be used to hide arbitrary data.(...)
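The size of the hidden area follows directly from the difference between the factory (native) maximum address and the limit set with SET MAX ADDRESS. A sketch, assuming 512-byte sectors and treating both inputs as sector counts:

```python
SECTOR_BYTES = 512  # assumed logical sector size; 4Kn drives differ

def hpa_size_bytes(native_max, current_max):
    """Size of the Host Protected Area: the sectors between the factory
    (native) maximum and the limit set with SET MAX ADDRESS."""
    if current_max > native_max:
        raise ValueError("current max cannot exceed native max")
    return (native_max - current_max) * SECTOR_BYTES
```

A forensic examiner compares the drive's reported capacity with the native maximum (e.g. via READ NATIVE MAX ADDRESS) precisely to detect such a discrepancy.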
It is technically possible to prepare the internal software of a medium in such a way that it duplicates data in a part of the medium outside the LBA addressing space. For economic reasons, this is extremely unlikely at the factory stage: it would require producing a medium with twice the nominal physical capacity, and since the procedure would have to cover entire production batches for such a medium to have a real chance of reaching the targeted computer, it would be extremely uneconomical. As with tampering with the firmware to disrupt the correct operation of ATA commands, a precise attack directed against a specific entity would be much more likely.(...)
Some types of media may store data, or part of it, outside the LBA addressing. In particular, this applies to media using Flash-NAND chips, such as SSDs, pen drives and memory cards, as well as to Flash-NAND chips used as SSHD buffers. SMR hard drives, too, contain physically addressed units holding outdated user data that is no longer assigned to LBA addresses. (...)
Data being worked on is very often cached. Buffering is usually dictated by the desire to increase performance under given economic constraints. Typically, faster data carriers are significantly more expensive than slower ones. Hence, it is economically justified to store large data sets on capacious, slow carriers, while using fast buffers of limited capacity, sufficient to hold the information needed for ongoing processing. (...)
Defects in data carriers are a common problem. Although it is theoretically possible to make carriers completely free of defects, doing so is economically unjustified given market expectations of capacious, efficient devices at a low price. On the other hand, there are limits to the acceptable risk of losing information through incorrect operation of the medium. Manufacturers therefore implement solutions that eliminate damaged allocation units from the LBA addressing space, so that they cannot be used for data storage and do not adversely affect the operation of the device. These same solutions can be abused to move fully functional allocation units, in which sensitive information may be hidden, outside the LBA addressing.
The purpose of steganography is to hide the very existence of significant information from unauthorized persons. The basic difference between steganography and cryptography is that cryptography does not aim to hide the existence of information, only its content. Of course, steganography and cryptography can be used together. From the standpoint of data destruction, steganography carries the risk that hidden data will survive selective overwriting.
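A classic illustration is least-significant-bit (LSB) steganography, sketched below for raw bytes. Real schemes target image or audio samples and add length and integrity bookkeeping, omitted here; the point is that the carrier looks almost unchanged, so a selective wipe that skips it leaves the secret intact:

```python
def hide(carrier, secret):
    """Hide `secret` in the least-significant bits of `carrier` bytes.
    Requires 8 carrier bytes per secret byte."""
    out = bytearray(carrier)
    for i, byte in enumerate(secret):
        for bit in range(8):
            j = i * 8 + bit
            out[j] = (out[j] & 0xFE) | ((byte >> bit) & 1)
    return bytes(out)

def reveal(carrier, n):
    """Recover n hidden bytes from the carrier's least-significant bits."""
    secret = bytearray()
    for i in range(n):
        byte = 0
        for bit in range(8):
            byte |= (carrier[i * 8 + bit] & 1) << bit
        secret.append(byte)
    return bytes(secret)
```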
The only device so far designed specifically to recover data from a magnetic platter independently of the hard drive is Active Front's Signal Trace. By its design it can be classified as a spin-stand device, even though spin-stands are intended for the testing of hard drive components by factory laboratories.(...)
It is hard not to notice the potential of spin-stand devices for computer forensics and data recovery, and a number of experiments have been carried out in this area in the past. The most significant are the works of Isaak Mayergoyz and Chun Tse, whose interests also included the possibility of recovering overwritten data with a spin-stand device.(...)
Any of the spin-stand devices available on the market can be used to recover user data from a hard drive. Such a device, like the Signal Trace described earlier, requires appropriate adaptation to the specific hard disk model and servo pattern; it must also be properly programmed, and the acquired analog signal correctly interpreted. This procedure was carried out in 2007 by Chun-Yang Tseng, who recovered data from a 2.57 GB disk produced in 1997 using the Guzik 1701-MP device.(...)
Experiments carried out in the early 1990s by Romel Gomez's team using a magnetic force microscope showed that it was possible to determine the previous magnetization of individual domains after the information had been overwritten. It should be remembered, however, that the probability of correctly determining one of the two magnetization polarization values at a specific location is 50%. Since the probability of the joint occurrence of independent events is the product of their individual probabilities, this value decreases exponentially with each subsequent domain. Similarly, the probability of accumulated errors during disk operation increases and the probability of recovering any useful data decreases. This is more or less the result that can be obtained by trying to recover overwritten data by flipping a coin and interpreting the heads/tails outcome as a specific logical state.(...)
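The coin-flip argument can be made concrete: with an independent 50% chance per bit, the probability of correctly recovering even a single 512-byte sector collapses to effectively zero:

```python
def recovery_probability(n_bits, per_bit=0.5):
    """Probability of guessing all n bits correctly when each independent
    bit is recovered with probability `per_bit` (0.5 = a coin flip)."""
    return per_bit ** n_bits

# A single 512-byte sector holds 4096 bits; 0.5 ** 4096 underflows an IEEE
# double, i.e. the chance is astronomically small.
sector_bits = 512 * 8
```

Already at 128 bits the probability is about 2.9e-39; long before a full sector is reached, the figure is smaller than any physically meaningful quantity.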
Independently of analyzing the magnetization of the hard disk platter surface with magnetic force microscopes and spin-stand devices, one can attempt a detailed analysis of the signal read by the magnetic heads. Research in this direction was conducted by Serhiy Kozhenewski of the Kiev computer service Epos, which specializes in data recovery. During these studies, differences were noticed in the shape of the pulses produced in the signal waveform by changes of magnetization.(...)
Successful recovery of non-overwritten data independently of the drive incidentally revealed the scale of the problems involved in trying to recover overwritten data. In particular, it is very difficult to obtain, when reading along the edges of the tracks, a signal strong enough to be decoded and used in data recovery. Moreover, even after appropriate amplification, this signal will come predominantly from the current state of the recording on the track. This results, among other things, from the greater width of the write head. In addition, the signal will be strongly disturbed by the signal coming from the adjacent track. (...)
Progress in the development of hard drives has led to a significant increase in their capacity and speed while reducing their size. This required a significant increase in recording density, improvements in the precision of mechanical components, and the use of increasingly effective methods of signal processing, coding, and error correction. The growing recording density leaves less and less room for errors in the functioning of hard drives that could result in inaccurate overwriting of previous information and allow even fragmentary recovery.(...)
Two-Dimensional Magnetic Recording (TDMR) is intended to help increase the resolution of the signal, distinguish it from noise, and reduce the impact of inter-track interference. This technology uses information about adjacent magnetic domains in two dimensions when digitizing and decoding the signal.(...)
HAMR (Heat-Assisted Magnetic Recording) and MAMR (Microwave-Assisted Magnetic Recording) are technologies intended to enable the use of magnetically harder materials. The use of materials with greater coercivity is expected to increase the recording density by further significantly reducing the size of the magnetic domains. The obstacle to using magnetically hard materials in disks, such as iron-platinum alloys (FePt), is that their remagnetization requires a magnetic field of much greater strength than that needed by the commonly used cobalt alloys.(...)
IMR (Interlaced Magnetic Recording) is a way of organizing data on drives that use the energy-assisted recording technologies HAMR and MAMR. It involves two types of tracks: lower and upper. The so-called upper tracks are narrower than the lower ones and are written using less energy, partially overwriting the two adjacent lower tracks. Thus, each lower track is narrowed by the two upper tracks that overwrite its edges, and consequently its effective width becomes smaller than that of an upper track. This method of recording is expected to allow a density of over a million tracks on the surface of a 2.5" platter.(...)
BPM (Bit Patterned Media) is a technology based on separating individual magnetic domains with silicon dioxide. Separating the domains limits the influence of neighboring areas and allows their size to be reduced; in this way, homogeneous magnetic islands of minimal dimensions can be obtained on the carrier surface. Separating adjacent magnetic domains with a diamagnetic material not only increases their resistance to superparamagnetism, but also improves the signal-to-noise ratio.(...)
Media based on Flash-NAND memory chips have recently been achieving better and better performance. They are also becoming cheaper and more capacious, with the increase in capacity resulting more from growth in the capacity of the memory chips themselves than from designs using a larger number of chips. However, this progress comes at the price of increased media complexity, especially in error correction and defect management algorithms. The failure rate, especially of the cheapest Flash-NAND memory chips, is also increasing.(...)
Resistive memory technology is based on linking logical states to the resistance level of bit cells. This is a different approach than in Flash or DRAM memory, where logical states correspond to the electric charge stored in a memory cell. Electrochemical resistive memories of the Redox RAM type are most often designated ReRAM or RRAM. They make it possible to build memory cells with dimensions below 10 nanometers and write and read times below 10 nanoseconds. In addition, they enable three-dimensional memory chips similar to 3D-NAND chips and have the potential to use multi-level cell technology.
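The resistance-to-state mapping, including its multi-level cell potential, can be sketched as a simple quantizer. The threshold values and the state assignment below are invented for illustration; real devices calibrate such boundaries per chip.

```python
import bisect

# Hypothetical MLC ReRAM read: quantize a measured cell resistance (ohms)
# into a 2-bit logical state. Thresholds are invented for illustration.
THRESHOLDS = [10_000, 50_000, 200_000]  # boundaries between the 4 states
STATES = [0b11, 0b10, 0b01, 0b00]       # convention: low resistance -> high bits

def read_cell(resistance_ohm: float) -> int:
    """Map one resistance measurement to one of four 2-bit states."""
    return STATES[bisect.bisect_left(THRESHOLDS, resistance_ohm)]

print(read_cell(5_000))    # 3  (lowest-resistance state, 0b11)
print(read_cell(300_000))  # 0  (highest-resistance state, 0b00)
```

Storing two (or more) bits per cell this way is what gives resistive memories the same density lever that multi-level cells give Flash-NAND.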
Magnetoresistive memories, MRAM (Magnetic Random Access Memory) and STT-MRAM (Spin Transfer Torque Magnetic Random Access Memory), use magnetic recording to store information, but unlike other types of magnetic media they have no mechanical subsystem; addressing, reading and writing are handled entirely by electronic circuits. The main advantages of magnetoresistive memories are fast writes (times comparable to DRAM and much faster than Flash) and the ability to address single bytes (as opposed to Flash-NAND memory, where the minimum addressing unit is a page of several hundred to several thousand bytes). (...)
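The page-addressing constraint of Flash-NAND mentioned above amounts to simple address arithmetic: a flat byte address must be split into a page number and an offset, and any change to a single byte forces the controller to rewrite the whole page. The 4096-byte page size below is one common value, chosen here for illustration.

```python
# NAND flash is addressed in pages, not bytes: to change one byte, the
# controller must read, modify, and rewrite an entire page. This sketch
# shows the address arithmetic for an assumed 4096-byte page size.
PAGE_SIZE = 4096  # bytes; real chips range from ~512 B to several KiB

def nand_locate(byte_addr: int) -> tuple[int, int]:
    """Translate a flat byte address into (page number, offset within page)."""
    return byte_addr // PAGE_SIZE, byte_addr % PAGE_SIZE

page, offset = nand_locate(10_000)
print(page, offset)  # 2 1808 -> touching this byte rewrites all 4096 bytes of page 2
```

Byte-addressable MRAM needs none of this translation, which is why it is attractive for workloads with many small updates.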
Ferroelectric memories, most often designated FerroRAM, FeRAM, FRAM or F-RAM, use the electrical polarization of ferroelectric materials. The idea of using ferroelectrics to build data carriers appeared in 1952 at the Massachusetts Institute of Technology. The first attempts to create ferroelectric data carriers were made in the Soviet Union in the 1960s and 1970s. The basic material used by Soviet engineers was barium titanate, but the 307PB1 memory chip they developed was never widely used. (...)
Phase-change memories use reversible phase transitions of a chalcogenide between crystalline and amorphous states - typically an alloy of germanium (Ge), antimony (Sb) and tellurium (Te), although research into other materials is ongoing. Chalcogenides of silver (Ag) and indium (In) have been used in rewritable optical media since the 1990s. The first phase-change memories were presented by Intel in 2002. The abbreviations PCM and PCRAM, or less often PRAM, from the English term Phase Change Random Access Memory, are most often used to denote this type of memory. (...)
Graphene can serve not only as a replacement for silicon in the production of Flash memories, but can also be used in a completely different way in resistive devices. In this solution, layers of graphene rolled into nanotubes are used. Memory devices based on carbon nanotubes can, like Flash memories, be made in NOR and NAND variants. As a result, NanoRAM chips (also referred to as NRAM) may in the future replace Flash memories both for storing device firmware and as storage media. (...)
DNA sequences store the information necessary for the functioning and replication of all organisms, from single-celled bacteria and paramecia to baobabs, humans and whales. The deoxyribonucleic acid molecules found in every cell not only encode the information necessary to produce various proteins, but also contain a kind of program that determines when and under what circumstances specific proteins are to be produced. This has allowed the formation of a variety of complex organisms with many specialized organs. The program also contains redundant information that allows most errors arising when DNA molecules are copied to be corrected, as well as recombination procedures performed during reproduction, when cells containing DNA derived from two different organisms combine. It is no wonder that DNA sequences have attracted the attention of scientists working on new computer information carriers. (...)
So far, no one has presented an effective method of recovering overwritten data. Due to the assumptions adopted in the experiment and the significant differences between the conditions created in the laboratory and the technical solutions used in hard drives serving as actual information carriers, the method of recovering overwritten data from a reference track using a spin-stand device, presented by Isaak Mayergoyz and Chun Tse, cannot be considered such a method. While in the case of semiconductor media the inability to recover overwritten data is not controversial, in the case of magnetic media, especially hard drives, doubts are constantly being raised in this regard. (...)
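A minimal sketch of the overwriting approach discussed in this work is given below: an in-place pass of random data over a file's contents. The function and its single-pass default are this sketch's own choices, not a specific sanitization standard; note that on Flash-NAND media, wear leveling may leave stale copies of the data in remapped blocks, so in-place file overwriting is dependable mainly for media that rewrite sectors in place.

```python
import os

def overwrite_file(path: str, passes: int = 1) -> None:
    """Overwrite a file's contents in place with random bytes.

    Effective on media that rewrite sectors in place (classic hard drives).
    On Flash-NAND devices, wear leveling can preserve old copies of the
    data in remapped blocks, so whole-device sanitization is preferred there.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # one full pass of random data
            f.flush()
            os.fsync(f.fileno())        # push the data past OS caches
```

Copy-on-write or journaling filesystems can likewise redirect the write elsewhere, which is why overwriting tools commonly target whole devices or partitions rather than individual files.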