The Windows 8 developers' blog has published a large article describing the architecture of the new ReFS file system (Resilient File System), previously known under the code name Protogon, which is being developed for Windows Server 8 and will later be refined and shipped on client Windows machines. Its predecessor NTFS first appeared back in 1993 as part of Windows NT 3.1; by the release of Windows XP in 2001, NTFS had reached version 3.1, and only then was it installed on client machines. A somewhat similar path awaits ReFS.
NTFS no longer meets the requirements of modern file systems, for many reasons; moreover, it has never been regarded as an elegant or high-performance file system.
There was a time when the question of choosing the right file system did not bother users at all. Although more than one file system existed even before personal computers appeared, there was no real choice as such: there were many incompatible (or only partially compatible) architectures, and behind each stood a company with its own operating system and its own understanding of what was good and what was bad. Data carriers were also different and incompatible with one another. Even when they were hardware-compatible (floppy disk drives, for example, were used by many computing systems, and the main drive form factors were more or less standardized), each system arranged data on them in its own way. Tape drives were more or less compatible, since they were most often used to exchange data between different systems.
At install time LILO stores information about the BIOS disk numbers, and then uses this data when the boot loader runs. When the boot device changes, the disk numbers in the BIOS change as well (the selected boot disk is numbered 0x80), so the information stored by LILO no longer matches the actual configuration.
You must explicitly specify the disk numbers in /etc/lilo.conf:
disk=/dev/hda
bios=0x80
disk=/dev/hdb
bios=0x81
What does ext3 do when a crash occurs (for example, because of a power failure) at different stages of file-system operation?
According to Murphy's Law, the fsck check that runs every N boots always happens at the wrong time. By default, interrupting the check with CTRL-C makes fsck return an error code, which causes the file system to be remounted read-only.
But this can be easily changed by editing /etc/e2fsck.conf:
[options]
allow_cancellation = true
The system monitor in Ubuntu 9.10 showed that there was a problem with one of the disks (/dev/sdb), which had been added to LVM.
I had to remove this disk from LVM as follows. All of the operations are dangerous and must be executed as root.
1. First determine how much LVM should be reduced by
2. Then convert the EXT3 file system to EXT2, and reduce
3. Remove the physical volume and extract it from VG
4. Extend LVM and EXT2 to the maximum
5. Restore EXT3
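The five steps above can be sketched with standard LVM and e2fsprogs commands. This is a hypothetical outline, not the exact commands used: the volume-group name (vg0), logical-volume name (lv0), and the target size (100G) are placeholders that must match your layout, the logical volume must be unmounted, and everything runs as root.

```shell
# Placeholders: vg0/lv0 and the 100G target size must be adapted to
# your actual layout. Run as root, on an unmounted logical volume.
pvdisplay /dev/sdb                    # 1. see how much space /dev/sdb contributes
umount /dev/vg0/lv0
tune2fs -O ^has_journal /dev/vg0/lv0  # 2. ext3 -> ext2 (drop the journal)
e2fsck -f /dev/vg0/lv0
resize2fs /dev/vg0/lv0 100G           #    shrink the FS below the reduced LV size
lvreduce -L 100G /dev/vg0/lv0         #    then shrink the LV itself
pvmove /dev/sdb                       # 3. migrate remaining extents off the disk
vgreduce vg0 /dev/sdb                 #    remove the physical volume from the VG
pvremove /dev/sdb
lvextend -l +100%FREE /dev/vg0/lv0    # 4. grow the LV back over all free space
resize2fs /dev/vg0/lv0                #    grow ext2 to fill the LV
tune2fs -j /dev/vg0/lv0               # 5. add the journal back -> ext3
```

Shrinking the file system before the logical volume (and growing them in the opposite order) is the critical detail: doing it the other way around cuts off live data.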
The easiest way is to use the universal TestDisk utility (http://www.cgsecurity.org/wiki/TestDisk, /usr/ports/sysutils/testdisk), which supports many file systems, including ext2, ext3, UFS, FAT, and NTFS. Besides recovering files, TestDisk can find and restore the contents of deleted disk partitions.
To recover deleted files by their type (e.g., photos), you can use the PhotoRec tool (http://www.cgsecurity.org/wiki/PhotoRec).
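As a sketch of how the two tools are invoked (the device name /dev/sdb is an example; both tools then walk you through their menus):

```shell
# /dev/sdb is a placeholder for the disk you are recovering from
testdisk /list /dev/sdb   # non-interactive: list partitions, including deleted ones
testdisk /dev/sdb         # interactive partition and file recovery session
photorec /dev/sdb         # interactive signature-based file recovery (photos etc.)
```

Always write recovered files to a different disk than the one being scanned.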
The blktrace utility (found in the Ubuntu and Debian repositories) lets you monitor exactly what data is being transferred to a specified block device.
For example, you can see the general statistics and details of data exchange with /dev/sda by executing the command:
blktrace -d /dev/sda -o - | blkparse -i -
where blkparse is a filter for the result visualization.
In CentOS 5.x there is no proper iotop support, without which it is hard to see which process is loading the disk subsystem the most.
But you can use the disktop.stp script, written for the
SystemTap dynamic tracing subsystem.
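A sketch of running it (assumes the systemtap package and matching kernel debuginfo are installed; disktop.stp ships with SystemTap's examples, though the exact path varies between distributions):

```shell
# path to the examples directory may differ on your system
stap -v /usr/share/doc/systemtap/examples/io/disktop.stp
```

The script periodically prints the processes generating the most disk I/O, similar to what iotop shows.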
MBR backup:
dd if=/dev/hda of=mbr_backup.bin bs=1 count=512
Swap if and of around to restore the entire MBR.
The partition table is located in MBR at 0x01BE (446) offset and consists of 4 records of 16 bytes.
To recover the partition table only:
dd if=mbr_backup.bin of=/dev/your-device bs=1 count=64 skip=446 seek=446
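The backup and restore commands above can be exercised safely on a scratch image file before touching a real disk; disk.img below is a stand-in for a device such as /dev/hda.

```shell
# Safe demonstration on a scratch image file; substitute your real
# device (e.g. /dev/hda) for disk.img when doing this for real.
dd if=/dev/urandom of=disk.img bs=512 count=4 2>/dev/null     # fake 2 KiB "disk"

# back up the full 512-byte MBR
dd if=disk.img of=mbr_backup.bin bs=512 count=1 2>/dev/null

# restore the full MBR (same command with if/of swapped;
# conv=notrunc keeps dd from truncating the image file)
dd if=mbr_backup.bin of=disk.img bs=512 count=1 conv=notrunc 2>/dev/null

# restore only the 64-byte partition table at offset 446 (0x1BE)
dd if=mbr_backup.bin of=disk.img bs=1 count=64 skip=446 seek=446 conv=notrunc 2>/dev/null
```

Note conv=notrunc: it only matters for image files (a plain dd with seek would truncate the file), but it is harmless on a real block device.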