
gpt's Issues

Creating a new table

I see methods for reading a table, and modifying a table that already exists, but is there a way to initialize a disk with a new table?

Rounding issue of `size` in `add_partition`

The add_partition method takes the partition size in bytes as an argument. It then divides this size by the block size, which effectively rounds it down to the next lower multiple of the block size:

gpt/src/lib.rs

Lines 231 to 242 in 894899b

let size_lba = match size.checked_div(self.config.lb_size.into()) {
    Some(s) => s,
    None => {
        return Err(io::Error::new(
            io::ErrorKind::Other,
            format!(
                "size must be greater than {} which is the logical block size.",
                self.config.lb_size
            ),
        ));
    }
};

Later, the last_lba of the partition is then created through first_lba + size_lba - 1:

gpt/src/lib.rs

Lines 257 to 264 in 894899b

let part = partition::Partition {
    part_type_guid: part_type,
    part_guid: uuid::Uuid::new_v4(),
    first_lba: starting_lba,
    last_lba: starting_lba + size_lba - 1_u64,
    flags,
    name: name.to_string(),
};

So if the given partition size is not aligned to the block size, it is silently rounded down. I would instead expect it to be rounded up, or alternatively an error to be returned.


A side note about the checked_div usage above: the error message doesn't match the error cause. checked_div returns None only on an attempted division by zero, which means the None branch is entered only if self.config.lb_size is zero. From the error message it seems like you wanted to guard against size < block_size instead.
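A sketch of the rounding behavior I'd expect instead, assuming size and lb_size are plain u64 values (the function name and error type are illustrative, not the crate's API):

```rust
// Hypothetical helper: round the requested byte size *up* to the next
// multiple of the logical block size, and reject zero or overflowing sizes.
fn size_to_lba(size: u64, lb_size: u64) -> Result<u64, String> {
    if lb_size == 0 {
        return Err("logical block size must be non-zero".to_string());
    }
    if size == 0 {
        return Err("size must be non-zero".to_string());
    }
    // Ceiling division: (size + lb_size - 1) / lb_size, written with
    // checked arithmetic so sizes near u64::MAX don't silently wrap.
    let lba = size
        .checked_add(lb_size - 1)
        .ok_or("size overflows when rounded up")?
        / lb_size;
    Ok(lba)
}

fn main() {
    assert_eq!(size_to_lba(512, 512), Ok(1));
    assert_eq!(size_to_lba(513, 512), Ok(2)); // rounded up, not silently down
    assert!(size_to_lba(0, 512).is_err());
    println!("ok");
}
```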

Partition creates empty partition entries

The num_partition field from the GPT table header indicates the total number of possible partition array entries, not the number of partitions actually in use. As such, when partition is called, it creates a tree that contains both the actual partitions and a bunch of empty entries. Since empty entries are all zeroed, an easy fix is, in file_read_partitions (in partition.rs), to check whether the bytes read for a Partition entry are all zero and skip that entry if so. See https://metebalci.com/blog/a-quick-tour-of-guid-partition-table-gpt/#gpt-partition-entry-array
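The proposed check can be sketched in a few lines; the 128-byte entry size follows the usual GPT layout, and the function name is illustrative rather than the crate's API:

```rust
// Sketch of the proposed fix: a GPT partition entry whose bytes are all
// zero is unused and should be skipped when building the partition tree.
fn is_unused_entry(entry_bytes: &[u8]) -> bool {
    // An all-zero type GUID (first 16 bytes) already marks an entry as
    // unused, but checking the whole entry is the conservative variant
    // suggested above.
    entry_bytes.iter().all(|&b| b == 0)
}

fn main() {
    let empty = [0u8; 128];
    let mut used = [0u8; 128];
    used[0] = 0xEB; // any non-zero byte means the entry is in use
    assert!(is_unused_entry(&empty));
    assert!(!is_unused_entry(&used));
    println!("ok");
}
```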

Reading from arbitrary Read + Seek types

When using this crate for actual block device drivers with partitioning, some extra flexibility might be required, because it doesn't really make sense to open a file at a Path in a driver that itself provides that path. AFAICT many functions in this crate could be renamed (read_partitions => read_partitions_from_file).

#51 adds the minimal support necessary to use this crate, but GptConfig and GptDisk might need changing.

Data loss after loading an existing partition table with unknown partition types

If the library doesn't know about a partition's UUID ahead of time, loading the partition from disk will give it a null UUID. Null UUIDs are treated as "unused" and can be overwritten. Deleting a partition and then adding a new one will overwrite valid, usable partitions and data.

The following test illustrates the behavior.

#[test]
fn test_remove_add_custom_partition_type() {
    const TOTAL_BYTES: usize = 1024 * 64 * 4;
    let mut mem_device = Box::new(std::io::Cursor::new(vec![0_u8; TOTAL_BYTES]));

    // Create a protective MBR at LBA0
    let mbr = gpt::mbr::ProtectiveMBR::with_lb_size(
        u32::try_from((TOTAL_BYTES / 512) - 1).unwrap_or(0xFF_FF_FF_FF));
    mbr.overwrite_lba0(&mut mem_device).unwrap();

    let mut gdisk = gpt::GptConfig::default()
        .initialized(false)
        .writable(true)
        .logical_block_size(disk::LogicalBlockSize::Lb512)
        .create_from_device(mem_device, None)
        .unwrap();
    // Initialize the headers using a blank partition table
    gdisk.update_partitions(BTreeMap::<u32, gpt::partition::Partition>::new()).unwrap();
    // At this point, gdisk.primary_header() and gdisk.backup_header() are populated...
    // Add a few partitions to demonstrate how...

    let parttype1 = gpt::partition_types::Type {
        guid: "8DBBAB53-6255-4B4E-9CC6-11672709F7B4",
        os: gpt::partition_types::OperatingSystem::Custom("Smattering of CPIOs".into()),
    };

    assert_eq!(1, gdisk.add_partition("test1", 1024 * 12, gpt::partition_types::BASIC, 0, None).unwrap());
    assert_eq!(2, gdisk.add_partition("test2", 1024 * 18, parttype1.clone(), 0, None).unwrap());
    assert_eq!(3, gdisk.add_partition("test3", 1024 * 18, parttype1.clone(), 0, None).unwrap());
    let mut mem_device = gdisk.write().unwrap();
    mem_device.seek(std::io::SeekFrom::Start(0)).unwrap();

    let mut gdisk = gpt::GptConfig::default()
        .initialized(true)
        .writable(true)
        .logical_block_size(disk::LogicalBlockSize::Lb512)
        .open_from_device(mem_device)
        .unwrap();
    assert_eq!(3, gdisk.remove_partition(Some(3), None).unwrap());

    let (_, length) = gdisk.find_free_sectors().last().unwrap().clone();
    let block_size_bytes = match gdisk.logical_block_size() {
        gpt::disk::LogicalBlockSize::Lb512 => 512,
        gpt::disk::LogicalBlockSize::Lb4096 => 4096,
    };

    println!("{:#?}", gdisk.partitions());

    assert_eq!(3, gdisk.add_partition("test3", length * block_size_bytes, gpt::partition_types::LINUX_FS, 0, None).unwrap());
}

This is out of spec; any UUID is a valid partition type.

I believe this line of partition.rs is at fault:

part_type_guid: Type::from_uuid(&type_guid).unwrap_or_default(),

Version 3.0.0 breaks on nightly

Due to rust-lang/rust#113152, crc ^1.8 breaks on nightly. A version bump to stabilize #87 should fix this unless I'm mistaken. Build of gpt 3.0.0 breaks with the following error currently.

error[E0635]: unknown feature `proc_macro_span_shrink`
  --> /home/fish/.cargo/registry/src/index.crates.io-6f17d22bba15001f/proc-macro2-1.0.51/src/lib.rs:92:30
   |
92 |     feature(proc_macro_span, proc_macro_span_shrink)
   |                              ^^^^^^^^^^^^^^^^^^^^^^

For more information about this error, try `rustc --explain E0635`.
error: could not compile `proc-macro2` (lib) due to previous error

Improve ProtectiveMBR API

Currently the API for creating a ProtectiveMBR is somewhat unclear, or at least easy to misuse. While the documentation does explain that the input should be the number of logical blocks not including the PMBR, the naming and much of the surrounding API are unclear.

Specifically, the lb_size term is overloaded. On GptConfig, you have logical_block_size which expects a gpt::disk::LogicalBlockSize value. lb_size itself is set to be a gpt::disk::LogicalBlockSize in many of the tests and documentation.

let mbr = gpt::mbr::ProtectiveMBR::with_lb_size(<blah>); // Wants size _in_ Logical Blocks, not size _of_ Logical Block
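To illustrate what the argument actually is, here is the computation spelled out as plain arithmetic (the helper name is hypothetical; the saturation to 0xFFFF_FFFF mirrors the usage shown in the data-loss test above):

```rust
// The argument is the disk size *in* logical blocks, minus one for the
// protective MBR itself (LBA 0) - not the size *of* a logical block.
fn pmbr_size_in_lbas(disk_bytes: u64, block_bytes: u64) -> u32 {
    let lbas = disk_bytes / block_bytes - 1;
    // The MBR field is 32 bits wide; larger disks saturate at 0xFFFF_FFFF.
    u32::try_from(lbas).unwrap_or(0xFFFF_FFFF)
}

fn main() {
    // A 64 MiB disk with 512-byte blocks has 131072 LBAs, 131071 excluding LBA 0.
    assert_eq!(pmbr_size_in_lbas(64 * 1024 * 1024, 512), 131_071);
    println!("ok");
}
```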

How to Add File System Information to Partition Tables

I'm using this crate to build a tool to edit a disk image in GPT format. But when I used it to create a partition and used parted to read it, I found that the file system information column was empty. I have tried many times to add file system information, but I failed, otherwise I would not be writing this issue. Could you please tell me how to add file system information to the partition tables?

Tilde dependency requirements cause issues with other crates

Due to this crate's usage of tilde requirements (such as bitflags = "~1.2") instead of the usual caret requirements (bitflags = "^1.2", or simply bitflags = "1.2"), it's easy to get into a situation where Cargo is unable to select a version to use for a dependency.

For example, a crate with the following in its Cargo.toml:

[dependencies]
gpt = "2" # depends on bitflags 1.2.x
uefi = "0.12" # depends on bitflags >=1.3.2

will fail to compile with:

error: failed to select a version for `bitflags`.
    ... required by package `gpt v2.0.0`
    ... which satisfies dependency `gpt = "^2"` of package `example v0.1.0 (/tmp/example)`
versions that meet the requirements `~1.2` are: 1.2.1, 1.2.0

all possible versions conflict with previously selected packages.

  previously selected package `bitflags v1.3.2`
    ... which satisfies dependency `bitflags = "^1.3.2"` of package `uefi v0.12.0`
    ... which satisfies dependency `uefi = "^0.12"` of package `example v0.1.0 (/tmp/example)`

failed to select a version for `bitflags` which could resolve this conflict

Switching this crate to caret requirements makes it compile successfully. Additionally, this crate's tests pass successfully after the switch.

Would it be possible to change this crate's dependencies, or am I missing something?

Thanks in advance.

Allow creating table from scratch

Right now there's a way of creating an "uninitialized" disk, but there's no easy way to create the table from scratch. Even writing the primary header and partitions can't do this, as it will later fail on write.

Ideas

  • Add a configuration option to allow the primary or backup header to be invalid #89
  • Make GptDisk generic over its reader/writer
  • Better error handling #85
  • Maybe add a HeaderBuilder (to replace compute_new)
  • Add more tests, for example #73
  • Simplify GptDisk's functions (remove, *_safe, *_embedded); work with config flags (write_backup, allow_first_usable_last_usable, change)
  • Improve the MBR #84
  • Simplify naming: sector_size, lb, lba, etc.

Derive Clone for GptDisk & manual (fully customized) add_partition & public partitions field

There are two situations in my code:

(1)

I am going to modify multiple disks serially, so if any error occurs, I must recover everything.
I plan to do this by storing the unchanged disk instances in a list; if any error occurs, I just write all of them back.

Problem:

I can't push the disk instance into the list, nor the partitions field.

// part table backup, store orig part table in RAM
let mut tables_backup = HashMap::new();

// move partition to target location
for (part_name, raw_part) in &target_slot.dyn_partition_set {
    // store orig part table in RAM
    let mut sector = LogicalBlockSize::Lb512;
    let sector_bytes = get_disk_sector_size(&raw_part.driver);
    if sector_bytes == 4096 {
        sector = LogicalBlockSize::Lb4096;
    } else if sector_bytes != 512 {
        panic!("Error: unsupported sector size!");
    }
    let gptcfg = GptConfig::new().writable(true).logical_block_size(sector.clone());
    let mut disk = gptcfg.open(&raw_part.driver).expect("Error: open disk failed");
    //tables_backup.insert(raw_part.driver.clone(), disk.clone());
    // delete part by name
}

(2) add_partition

I want to add a partition at a custom location; I will manually compute everything.
So I want a new add_partition function like gptfdisk's:

uint32_t CreatePartition(uint32_t partNum, uint64_t startSector, uint64_t endSector);

But your function is also nice; just keep part_type and flags. The signature I want is:

pub fn add_partition(&mut self, name: &str, id: Option<u32>, size: Option<u64>, part_type: partition_types::Type, flags: u64, part_alignment: Option<u64>, first_lba: Option<u64>, last_lba: Option<u64>) -> Result<u32, GptError>

Partition::size is off by one

The size function just returns <last lba> - <first lba> as the size; however, the last LBA is inclusive, so the real size would be last - first + 1. Interestingly, the bytes_len function actually gets this right.

Filing an issue rather than a patch directly, as users may depend on this behaviour by now; I'm not sure if this is something to fix (rather than document) in a non-major update.
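For clarity, the inclusive-bound arithmetic the issue describes (a free-standing sketch, not the crate's method):

```rust
// With an inclusive last_lba, the sector count is last - first + 1.
// E.g. a partition occupying only LBA 34 has first_lba == last_lba == 34
// and a size of one sector, not zero.
fn size_in_sectors(first_lba: u64, last_lba: u64) -> u64 {
    last_lba - first_lba + 1
}

fn main() {
    assert_eq!(size_in_sectors(34, 34), 1); // single-sector partition
    assert_eq!(size_in_sectors(34, 64), 31);
    println!("ok");
}
```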

partition-types: document whether/how codegen works

I'd like to add support for CoreOS partition types but I'm not exactly sure what is the procedure, and if there is some code-generation involved.

I see there is a huge JSON array in this repo, and then a lazy-static hashmap in the source code. Is there a scripted process to re-generate the latter from the former? Was the JSON manually assembled or is it coming from some external source?

Create standard-sized GPT linux disk image

The current disk image included in the test fixtures has a size (in blocks) of 29. This was due to an error where the block bounds of the partition were not treated as inclusive. With this error in mind, the image was created with a first LBA of 34 and a last LBA of 64 (inclusive), for a total length of 29.

This should likely be updated to have a first LBA of 34 and a final LBA of 29, or, for even more rigorous adherence to best practices, a first LBA of 2048 for 1 MiB alignment (at the very least, it should be 4K aligned).

Off-by-one bug in `Partition::bytes_len`?

The function currently calculates the byte length as (last_lba - first_lba) * lb_size:

gpt/src/partition.rs

Lines 168 to 177 in 894899b

/// Return the length (in bytes) of this partition.
pub fn bytes_len(&self, lb_size: disk::LogicalBlockSize) -> Result<u64> {
    let len = self
        .last_lba
        .checked_sub(self.first_lba)
        .ok_or_else(|| Error::new(ErrorKind::Other, "partition length underflow - sectors"))?
        .checked_mul(lb_size.into())
        .ok_or_else(|| Error::new(ErrorKind::Other, "partition length overflow - bytes"))?;
    Ok(len)
}

However, isn't last_lba an inclusive bound? From the rest of the code it seems like it is, and Wikipedia seems to agree.

If it is an inclusive bound, the byte length calculation should be (last_lba - first_lba + 1) * lb_size, shouldn't it?
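A corrected version of the calculation, keeping the checked-arithmetic style of the original (a sketch with lb_size as a plain u64, not a patch against the crate):

```rust
// bytes = (last_lba - first_lba + 1) * lb_size, with the same underflow
// and overflow checks as the original code.
fn bytes_len(first_lba: u64, last_lba: u64, lb_size: u64) -> Option<u64> {
    last_lba
        .checked_sub(first_lba)? // underflow if last < first
        .checked_add(1)?         // inclusive upper bound
        .checked_mul(lb_size)    // overflow check on the byte count
}

fn main() {
    // One 512-byte sector: first == last.
    assert_eq!(bytes_len(34, 34, 512), Some(512));
    assert_eq!(bytes_len(34, 64, 512), Some(31 * 512));
    assert_eq!(bytes_len(64, 34, 512), None); // last < first is an error
    println!("ok");
}
```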

Allow opening disk from existing `File`

Right now, via GptConfig, the only way to open a disk is to pass in a path, which opens a file. It would be nice to be able to pass in an already-opened File object instead.

As it stands, most of the functions in this crate explicitly take paths, whereas they could be updated to take files instead.

Feedback and problem with backup header

I tried to use the library on a disk today and I couldn't get it working because of the backup header.

Apparently my disk doesn't have one. I had to debug a little bit to find out what's going on because the error message for the primary header and for the backup header is the same.

In the code I saw you have a todo:

TODO(lucab): complete support for skipping backup
header, etc, then expose all config knobs here.

So I suppose you are already aware of this...

I'm interested in the library so if you can explain to me what needs to be done I might be able to do a PR for this.

Best

Please allow not to detect the backup GPT table by default

My program runs on an embedded device. When I execute the following statement, I get the error "invalid GPT signature":

let disk = gpt::GptConfig::new().writable(false).open(path)?;

Through tracing, I found that the execution flow is as follows:

open->open_from_device->header::read_backup_header->file_read_header:
    if sigstr != "EFI PART" {
        return Err(Error::new(ErrorKind::Other, "invalid GPT signature"));
    };

Obviously, the embedded device only has the primary partition table, not a backup table. However, an error is reported because the backup table is checked as well.

Testing for GPT devices

Hello, thanks for this project! I'm not sure if there's a better way, but I was hoping to scan /dev/block and test each one to see if it's a GPT drive or not, like

for dev in /dev/block/by-path if not ends with "-part" {
    match read_gpt(dev) {
        Ok(Some(info)) => ...
        Ok(None) => continue
        Err(e) => panic(e)
    };
}

Right now the library indicates this in an error, but it's part of a human readable string that I wouldn't expect to be part of the specification (could change in a minor version update) so I'm not super comfortable matching on it.
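In the meantime, a spec-based probe avoids matching on error strings: a GPT header at LBA 1 starts with the 8-byte signature "EFI PART". A minimal sketch using only std (the 512-byte block size is an assumption; a 4096-byte device would need the same check at offset 4096):

```rust
use std::io::{Read, Seek, SeekFrom};

// Probe for the "EFI PART" signature at LBA 1 of any readable, seekable
// device. Returns Ok(false) for non-GPT devices instead of an error.
fn looks_like_gpt<D: Read + Seek>(dev: &mut D) -> std::io::Result<bool> {
    let mut sig = [0u8; 8];
    dev.seek(SeekFrom::Start(512))?;
    dev.read_exact(&mut sig)?;
    Ok(&sig == b"EFI PART")
}

fn main() -> std::io::Result<()> {
    // Simulate a device with a valid GPT signature at LBA 1.
    let mut disk = vec![0u8; 1024];
    disk[512..520].copy_from_slice(b"EFI PART");
    let mut cursor = std::io::Cursor::new(disk);
    assert!(looks_like_gpt(&mut cursor)?);

    // A blank device is simply "not GPT", not a panic.
    let mut blank = std::io::Cursor::new(vec![0u8; 1024]);
    assert!(!looks_like_gpt(&mut blank)?);
    println!("ok");
    Ok(())
}
```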

How do I create new partition table on actual device for example /dev/loop0?

Hi,
I read through the documentation and was trying hard to create a new partition table on /dev/loop0, to which I have previously attached a file. This seems impossible. The documentation only explains how to do it on a vector cursor, which is not very useful in real life. Can you provide an example of how to create a new GUID partition table given a path, for example "/dev/loop0"?

I tried this and it didn't work; no partition table is visible in an external program (KDE Partition Manager):

let mut gdisk = gpt::GptConfig::default()
    .initialized(false)
    .writable(true)
    .logical_block_size(gpt::disk::LogicalBlockSize::Lb512)
    .open(diskpath)
    .unwrap();

gdisk.update_partitions(
    std::collections::BTreeMap::<u32, gpt::partition::Partition>::new()
).expect("failed to initialize blank partition table");

gdisk.add_partition("test1", 1024 * 12, gpt::partition_types::BASIC, 0, None)
    .expect("failed to add test1 partition");

let mut mem_device = gdisk.write().expect("failed to write partition table");

I tried this, but I get the error "memory map must have a non-zero length":

let mut file = fs::File::open(diskpath).unwrap();
let mut mmap_options = MmapOptions::new();
let mut mmap = unsafe { mmap_options.map(&file) };

let mut cursor = std::io::Cursor::new(mmap.unwrap());
let gpt = gptman::GPT::new_from(&mut cursor, SECTOR_SIZE as u64, [0xff; 16]).expect("could not make a partition table");
println!("gpt: {:?}", gpt);

The description on crates.io needs to be updated

Hello,

I just found your lib by looking on crates.io and I think the description is misleading. It says it's a tool for reading, but according to the docs on docs.rs you can now write as well. It might be a good idea to update the description on crates.io.

Best

Maintenance

Hey @Quyzi,
I feel this crate has lacked some love for a while. If you'd like some help maintaining it, I'd be willing to help.
