Search Results

Search found 5849 results on 234 pages for 'partition scheme'.

Page 3/234 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Checking partition alignment with PowerCLI

    - by Julian
    I'm trying to verify that the file system partitions within each of the servers I'm working on are aligned correctly. I've got the following script that, when I've tried running it, will claim either that all virtual servers are aligned or that none are, based on which if statement I use (one is commented out):

        $myArr = @()
        $vms = get-vm | where {$_.PowerState -eq "PoweredOn" -and $_.Guest.OSFullName -match "Microsoft Windows*" } | sort name
        foreach ($vm in $vms) {
            $wmi = get-wmiobject -class "win32_DiskPartition" -namespace "root\CIMV2" -ComputerName $vm
            foreach ($partition in $wmi) {
                $Details = "" | Select-Object VMName, Partition, Status
                #if (($partition.startingoffset % 65536) -isnot [decimal]) {
                if ($partition.startingoffSet -eq "65536") {
                    $Details.VMName = $partition.SystemName
                    $Details.Partition = $partition.Name
                    $Details.Status = "Partition aligned"
                }
                else {
                    $Details.VMName = $partition.SystemName
                    $Details.Partition = $partition.Name
                    $Details.Status = "Partition not aligned"
                }
                $myArr += $Details
            }
        }
        $myArr | Export-CSV -NoTypeInformation "C:\users\myself\Documents\Scripts\PartitionAlignment.csv"

    Would anyone know what is wrong with my code? I'm still learning about partitions, so I'm not sure how I need to check the starting offset number to verify alignment. EDIT:

        $myArr = @()
        $vms = get-vm | where {$_.PowerState -eq "PoweredOn" -and $_.Guest.OSFullName -match "Microsoft Windows*" } | sort name
        $wmi = get-wmiobject -class "win32_DiskPartition" -namespace "root\CIMV2" -ComputerName $vm
        #foreach ($_ In Get-WMIObject Win32_DiskPartition | Select Name, BlockSize, NumberOfBlocks, StartingOffSet, @{n='Alignment'; e={$_.StartingOffSet/$_.BlockSize}}) {$_}
        foreach ($wmi | Select Name, BlockSize, NumberOfBlocks, StartingOffSet, @{n='Alignment'; e={$_.StartingOffSet/$_.BlockSize}}) {$_}
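
    For reference, the usual test is not whether StartingOffset equals 65536, but whether it divides evenly by the alignment boundary. A minimal sketch of that check, assuming the same WMI query and a 64 KB boundary (the boundary value is an assumption; adjust to your storage):

        $boundary = 65536  # assumed 64 KB alignment boundary
        foreach ($partition in $wmi) {
            # A partition is aligned when its offset is an exact multiple of the boundary
            if (($partition.StartingOffset % $boundary) -eq 0) {
                $status = "Partition aligned"
            }
            else {
                $status = "Partition not aligned"
            }
            Write-Output ("{0} {1}: {2}" -f $partition.SystemName, $partition.Name, $status)
        }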

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing.

    Test Data

    The two tables in this example use a common partition scheme. The partition function uses 41 equal-size partitions:

        CREATE PARTITION FUNCTION PFT (integer)
        AS RANGE RIGHT
        FOR VALUES
        (
            125000, 250000, 375000, 500000,
            625000, 750000, 875000, 1000000,
            1125000, 1250000, 1375000, 1500000,
            1625000, 1750000, 1875000, 2000000,
            2125000, 2250000, 2375000, 2500000,
            2625000, 2750000, 2875000, 3000000,
            3125000, 3250000, 3375000, 3500000,
            3625000, 3750000, 3875000, 4000000,
            4125000, 4250000, 4375000, 4500000,
            4625000, 4750000, 4875000, 5000000
        );
        GO
        CREATE PARTITION SCHEME PST
        AS PARTITION PFT
        ALL TO ([PRIMARY]);

    The two tables are:

        CREATE TABLE dbo.T1
        (
            TID integer NOT NULL IDENTITY(0,1),
            Column1 integer NOT NULL,
            Padding binary(100) NOT NULL DEFAULT 0x,
            CONSTRAINT PK_T1
                PRIMARY KEY CLUSTERED (TID)
                ON PST (TID)
        );

        CREATE TABLE dbo.T2
        (
            TID integer NOT NULL,
            Column1 integer NOT NULL,
            Padding binary(100) NOT NULL DEFAULT 0x,
            CONSTRAINT PK_T2
                PRIMARY KEY CLUSTERED (TID, Column1)
                ON PST (TID)
        );

    The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID:

        INSERT dbo.T1 WITH (TABLOCKX)
            (Column1)
        SELECT
            (ABS(CHECKSUM(NEWID())) % 5) + 1
        FROM dbo.Numbers AS N
        WHERE n BETWEEN 1 AND 5000000;

    In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows:

        CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);

        WITH
            L0 AS (SELECT 1 AS c UNION ALL SELECT 1),
            L1 AS (SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B),
            L2 AS (SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B),
            L3 AS (SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B),
            L4 AS (SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B),
            L5 AS (SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B),
            Nums AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5)
        INSERT dbo.Numbers WITH (TABLOCKX)
        SELECT TOP (10000000) n
        FROM Nums
        ORDER BY n
        OPTION (MAXDOP 1);

    Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way.

        INSERT dbo.T2 WITH (TABLOCKX)
            (TID, Column1)
        SELECT
            T.TID,
            N.n
        FROM dbo.T1 AS T
        JOIN dbo.Numbers AS N
            ON N.n >= 1
            AND N.n <= T.Column1;

    Table T2 ends up containing about 15 million rows. The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone.

    Partition Distribution

    The following query shows the number of rows in each partition of table T1:

        SELECT
            PartitionID = CA1.P,
            NumRows = COUNT_BIG(*)
        FROM dbo.T1 AS T
        CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
        GROUP BY CA1.P
        ORDER BY CA1.P;

    There are 40 partitions containing 125,000 rows each (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table T2:

        SELECT
            PartitionID = CA1.P,
            NumRows = COUNT_BIG(*)
        FROM dbo.T2 AS T
        CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
        GROUP BY CA1.P
        ORDER BY CA1.P;

    There are roughly 375,000 rows in each partition (the rightmost partition is also empty). Ok, that’s the test data done.
    Test Query and Execution Plan

    The task is to count the rows resulting from joining tables 1 and 2 on the TID column:

        SET STATISTICS IO ON;
        DECLARE @s datetime2 = SYSUTCDATETIME();

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID;

        SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
        SET STATISTICS IO OFF;

    The optimizer chooses a plan using parallel hash join and partial aggregation. The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads. With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched. Running the query without actual execution plan or STATISTICS IO collection for maximum performance, the query returns in around 2600ms.

    Execution Plan Analysis

    The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads.

    For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. Because the two repartitioning exchanges use the same hash function, rows from T1 and T2 with the same join key must end up on the same hash join thread.

    Expensive Exchanges

    This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million through the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        WHERE
            $PARTITION.PFT(T1.TID) = 1
            AND $PARTITION.PFT(T2.TID) = 1
        OPTION (MAXDOP 1);

    The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question: if the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query?
    Forcing a Merge Join

    Let’s force the optimizer to use a merge join on the test query using a hint:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (MERGE JOIN);

    The plan selected by the optimizer results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads.

    Parallel Merge Join

    We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (MERGE JOIN, QUERYTRACEON 8649);

    This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice, though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good.

    A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro). CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs). Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail.

    The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning.

    Collocated Joins

    In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next.

    Costing and Plan Selection

    The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query.
    Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use a collocated join with our test query:

        -- Pretend IOs are 50x cost temporarily
        DBCC SETIOWEIGHT(50);

        -- Co-located hash join
        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (RECOMPILE);

        -- Reset IO costing
        DBCC SETIOWEIGHT(1);

    Collocated Join Plan

    In the estimated execution plan for the collocated join, the Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed.

    It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition. The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges.

    CPU and Memory Efficiency Improvements

    The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time.

    Collocated Hash Join Performance

    The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 show the estimation was for 121,951 rows.
    This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb. A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions).

    Collocated Merge Join

    We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb), but we do know two things: merge join does not require a memory grant, and merge join was the optimizer’s preferred join option for a single-partition join. Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us?

    CROSS APPLY sys.partitions

    We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea:

        SELECT row_count = SUM(Subtotals.cnt)
        FROM
        (
            -- Partition numbers
            SELECT p.partition_number
            FROM sys.partitions AS p
            WHERE
                p.[object_id] = OBJECT_ID(N'T1', N'U')
                AND p.index_id = 1
        ) AS P
        CROSS APPLY
        (
            -- Count per collocated join
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE
                $PARTITION.PFT(T1.TID) = p.partition_number
                AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals;

    In the estimated plan, the cardinality estimates aren’t all that good, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts.

    Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory.

    Using a Temporary Table

    Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table.
    We can work around that by writing the partition numbers to a temporary table (or table variable):

        SET STATISTICS IO ON;
        DECLARE @s datetime2 = SYSUTCDATETIME();

        CREATE TABLE #P
        (
            partition_number integer PRIMARY KEY
        );

        INSERT #P (partition_number)
        SELECT p.partition_number
        FROM sys.partitions AS p
        WHERE
            p.[object_id] = OBJECT_ID(N'T1', N'U')
            AND p.index_id = 1;

        SELECT row_count = SUM(Subtotals.cnt)
        FROM #P AS p
        CROSS APPLY
        (
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE
                $PARTITION.PFT(T1.TID) = p.partition_number
                AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals;

        DROP TABLE #P;

        SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
        SET STATISTICS IO OFF;

    Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to. In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time), but the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan.

    Parallel Collocated Merge Join

    We can produce the desired parallel plan using trace flag 8649 again:

        SELECT row_count = SUM(Subtotals.cnt)
        FROM #P AS p
        CROSS APPLY
        (
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE
                $PARTITION.PFT(T1.TID) = p.partition_number
                AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals
        OPTION (QUERYTRACEON 8649);

    One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post.

    Performance

    The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest:

    Collocated parallel merge join: 1350ms
    Parallel hash join: 2600ms
    Collocated serial merge join: 3500ms
    Serial merge join: 5000ms
    Parallel merge join: 8400ms
    Collocated parallel hash join: 25,300ms (hash spill per partition)

    The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers).
    This plan uses 16 threads at DOP 8, but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you.

    Parallel Collocated Merge Join with Demand Partitioning

    This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions):

        SELECT row_count = SUM(Subtotals.cnt)
        FROM
        (
            VALUES
                (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),
                (11),(12),(13),(14),(15),(16),(17),(18),(19),(20),
                (21),(22),(23),(24),(25),(26),(27),(28),(29),(30),
                (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41)
        ) AS P (partition_number)
        CROSS APPLY
        (
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE
                $PARTITION.PFT(T1.TID) = p.partition_number
                AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals
        OPTION (QUERYTRACEON 8649);

    Compared with the parallel collocated hash join plan, the manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms.

    Final Words

    It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated – down from 569MB to 1.2MB.

    The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition.

    From a thread’s point of view…

    If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time).
    The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once.

    Related Reading

    Understanding and Using Parallelism in SQL Server
    Parallel Execution Plans Suck

    © 2013 Paul White – All Rights Reserved
    Twitter: @SQL_Kiwi

    Read the article

  • Types in Lisp and Scheme

    - by user2054900
    I see now that Racket has types. At first glance it seems to be almost identical to Haskell typing. But does Lisp's CLOS cover some of the space Haskell types cover? Creating a very strict Haskell type and an object in any OO language seem vaguely similar. It's just that I've drunk some of the Haskell Kool-Aid and I'm totally paranoid that if I go down the Lisp road, I'll be screwed due to dynamic typing.

    Read the article

  • What is the meaning of # in R5RS Scheme number literals

    - by ikmac
    There is a partial answer on Stack Overflow, but I'm asking something a teeny bit more specific than the answers there. So... Does the formal semantics (Section 7.2) specify the meaning of such a numeric literal? Does it specify the meaning of numeric operations on the value resulting from interpreting the literal? If yes, what are the meanings (in English -- denotational semantics is all Greek characters to me :))?

    Read the article

  • How do I make a module in PLT Scheme?

    - by kunjaan
    I tried doing this:

        #lang scheme
        (module duck scheme/base
          (provide num-eggs quack)
          (define num-eggs 2)
          (define (quack n)
            (unless (zero? n)
              (printf "quack\n")
              (quack (sub1 n)))))

    But I get this error:

        module: illegal use (not at top-level) in: (module duck scheme/base (provide num-eggs quack) (define num-eggs 2) (define (quack n) (unless (zero? n) (printf "quack\n") (quack (sub1 n)))))

    What is the correct way?
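
    For context, a sketch of one conventional fix (the file name duck.ss is hypothetical): a #lang line already wraps the whole file in a module, so a module form written after it is no longer at the top level. One way around this is to make the module form the entire content of its own file:

        ; duck.ss -- the module form is the whole file, with no #lang line above it
        (module duck scheme/base
          (provide num-eggs quack)
          (define num-eggs 2)
          (define (quack n)
            (unless (zero? n)
              (printf "quack\n")
              (quack (sub1 n)))))

    It could then be used from another file with (require "duck.ss") followed by (quack num-eggs).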

    Read the article

  • Linux DD command partition -to- partition

    - by Ben Jackson
    I just used the dd command to copy the contents of one partition over to another partition on another drive, like this:

        dd if=/dev/sda2 of=/dev/sdb2 bs=4096 conv=noerror

    The sda2 partition was 66GB and sdb2 was 250GB. I read that by doing this the extra space on the drive I am copying to will be wasted; is this true? I wasn't worried about losing the extra space for the time being. However, I just ran:

        sudo kill -USR1 (PID)

    to view the current status of dd, and it has written over 66GB of data. Will it continue to write data until it gets to 250GB? If so, is there a way to stop the process without corrupting it, as waiting for it to write blank space seems like a waste of time.

    Read the article

  • Switch from encrypted partition to unencrypted (Error: cryptsetup: evms_activate is not available)

    - by Chris Lercher
    I initially installed Ubuntu 11.04 with an encrypted file system (from the alternate install CD: Guided Partitioning, LVM encrypted). Now I wanted to change this setup to have my root file system on an unencrypted partition. I had the following setup before:

        /dev/mapper/my-root on / type ext4 (rw,noatime,errors=remount-ro,commit=0,commit=0)
        /dev/sda1 on /boot type ext2 (rw,noatime)

    I backed up /, reformatted /dev/sda5 (which had contained the encrypted LVM device) to an ext3 partition, and restored / to that partition. I edited /etc/fstab, removed the line /dev/mapper/my-root / ..., and added the line:

        /dev/sda5 / ext3 noatime,rw,errors=remount-ro,commit=0 0 1

    I edited /etc/crypttab and commented out the single entry. On reboot, I get the grub screen as usual, but then I get the message cryptsetup: evms_activate is not available, waiting for encrypted source device. I tried reinstalling Grub2 using a LiveCD with the chroot method, but that didn't make any difference. Why is Ubuntu still searching for an encrypted device?

    Read the article

  • Triple-Boot + 4 partition Limit

    - by dsimcha
    I just bought a new hard drive so that I could convert my XP-only machine into an XP-Ubuntu-Windows 7 triple-boot machine. Since the drive is absurdly huge (1 TB) I wouldn't mind throwing ReactOS into the mix, too. I just found out that master boot records are limited to 4 entries, meaning 4 primary partitions. I had Windows XP set up on my old drive as a boot partition, a program files partition and a media partition. Since I really didn't want to install XP from scratch, I cloned this setup on my new drive. This leaves me one MBR partition entry for installing Windows 7, Ubuntu and ReactOS. I'd like to avoid having to install XP from scratch like the plague, partly because it's supposed to be a safety net in case things go wrong with my other OS's and because I've invested a lot of time getting it set up exactly the way I like it. Here are the options I've considered and why I don't like them:

    Install Windows 7 on my media partition. This would work, but I prefer to keep my media partition completely separate from any OS, so that I can reformat an OS partition without affecting my media partition at all.

    Use wubi or something to install Ubuntu in the same partition as something else. Again, this is brittle.

    Move all my media to a logical drive on an extended partition. Create another logical drive on this extended partition for Ubuntu. The problem here is that extended partitions are rather brittle--if you nuke one, it renders the rest useless.

    Just put the old drive back in my computer and run XP off it. Use the new one for the other OS's. The problem here is that the old drive is slower and uses extra power, generates extra heat, etc.

    Can anyone suggest any other possibilities that I may have overlooked?

    Read the article

  • Can I install Natty alongside Maverick and retain my encrypted /home partition?

    - by Jon
    This is my partitioning scheme:

    10GB partition, empty -- will be installing Natty here
    10GB partition containing Maverick
    2GB swap partition
    300GB encrypted /home partition

    I've had few problems in the past with having two Ubuntu installs on two separate partitions, giving /home its own partition, but I'm a little concerned since I'm now using an encrypted /home partition. Install won't try to wipe my /home if I click "encrypt home directory," will it?

    Read the article

  • Partition table corrupted (USB flash drive)

    - by 13ren
    It's an 8 GB Patriot thumb drive, which I've used extensively with lots of data. Today, it is detected, but all data is gone. (EDIT: at least some data is still there, but the partition table is gone.)

    EDIT @Sathya (thanks) here's the relevant output from sudo fdisk -l:

        Disk /dev/sdc: 8019 MB, 8019509248 bytes
        247 heads, 62 sectors/track, 1022 cylinders
        Units = cylinders of 15314 * 512 = 7840768 bytes
        Disk /dev/sdc doesn't contain a valid partition table

    It looks like it is /dev/sdc, with that 8 GB... and no partition table. I tried to mount /dev/sdc (and then dmesg | tail):

        /media> sudo mount /dev/sdc mytmp
        mount: wrong fs type, bad option, bad superblock on /dev/sdc,
        missing codepage or other error
        In some cases useful info is found in syslog - try dmesg | tail or so

        /media> dmesg | tail
        [   24.300000] sdc: unknown partition table
        [   24.320000] sd 2:0:0:0: Attached scsi removable disk sdc
        [   24.370000] usb-storage: device scan complete
        [   26.870000] EXT2-fs error (device sdc): ext2_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [   26.870000] EXT2-fs: group descriptors corrupted!
        [   50.420000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [   50.430000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [   50.430000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [ 5565.470000] EXT2-fs error (device sdc): ext2_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [ 5565.470000] EXT2-fs: group descriptors corrupted!

    EDIT @Col: results from testdisk:

        Disk /dev/sdc - 8013 MB / 7642 MiB - CHS 1022 247 62
        Current partition structure:
        Partition          Start        End        Size in sectors
        Partition sector doesn't have the endmark 0xAA55

    After I hit [proceed], it says:

        Structure: Ok.    Keys A: add partition, L: load backup, Enter: to continue

    The "Structure: Ok." seems reassuring... will "A: add partition" make my old data accessible (if it's still there), or will it make a new, fresh partition? Another option is "[ MBR Code ] Write TestDisk MBR code to first sector" - would it be better to do this?

    EDIT I found that at least some of my data is still on the flash drive, by using the below, and searching for English text in less (like " the "):

        cat /dev/sde | tr -cd '\11\12\40\1540-\176' | less

    (The drive changed from "/dev/sdb" to "/dev/sde" because I connected some extra drives today.) I've learnt that "/dev/sde1" would be the first partition, and "/dev/sde" is the whole drive. Because unix treats these devices just like files, you can use all the ordinary unix file commands on them, like cat, and then process them like any other stream of data. The tr above removes non-printable characters ("\40" is space, which I wanted to preserve). In less, you can use "/" to search, similar to Vim.

    How can I get my data back (assuming it's still there)? If only the partition table is corrupted, is there a standard "partition recovery tool"? Is there a way to "repartition" without deleting everything?

    Read the article

  • Strange error when Bootcamp attempts to create partition for Windows

    - by mozillalives
    I get a strange error when I tell Boot Camp to create a partition for Windows. I get to the Create a Partition stage. I select 20GB for Windows, leaving 91GB (39GB free) for OS X. I then click Partition and it gives me the following error:

        The disk cannot be partitioned because some files cannot be moved. Back up the disk and use Disk Utility to format it as a single Mac OS Extended (Journaled) volume. Restore your information to the disk and try using Boot Camp Assistant again.

    My disk is formatted in Mac OS Extended (Journaled), I have closed all applications (besides Boot Camp Assistant) and I have even restarted and tried again to see if that might help. Nothing. I can't get it to partition. I also tried to create the partition using Disk Utility and I got the following error:

        Partition failed
        Partition failed with the error:
        Could not modify partition map because filesystem verification failed

    Any ideas? BTW - I am running OS X 10.6.2

    Read the article

  • what's the difference between a Volume and a Partition in Windows 7 diskpart

    - by user170232
    I was trying to follow the Intel guide for setting up iRST (Intel Rapid Start Technology) on my new laptop. The Intel manual says you need to create a *Volume that is as big or bigger than your available memory, set it to a specific id (id=84), then go into the iRST tool and adjust some settings. Looking at the disk manager on the laptop, I see there is already a Partition labeled as "Hibernation Partition" which is a little bigger than the memory in my system. So it looks like iRST was already set up... BUT, it's a Partition, not a Volume. Here's what the manual says to do (from: http://download.intel.com/support/motherboards/desktop/sb/rapid_start_technology_user_guide.pdf):

        diskpart
        list disk
        select disk x (where x is the disk to use; there's only one disk in this laptop)
        create partition primary size=X000 (where X000 is the size to create)
        detail disk (which lists details for the disk. This is where I get hung up)
        select volume Z (where Z is the *partition you created previously)

    ** it says the 'detail disk' command will list the volume #, but it doesn't.
    ** 'detail disk' only lists two "volumes", for Recovery and OS.
    ** if I do 'list partition', I see the 8 GB *partition labeled as "Hibernation Partition"

    so I can't continue with the following steps:

        set id=84
        override
        exit

    The reason I went looking for the manual is because when iRST is enabled in the BIOS, the system won't resume from sleep. When it's disabled, it works fine, but the system goes into (legacy?) Hibernation mode and takes a while to come out of Hibernation. iRST is supposed to resume from deep sleep very quickly. So, what's the difference between a Volume and a Partition? Should I delete the Hibernation Partition and create a Hibernation Volume? Anyone have any ideas? (if it matters, this is on a Dell XPS 13 with BIOS A08) Thanks! J

    Read the article

  • Active Directory Partition Error

    - by BLAKE
    Right now my Active Directory is failing a dcdiag test. I can find no info online about this error. When I run dcdiag /test:crossrefvalidation, I get the output:

        ....
        Doing primary tests
        Testing server: Default-First-Site-Name\ad01
        Running partition tests on : ForestDnsZones
        Starting test: CrossRefValidation
        ......................... ForestDnsZones passed test CrossRefValidation
        Running partition tests on : DomainDnsZones
        Starting test: CrossRefValidation
        ......................... DomainDnsZones passed test CrossRefValidation
        Running partition tests on : Schema
        Starting test: CrossRefValidation
        ......................... Schema passed test CrossRefValidation
        Running partition tests on : Configuration
        Starting test: CrossRefValidation
        ......................... Configuration passed test CrossRefValidation
        Running partition tests on : mydomain
        Starting test: CrossRefValidation
        ......................... mydomain passed test CrossRefValidation
        Running partition tests on : t
        Starting test: CrossRefValidation
        This cross-ref has a non-standard dNSRoot attribute.
        Cross-ref DN: CN=a3a24d3a-4782-460b-9148-86ac2d86b9ae,CN=Partitions,CN=Configuration,DC=mydomain,DC=com
        nCName attribute (Partition name): DC=t
        Bad dNSRoot attribute: dc01.mydomain.com
        Check with your network administrator to make sure this dNSRoot attribute is correct, and if not please change the attribute to the value below.
        dNSRoot should be: t
        It appears this partition (DC=t) failed to get completely created.
        This cross-ref (CN=a3a24d3a-4782-460b-9148-86ac2d86b9ae,CN=Partitions,CN=Configuration,DC=mydomain,DC=com) is dead and should be removed from the Active Directory.
        ......................... t failed test CrossRefValidation
        ....

    I used LDP from the Windows support tools. I searched for the dnsRoot attribute in "cn=partitions,cn=configuration,dc=mydomain,dc=com", with the filter "(&(objectcategory=crossref)(systemFlags:1.2.840.113556.1.4.803:=5))". I got the result:

        ***Searching...
        ldap_search_s(ld, "cn=partitions,CN=Configuration,DC=mydomain,DC=com", 1, "(&(objectcategory=crossref)(systemFlags:1.2.840.113556.1.4.803:=5))", attrList, 0, &msg)
        Result <0>: (null)
        Matched DNs:
        Getting 3 entries:
        >> Dn: CN=65502be3-fc90-442a-83d8-4b3b91e82439,CN=Partitions,CN=Configuration,DC=mydomain,DC=com
        1> dnsRoot: ForestDnsZones.mydomain.com;
        >> Dn: CN=a3a24d3a-4782-460b-9148-86ac2d86b9ae,CN=Partitions,CN=Configuration,DC=mydomain,DC=com
        1> dnsRoot: ad01.mydomain.com;
        >> Dn: CN=f0ef5771-6225-4984-acd9-c08f582eb4e2,CN=Partitions,CN=Configuration,DC=mydomain,DC=com
        1> dnsRoot: DomainDnsZones.mydomain.com;

    It looks like the bad partition has the name of my first domain controller, 'ad01.mydomain.com'. I have googled for a while and have not been able to find any help or documentation about application partitions in Active Directory. Does anyone have any advice on how to clean up this partition (or what the partition is for)? Does anyone know the repercussions of deleting this partition?

    Read the article

  • Setup was unable to create a new system partition or locate an existing system partition

    - by PearlFactory
    Have got a new kickin' server as a new DEV machine. It has two 3ware 9650 cached controllers with 8 x 300 gig VelociRaptor drives. The first problem was the 9.5.1.1 drivers: I had to press F8 as soon as the Win 2008 R2 server CD started to load, and once in the Advanced Startup options, disable the driver signing options. The next issue was that I got everything running and accidentally selected the wrong RAID part to do the install on. Once I restarted, all I would get after waiting the 10 mins for the reboot to start and loading the driver was "setup was unable to create a new system partition or locate an existing system partition". Finally, after about 1 hour, I removed all drives apart from the 2 needed for the system part on controller 0, deleted the system part, and recreated the RAID1 mirror. (Also make sure all USB drives are out on boot... only add them when browsing for the driver to be added.) Restarted, loaded the driver, selected install, and once the system is up I will go back and add drives and new parts on both controllers. At least I did not get stuck for a day as is the norm.. lol

    Read the article

  • Is there an equivalent to Lisp's "runtime" primitive in Scheme?

    - by Bill the Lizard
    According to SICP section 1.2.6, exercise 1.22: Most Lisp implementations include a primitive called runtime that returns an integer that specifies the amount of time the system has been running (measured, for example, in microseconds). I'm using DrScheme, where runtime doesn't seem to be available, so I'm looking for a good substitute. I found in the PLT-Scheme Reference that there is a current-milliseconds primitive. Does anyone know if there's a timer in Scheme with better resolution?
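
    For what it's worth, a rough timing sketch using current-inexact-milliseconds (assumed available in DrScheme alongside current-milliseconds; the helper name is made up):

        ; Hypothetical helper: report how long evaluating a thunk takes, in milliseconds
        (define (time-thunk thunk)
          (let* ([start (current-inexact-milliseconds)]
                 [result (thunk)]
                 [stop (current-inexact-milliseconds)])
            (printf "elapsed: ~a ms\n" (- stop start))
            result))

        ; usage: (time-thunk (lambda () (expt 2 10000)))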

    Read the article

  • repair partition table

    - by m.sr
    Hello. I've just overwritten the partition table of my system's hard disk. I ran cfdisk on the wrong device (/dev/sda instead of /dev/sdd), deleted all partitions, made one new primary partition spanning the whole device, set its type to 07 (NTFS) and hit write. So here I am with my system running. Until I reboot, I hope/guess nothing will change - meaning: all my data is accessible (I'm currently making a dd backup of the whole device and plan to make a .tar.gz backup of the most important data later). I also backed up /proc/partitions, /proc/diskstats (even though I guess this is more about throughput and stuff like this ...) and /sys/block/sda/sda?/{start,size}. Some further things I know:

    4 primary partitions
    1st partition: ~100Mb, ext3, /boot
    2nd partition: ~100Mb, "Win7 Boot Partition", ntfs(?)
    3rd partition: ~20...30GB, Win7, ntfs
    4th partition: ~20...30GB, luks-encrypted device

    The luks-decrypted device is an LVM PV. The /, /home & swap partitions are all LVs on the (VG on the) above noted PV. So my questions:

    What is the simplest way to just write the kernel's partition table to the disk?
    What is the simplest way to take the above mentioned data (and perhaps other data I don't know of ...) and regenerate the partition table?
    Are there any problems to take care of regarding luks and/or lvm?
    Is there any data I should back up before rebooting (meaning stuff from the kernel [/sys/..., /proc/...] and so on, which could help me regenerate the partition table)?

    Thanks a lot!
    P.S.: debian sid, kernel 2.6.34-1-amd64 from debian-experimental, 80GB Intel SSD
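
    As an aside, start/size values saved from /sys/block can in principle be replayed through sfdisk. A sketch only, with placeholder sector numbers (substitute the values saved from /sys/block/sda/sda?/{start,size}, which are in 512-byte sectors):

        # Placeholder geometry -- one "start,size,id,bootable" line per primary partition
        sfdisk -uS /dev/sda <<EOF
        2048,204800,83,*
        206848,204800,7
        411648,62914560,7
        63326208,62914560,83
        EOF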

    Read the article

  • linux hardware raid 10 / lvm / virtual machine partition alignment and filesystem optimization

    - by Jason Ward
    I've been reading everything I can find about partition alignment and filesystem optimization (ext4 and xfs) but still don't know enough to be confident in setting up my current configuration. My remaining confusion comes from the LVM layer and whether I should use RAID parameters on the filesystems in guest OSes. My main questions are:

    When I use 'pvcreate --dataalignment', do I use the stripe-width as calculated for a filesystem on RAID (128kB for ext4 in my situation), the stripe size of the RAID set (256kB), something else altogether, or do I not need this at all?

    When I create ext2/3/4 or xfs filesystems in guests on the Logical Volumes, should I add the settings for the underlying RAID (e.g. mkfs.ext4 -b 4096 -E stride=64,stripe-width=128)?

    Does anyone see any glaring errors in my setup below? I'm running some benchmarks now but haven't done enough to start comparing results.

    I have four drives in RAID 10 on a 3ware 9750-4i controller (more details on the settings below) giving me a 6.0TB device at /dev/sda. Here is my partition table:

        Model: LSI 9750-4i DISK (scsi)
        Disk /dev/sda: 5722024MiB
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt

        Number  Start      End         Size        File system     Name      Flags
         1      1.00MiB    257MiB      256MiB      ext4            BOOTPART  boot
         2      257MiB     4353MiB     4096MiB     linux-swap(v1)
         3      4353MiB    266497MiB   262144MiB   ext4
         4      266497MiB  4460801MiB  4194304MiB

    Partition 1 is to be the /boot partition for my Xen host. Partition 2 is swap. Partition 3 is to be the root (/) for my Xen host. Partition 4 is to be the (only) physical volume to be used by LVM (for those who are counting, I left about 1.2TB unallocated for now). For my Xen guests, I usually create a Logical Volume of the needed size and present it to the guests for them to partition as needed. I know there are other ways of handling that, but this method works best for my situation.

    Here's the hardware of interest on my CentOS 6.3 Xen host:

    4x Seagate Barracuda 3TB ST3000DM001 drives (sector size: 512 logical/4096 physical)
    3ware 9750-4i w/BBU (sector size reported: 512 logical/512 physical)
    All four drives make up a RAID 10 array. Stripe: 256kB. Write cache enabled. Read cache: intelligent. StoreSave: Balance.

    Thanks!
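
    One commonly suggested way to line the layers up, sketched under the geometry described above (256kB chunk, two data spindles in a 4-drive RAID 10, so a 512kB full stripe; the LV path is hypothetical), not a definitive recommendation:

        # Align the PV data area to the full stripe: 256kB chunk x 2 data disks = 512kB (an assumption)
        pvcreate --dataalignment 512k /dev/sda4

        # ext4 on a guest LV, passing the underlying geometry through:
        # stride = chunk / block = 256kB / 4kB = 64; stripe-width = stride x data disks = 128
        mkfs.ext4 -b 4096 -E stride=64,stripe-width=128 /dev/vg0/guest_root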

    Read the article

  • Testdisk won’t list files for an ext4 partition inside a LVM inside a LUKS partition

    - by user1598585
    I have accidentally deleted a file that I want to recover. The partition is an ext4 partition inside an LVM partition that is encrypted with dm-crypt/LUKS. The encrypted LUKS partition is /dev/sda2, which contains a physical volume, with a single volume group, mapped to /dev/mapper/system. The logical volume, the ext4 partition, is mapped to /dev/mapper/system-home. Running # testdisk /dev/mapper/system-home will notice it as an ext4 partition, but tells me that the partition seems damaged when I try to list the files. If I run # testdisk /dev/mapper/system, it will detect all the partitions, but the same happens if I try to list their files. Am I doing something wrong, or is this a known bug? I have searched but haven't found any clue.

    Read the article

  • Combine OS partion with data partition on NAS4Free/FreeNAS

    - by Pak
    I recently built a NAS4Free (formerly FreeNAS) machine using a 256MB (yes, MB) USB drive for the OS. When I did the original install, I had the bright idea of making the OS partition just big enough for the OS and then creating a second partition using the remainder of the drive to store stuff pertaining to the OS. I never really found a use for the data partition, and I ended up running out of space on the OS partition, so now I'd like to combine the partitions into a single partition. Is this something that is possible to do while everything is up and running? If it comes down to it, I can take down the machine and do a fresh install of the OS using the entire space of the USB drive, but I'd like to use this as an opportunity to better familiarize myself with FreeBSD/UNIX-type systems. If this is possible, will it interfere with the NAS4Free things? The data partition shows up in the web interface under the disks section. If I end up manually changing the partitions, I'd be concerned with NAS4Free getting confused by the missing partition.

    Read the article

  • [MINI HOW-TO] Change the Default Color Scheme in Office 2010

    - by Mysticgeek
    As in Office 2007, the default color scheme in Office 2010 is blue. If you are not a fan of it, here we show you how to change it to silver or black. In this example we are using Microsoft Word, but it works the same way in Excel, Outlook, and PowerPoint as well. Once you change the color scheme in one Office application, it will change it for all of the other apps in the suite.

    Change Color Scheme

    To change the color scheme, click on the File tab to access Backstage View and click on Options. In Word Options the General section should open by default… use the dropdown menu next to Color Scheme to change it to Silver, Blue, or Black, then click OK. Here is what Black looks like… who knows why Microsoft decided to leave the blue around the edges. This is the default Blue color scheme… And finally we take a look at the Silver color scheme in Excel…

    That is all there is to it! It would be nice if they would incorporate other color schemes into Office 2010, as some of you may not be happy with only three choices. If you're using Office 2007, check out our article on how to change the color scheme in it. Also, The Geek has a cool article on how to set the color scheme of Office 2007 with a quick registry hack.
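
    If you prefer the registry route for Office 2010, the commonly reported equivalent is the Theme value under the 14.0 hive (1 = Blue, 2 = Silver, 3 = Black); treat the key as an assumption and verify it on your own machine before relying on it:

        :: Assumed key/value -- sets the Office 2010 scheme for the current user (3 = Black)
        reg add "HKCU\Software\Microsoft\Office\14.0\Common" /v Theme /t REG_DWORD /d 3 /f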

    Read the article

  • In PLT scheme, can I export functions after another function has been called?

    - by Jason Baker
    I'm trying to create a binding to libpython using Scheme's FFI. To do this, I have to get the location of python, create the ffi-lib, and then create functions from it. So for instance I could do this:

        (module pyscheme scheme
          (require foreign)
          (unsafe!)
          (define (link-python [lib "/usr/lib/libpython2.6.so"])
            (ffi-lib lib)))

    This is all well and good, but I can't think of a way to export functions. For instance, I could do something like this:

        (define Py_Initialize
          (get-ffi-obj "Py_Initialize" libpython (_fun -> _void)))

    ...but then I'd have to store a reference to libpython (created by link-python) globally somehow. Is there any way to export these functions once link-python is called? In other words, I'd like someone using the module to be able to do this:

        (require pyscheme)
        (link-python)
        (Py_Initialize)

    ...or this:

        (require pyscheme)
        (link-python "/weird/location/for/libpython.so")
        (Py_Initialize)

    ...but have this give an error:

        (require pyscheme)
        (Py_Initialize)

    How can I do this?
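
    One shape the module could take (an untested sketch): keep the ffi-lib in a module-level cell that link-python sets, and have each exported wrapper resolve its foreign object at call time, raising an error when link-python has not been called yet:

        (module pyscheme scheme
          (require foreign)
          (unsafe!)
          (provide link-python Py_Initialize)

          ; holds the ffi-lib once link-python has been called
          (define libpython #f)

          (define (link-python [lib "/usr/lib/libpython2.6.so"])
            (set! libpython (ffi-lib lib)))

          (define (require-lib)
            (unless libpython
              (error 'pyscheme "call link-python before using the bindings"))
            libpython)

          ; the wrapper looks up the foreign function when it is called,
          ; not when the module is loaded
          (define (Py_Initialize)
            ((get-ffi-obj "Py_Initialize" (require-lib) (_fun -> _void)))))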

    Read the article

  • Ubuntu 10.04 preseed unattended install results in faulty partition table

    - by joschi
    I'm currently trying to set up an unattended installation of Ubuntu 10.04 (Lucid Lynx) through preseeding. But whenever I try to create a custom partition scheme, the Debian installer (which Ubuntu is using) produces a faulty partition table. I've taken the partition scheme described in the example preseed file:

        d-i partman-auto/expert_recipe string \
            boot-root :: \
                40 50 100 ext3 \
                    $primary{ } $bootable{ } \
                    method{ format } format{ } \
                    use_filesystem{ } filesystem{ ext3 } \
                    mountpoint{ /boot } \
                . \
                500 10000 1000000000 ext3 \
                    method{ format } format{ } \
                    use_filesystem{ } filesystem{ ext3 } \
                    mountpoint{ / } \
                . \
                64 512 300% linux-swap \
                    method{ swap } format{ } \
                .

    Unfortunately it also produces an incorrect partition table on the disk. The installation process itself is working and the installed system eventually boots and is working, as far as I can tell. But fdisk and cfdisk are still complaining:

        # fdisk -l /dev/sda

        Disk /dev/sda: 17.2 GB, 17179869184 bytes
        255 heads, 63 sectors/track, 2088 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000a1cdd

        Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        1           5       37888   83  Linux
        Partition 1 does not end on cylinder boundary.
        /dev/sda2            5        2089    16736257    5  Extended
        /dev/sda5            5        2013    16121856   83  Linux
        /dev/sda6         2013        2089      613376   82  Linux swap / Solaris

    cfdisk even refuses to start at all:

        # cfdisk /dev/sda
        FATAL ERROR: Bad primary partition 1: Partition ends in the final partial cylinder

    parted, on the other hand, does not complain about the cylinder boundary of /dev/sda1:

        # parted /dev/sda p
        Model: VMware Virtual disk (scsi)
        Disk /dev/sda: 17.2GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End     Size    Type      File system     Flags
         1      1049kB  39.8MB  38.8MB  primary   ext4            boot
         2      40.9MB  17.2GB  17.1GB  extended
         5      40.9MB  16.5GB  16.5GB  logical   ext4
         6      16.6GB  17.2GB  628MB   logical   linux-swap(v1)

    Since the installed system is working, it shouldn't be a big problem, but I'm afraid that this will mean trouble in the future.

    Read the article

  • re-partition new drive and use new partition as 'home'

    - by vector
    Linux noob here. I installed Ubuntu 12.04 on a brand new drive (dual boot with Windows on another drive) and re-partitioned it afterwards (with GParted off of a live CD) like so (sudo fdisk -l):

        Device Boot        Start         End      Blocks   Id  System
        /dev/sdb1   *       2048    63735807    31866880   83  Linux
        /dev/sdb2     1448509438  1465147391     8318977    5  Extended
        Partition 2 does not start on physical sector boundary.
        /dev/sdb3       63735808  1448507391   692385792   83  Linux
        /dev/sdb5     1448509440  1465147391     8318976   82  Linux swap / Solaris

    I'd like to use sdb3 as the default home for all work- and fun-related program installs and files, but I haven't even gotten as far as changing permissions on it. Any help will be most appreciated.
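
    For reference, the usual route is an /etc/fstab entry mounting the partition at /home. A minimal sketch (the UUID below is a placeholder; take the real one from blkid, and copy the existing home directories across before rebooting):

        # /etc/fstab -- mount the new partition as /home (UUID is made up)
        UUID=0a1b2c3d-4e5f-6789-abcd-ef0123456789  /home  ext4  defaults  0  2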

    Read the article

  • Windows 7: moved system partition, need to update boot partition

    - by Actorclavilis
    So, I have a decently standard Windows 7/Ubuntu dual-boot setup, and (since Ubuntu is my usual operating system) I found I needed to grow my Ubuntu partition and shrink my W7 partition. Originally, my system (500G) looked like this:

    W7 boot partition (1.5G)
    Ubuntu (around 240G)
    W7 (same as Ubuntu) (on an extended partition, all by itself)
    Swap (rest of disk, around 16G)

    Now I'm no stranger to partitioning and filesystem tools, especially GParted, which I used on a Linux boot disk. After my partition editing, the partitions are laid out the same, except the Ubuntu partition is now 407G and the W7 partition is smaller to compensate. I had supposed, based on http://www.gparted.org/faq.php, that I would be able to run the W7 install disk in recovery mode and have it deal with the rearrangement, then possibly reinstall GRUB or something. Well, now the W7 install disk doesn't even see my W7 installation. All my files are there, the NTFS is perfectly clean, no problems there, but the install disk won't notice it. (Of course, the GRUB entry works fine, but the W7 boot partition (which I didn't change) refuses to boot it.) So, basically, any ideas on how to fix this? I don't especially want to rerun the entire install procedure because I'll have a bunch of programs to reinstall (never mind redoing GRUB), but I fear that might be the only option. Thanks.

    Read the article

  • Recover an HP recovery partition

    - by eric.chartier
    I have a (semi-)dead hard drive with an HP recovery partition on it. My goal is to:

    Buy a new hard drive
    Copy the recovery partition to a file ( dd if=/dev/sdb1 of=~/recovery.bak )
    Make a new partition of 12000 MB with Windows 7
    Copy the recovery partition back to the new drive ( dd if=~/recovery.bak of=/dev/sdb1 )
    Then press F11 when the laptop boots

    However, this doesn't work. Any idea why? Edit: I suspect the F11 boot doesn't work because the laptop tries to boot the partition directly, since my partition is now the primary partition of the drive. Does anyone have any experience dealing with stuff like this?

    Read the article
