Search Results

Search found 20904 results on 837 pages for 'disk performance'.


  • Transfer disk image to larger/smaller disk

    - by forthrin
    I need to switch the hard drive on a 2006 iMac to a new SSD. I don't have the original installation CDs. I know I can order CDs from Apple, but this costs money. Someone told me it's possible to rip an image of the old drive and transfer it to the new drive. If so, does the size of the new drive have to be exactly the same as the old one? If not, my questions are: Is it possible to "stretch" the image from a 120 GB disk to a 256 GB disk (numbers are examples)? If so, what is the command line for this? Likewise, is it possible to "shrink" an image from a larger disk (e.g. 256 GB) to a smaller disk (e.g. 120 GB), provided that the actual space used on the disk does not exceed 120 GB? How do you do this on the command line?
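    A minimal sketch of one possible approach on OS X (the disk identifiers, image path and resize step below are placeholders, not from the question; check device names with diskutil list before running anything):

        # image the old drive while booted from another volume or Target Disk Mode
        sudo hdiutil create -srcdevice /dev/disk1 ~/old-drive.dmg
        sudo asr imagescan --source ~/old-drive.dmg
        # restore the image onto the new SSD (erases the target)
        sudo asr restore --source ~/old-drive.dmg --target /dev/disk2 --erase
        # growing into a larger disk (or shrinking beforehand) is a separate step,
        # e.g. diskutil resizeVolume on HFS+ volumes, if your OS X version supports it

    Shrinking only works when the used data fits on the smaller target, as noted in the question.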

    Read the article

  • I want to reformat my external hard disk but disk utility said it could not be unmounted

    - by Faith
    Hi, I wanted to reformat my HP 500 GB hard disk to MS-DOS (FAT) using Disk Utility, but Disk Utility kept saying that the hard disk cannot be unmounted. I tried googling it but nothing seems to answer my question. I'm not trying to reformat the hard disk in my Mac but my external hard disk! I usually format my hard disks to clear the documents I no longer need, and I'll change it to exFAT, but currently, whenever I plug the hard disk into Windows, the whole system just hangs. Can someone help?
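    A hedged sketch from Terminal, assuming the external drive shows up as /dev/disk2 (verify with diskutil list; the exact format keyword can be checked with diskutil listFilesystems):

        diskutil list
        diskutil unmountDisk force /dev/disk2
        diskutil eraseDisk ExFAT UNTITLED /dev/disk2      # or "MS-DOS FAT32" for plain FAT

    If the forced unmount still fails, something (Spotlight indexing, an open Finder window, antivirus) is holding the volume; lsof | grep /Volumes can show the culprit.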

    Read the article

  • Graphics performance of 945GME

    - by l0b0
    Edit: Since setting Appearance - Visual Effects up to a stunning "Normal", I now get ~35 FPS in glxgears right after login, with nothing else running :( I'm getting terrible graphics performance in NeverWinter Nights (native, with SoU+HotU+CEP2) on my Eee PC 1005HAB. Even with all graphics settings (including the "advanced" ones) at minimum I get about 2-10 FPS, depending on the scene. Firefox is really sluggish as well - changing tabs often takes a second, scrolling is laggy, and while typing this I notice the delay between pressing keys and seeing the text on screen. The rest of the OS runs OK, although general performance seems to be even worse than on my old Eee PC 900. glxgears gives about 60 FPS, which is apparently as it should be (synchronized with the monitor refresh rate). Bugs like Launchpad #252094 and the instructions for Reverting the Jaunty Xorg intel driver to 2.4 are old enough that I'm afraid following them would render the system unusable. Are there any tips for improving graphics performance on this system that are still relevant for 10.10?
    $ uname -a
    Linux l0b0eee 2.6.35-28-generic #49-Ubuntu SMP Tue Mar 1 14:40:58 UTC 2011 i686 GNU/Linux
    $ lspci -nn | grep VGA
    00:02.0 VGA compatible controller [0300]: Intel Corporation Mobile 945GME Express Integrated Graphics Controller [8086:27ae] (rev 03)
    $ glxinfo
    name of display: :0.0
    display: :0 screen: 0
    direct rendering: Yes
    server glx vendor string: SGI
    server glx version string: 1.4
    ...

    Read the article

  • Bad 3D Performance in Ubuntu 12.04

    - by Pandem
    I already posted a question before but I didn't really get any advice/help, so I'll be a bit more brief/general in the hope it'll help. I have an MSI HD 7850 with the Catalyst 12.4 drivers installed. I've found that I'm having bad 3D performance for some reason, but I'm not entirely sure why. I suspect it may just be that the graphics card is new and AMD still needs to work on its drivers, but it would be nice to get advice and narrow the problem down so that I can be sure rather than wait for driver updates that may not even help. I ran glxgears to give some general idea of how bad the performance is: at the default window size it averages around 2000 FPS. The command glxinfo confirms the renderer is using AMD Radeon HD 7800 Series with OpenGL version 4.2. Edits below, as asked for by others: lspci -v output is here; fglrxinfo output is here; xvinfo output is here; glxinfo | grep rendering says yes for direct rendering. These confirmed that everything was configured correctly. Within Unity and GNOME Classic: glxgears had an FPS of around 2000, fgl_glxgears an FPS of around 544. Within LXDE: glxgears had an FPS of around 4600, fgl_glxgears an FPS of around 1600. In the end it was discovered that Compiz was causing a large performance decrease, and the solution was simply to change window manager for the time being. Thanks to TechZilla for all his help!
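    For anyone hitting the same symptoms, a quick hedged way to test whether the compositing window manager is the bottleneck (assumes a GNOME Classic session where Metacity is available):

        metacity --replace &      # temporarily swap Compiz out for a non-compositing WM
        glxgears                  # re-run the benchmark and compare
        compiz --replace &        # switch back afterwards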

    Read the article

  • Thunderbird 16.0.1 filling disk space

    - by Kris
    I'm on Ubuntu 12.04 with Thunderbird 16.0.1 and kernel 3.6.0-030600rc4-generic. I have used Thunderbird for quite a while and never had any problems with it. But now it seems to fill up my disk space very fast (watched with watch -n 1 df -h), so Ubuntu started giving out warnings. First I removed some files, but not much later it had filled up around 600 MB again. It eats around 50 MB/min while I just download 10 emails or so via IMAP. This behaviour is new and seems to be some kind of bug. I don't want to delete my old mails, so what else could I do?
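    A small sketch for narrowing down what is actually growing (the profile path is the usual default and may differ on your system):

        du -sh ~/.thunderbird/*/* 2>/dev/null | sort -h | tail -n 15
        # any file over 50 MB modified in the last hour, anywhere under $HOME
        find ~ -xdev -type f -size +50M -mmin -60 -exec ls -lh {} \;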

    Read the article

  • Investigate disk writes further to find out which process writes to my SSD

    - by zuba
    I'm trying to minimize disk writes to my new SSD system drive. I'm stuck with this iostat output:
    ~ > iostat -d 10 /dev/sdb
    Linux 2.6.32-44-generic (Pluto) 13.11.2012 _i686_ (2 CPU)
    Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
    sdb 8,60 212,67 119,45 21010156 11800488
    Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
    sdb 3,00 0,00 40,00 0 400
    Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
    sdb 1,70 0,00 18,40 0 184
    Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
    sdb 1,20 0,00 28,80 0 288
    Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
    sdb 2,20 0,00 32,80 0 328
    Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
    sdb 1,20 0,00 23,20 0 232
    Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
    sdb 3,40 19,20 42,40 192 424
    As I can see, there are writes to sdb. How can I find out which process is doing the writing? I know about iotop, but it doesn't show which filesystem is being accessed.
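    A hedged sketch of the classic block_dump trick for that kernel generation (2.6.x); note that the syslog daemon's own writes can add noise, so some people stop it while sampling:

        sudo sh -c 'echo 1 > /proc/sys/vm/block_dump'
        sleep 30                                         # let normal activity accumulate
        dmesg | egrep 'WRITE|dirtied' | tail -n 50       # lines name the process and the block device
        sudo sh -c 'echo 0 > /proc/sys/vm/block_dump'
        # alternative: sudo iotop -o -a   (accumulated per-process I/O, only active processes)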

    Read the article

  • External disk doesn't work on Ubuntu

    - by Skirki
    I have a problem in Ubuntu: when I connect an external Imation Apollo disk I don't see a file system. The disk itself is working, because on Windows on this same computer it works fine. When I typed dmesg I got the following logs:
    [12913.260078] usb 4-1: new low speed USB device using ohci_hcd and address 15
    [12913.440072] usb 4-1: device descriptor read/64, error -62
    [12913.730058] usb 4-1: device descriptor read/64, error -62
    [12914.020085] usb 4-1: new low speed USB device using ohci_hcd and address 16
    [12914.200082] usb 4-1: device descriptor read/64, error -62
    [12914.490150] usb 4-1: device descriptor read/64, error -62
    [12914.780064] usb 4-1: new low speed USB device using ohci_hcd and address 17
    [12915.200070] usb 4-1: device not accepting address 17, error -62
    [12915.380081] usb 4-1: new low speed USB device using ohci_hcd and address 18
    [12915.800065] usb 4-1: device not accepting address 18, error -62
    [12915.800100] hub 4-0:1.0: unable to enumerate USB device on port 1
    I tried to fix it with these commands:
    cd /sys/bus/pci/drivers/ehci_hcd
    echo -n "0000:00:02.1" > unbind
    but it did not produce the expected result, and I don't have any other ideas.
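    A hedged version of that unbind/bind cycle (the controller address is the one from the question; a matching bind afterwards re-attaches the controller):

        cd /sys/bus/pci/drivers/ehci_hcd
        ls                                              # confirm the controller address is listed here
        sudo sh -c 'echo -n "0000:00:02.1" > unbind'
        sleep 5
        sudo sh -c 'echo -n "0000:00:02.1" > bind'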

    Read the article

  • SSD becomes hot, disk failure warning

    - by Aegluin
    I have a two-week-old SSD (Kingston SSDNow 64GB). Yesterday the computer shut down twice, and after rebooting I was bombarded with disk failure warnings. I usually take such warnings seriously (and backed up), but I'm skeptical. After cooling down, the laptop boots again, and the only red SMART value was the temperature (Ubuntu did not show the temperature at the moment of failure, but it read 29° by then). After refreshing the SMART status and doing a self-test, everything is green. Before contacting Kingston support, I would like to know whether it could be a software issue: Is it possible that it is a false alarm, and how can I check? I installed Ubuntu 12.04 32-bit and took care of alignment. I assumed Ubuntu was set up with optimal settings for SSDs; how can I check that there was no mistake? The current temperature is around 40-56°. Is such a temperature abnormal for SSDs? Output of sudo smartctl --all /dev/sda: http://pastebin.ubuntu.com/1175940/
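    A short sketch for re-checking the values in question with smartctl (device name as in the pastebin):

        sudo smartctl -A /dev/sda | grep -iE 'temperature|airflow'   # current temperature attributes
        sudo smartctl -t short /dev/sda                              # queue a short self-test
        sudo smartctl -l selftest /dev/sda                           # read the self-test log afterwards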

    Read the article

  • ddrescue a non-solaris disk gets I/O error

    - by rleir
    I am trying to get a disk image of a SCSI disk using ddrescue on Solaris 10 (SPARC). The disk is non-Solaris, and ddrescue gets an immediate I/O error (as does dd). I used format to label the disk as Solaris, and now ddrescue reads it fine. Is there any way to get the image without labelling the disk as Solaris?

    Read the article

  • Performance analysis strategies

    - by Bernd
    I am assigned to a performance-tuning-debugging-troubleshooting task. Scenario: a multi-application environment running on several networked machines using databases. The OS is Unix, the DB is Oracle. Business logic is implemented across applications using synchronous/asynchronous communication. The applications are multi-user, with several hundred call center users at peak time. User interfaces are web-based. The applications are third party; I can get access to developers and source code. I only have the production system and a functional test environment, no load test environment. Problem: bad performance! I need fast results. Management is going crazy. I got symptom examples like these: user interface actions taking minutes to complete; searching for a customer usually takes 6 seconds, but an immediate subsequent search with the same parameters may take 6 minutes. What would be your strategy for finding root causes?

    Read the article

  • Cloned partition not seen correctly by disk utility & gparted

    - by enrico
    Some days ago I cloned my /dev/sda1 partition with Clonezilla in partition-to-partition mode to /dev/sda3. It worked, but now that I've finished setting up the system in /dev/sda3, I want to reuse /dev/sda1 for other things. This partition is NOT mounted, but Ubuntu's Disk Utility thinks it is, while it doesn't see the currently active / partition /dev/sda3 as mounted. This is the df -TH output:
    Filesystem Type Size Used Avail Use% Mounted on
    /dev/sda3 ext4 32G 6.2G 24G 21% /
    udev devtmpfs 2.2G 13k 2.2G 1% /dev
    tmpfs tmpfs 845M 906k 844M 1% /run
    none tmpfs 5.3M 0 5.3M 0% /run/lock
    none tmpfs 2.2G 115k 2.2G 1% /run/shm
    /dev/sdb1 fuseblk 321G 147G 174G 46% /Dati
    GParted, instead, sees /dev/sda1 as NOT mounted (checked with the Information option), but it displays the BOOT flag on this partition, while the real booted partition /dev/sda3 doesn't have it. If I try to format the /dev/sda1 partition, it gives me this error:
    GParted 0.8.1 --enable-libparted-dmraid
    Libparted 2.3
    Format /dev/sda1 as ext2 00:00:02 ( ERROR )
    calibrate /dev/sda1 00:00:00 ( SUCCESS )
    path: /dev/sda1
    start: 2048
    end: 62500863
    size: 62498816 (29.80 GiB)
    set partition type on /dev/sda1 00:00:02 ( SUCCESS )
    new partition type: ext2
    create new ext2 file system 00:00:00 ( ERROR )
    mkfs.ext2 -L "" /dev/sda1
    mke2fs 1.41.14 (22-Dec-2010)
    /dev/sda1 is mounted; will not make a filesystem here!
    How is it possible to correct this behaviour? Is this due to some option I missed in the Clonezilla phase? TIA Enrico
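    One thing worth checking before formatting - a hedged guess is that partition-to-partition cloning left both partitions with the same filesystem UUID, which can confuse the mount bookkeeping:

        sudo blkid /dev/sda1 /dev/sda3       # do the two partitions share a UUID?
        grep sda1 /proc/mounts /etc/mtab     # which record claims sda1 is mounted?
        sudo tune2fs -U random /dev/sda1     # give the old, unused partition a fresh UUID (ext2/3/4 only)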

    Read the article

  • Disk Space Full

    - by Loki
    While setting up an Ubuntu 10.04 server, the / disk space shows as full under df, however du does not show any of the space used. This box has several mounts to GlusterFS volumes. I have tried a forced fsck, to no avail.
    ~# df -h
    Filesystem Size Used Avail Use% Mounted on
    /dev/md0 141G 132G 0 100% /
    none 3.0G 224K 3.0G 1% /dev
    none 3.0G 0 3.0G 0% /dev/shm
    none 3.0G 76K 3.0G 1% /var/run
    none 3.0G 0 3.0G 0% /var/lock
    none 3.0G 0 3.0G 0% /lib/init/rw
    /dev/sdb1 9.0T 7.1T 1.9T 80% /brick1
    /dev/sdb2 9.0T 7.9T 1.1T 88% /brick2
    localhost:/sanvol09 385T 330T 56T 86% /mnt/sanvol09 <- GlusterFS uses local software to contact the DFS
    I've attempted a tune2fs and the same issue arises.
    # du -h --max-depth=1 --one-file-system /
    4.0K /selinux
    0 /proc
    47M /boot
    31M /mnt
    8.0K /brick1
    8.0K /brick2
    391M /lib
    4.0K /opt
    7.4M /bin
    0 /sys
    379M /var
    5.6M /etc
    16K /lost+found
    43M /root
    4.0K /srv
    5.7M /home
    4.0K /media
    7.0M /sbin
    0 /dev
    4.0K /tmp
    4.0K /cdrom
    631M /usr
    1.6G /
    More info:
    # df -ih
    Filesystem Inodes IUsed IFree IUse% Mounted on
    /dev/md0 9.0M 91K 8.9M 1% /
    none 746K 770 745K 1% /dev
    none 747K 1 747K 1% /dev/shm
    none 747K 32 747K 1% /var/run
    none 747K 1 747K 1% /var/lock
    none 747K 3 747K 1% /lib/init/rw
    /dev/sdb1 583M 1.8M 581M 1% /brick1
    /dev/sdb2 583M 1.9M 581M 1% /brick2
    localhost:/sanvol09 25G 76M 25G 1% /mnt/sanvol09
    The final question: df shows 100% used and it's not; are there any other known fixes?
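    When df and du disagree like this, a common culprit is a deleted file still held open by a running process; a quick hedged check:

        sudo lsof +L1                        # open files whose link count is 0 (deleted but not released)
        sudo lsof -nP | grep '(deleted)'     # alternative listing; the SIZE/OFF column shows how much is held
        # restarting (or reloading) the offending process releases the space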

    Read the article

  • Cloning a failing disk (Win 7)

    - by daveh551
    I have a Windows 7 machine with several partitions on a 1.5 TB drive. Windows has been complaining about disk errors and imminent failure, so I have purchased a new 2 TB drive. The failing disk has not completely failed; in fact, I was able to boot Windows from it (after a couple of tries) and examine the SMART logs - the only red item was 1 sector being reallocated. But when I try to clone it to the new drive using Acronis True Image Home (2010), True Image can see the drive, the partitions, and the contents, but when it goes to actually do the clone, it says "Failed to move. Make sure the destination disk is not smaller than the source disk, and that there are no errors on the disk" (or something like that). What are some other options for simply cloning the failing drive? I'd like to clone the entire disk, but am willing to do it partition by partition if necessary. Was this a known failing of the 2010 edition of ATI, or is it really something hosed in my system? Would upgrading to the 2012 edition be likely to work any better? (I'd download the trial and try it out, but if I remember right, the cloning operation is disabled in the trial version, and I don't have enough free disk space to make an entire image.) What are some other cloning software packages if ATI won't work? Note that I'm only looking to clone the disk, not make an image as a backup - I use Ghost for that, and can fall back to it if I have to. It looks to me like Clonezilla would do the job. Any recommendations? Thanks, and if this duplicates other questions, I apologize.
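    For a drive that is already throwing errors, GNU ddrescue from a live Linux USB/CD is a common fallback; a hedged sketch, assuming the failing disk is /dev/sda and the new one is /dev/sdb (double-check with lsblk, the copy is destructive):

        sudo ddrescue -f -n /dev/sda /dev/sdb rescue.log     # fast first pass, skipping problem areas
        sudo ddrescue -f -r3 /dev/sda /dev/sdb rescue.log    # second pass, retrying the bad spots
        # the extra 0.5 TB stays unallocated; grow or add partitions afterwards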

    Read the article

  • disk write cache buffer and separate power supply

    - by HugoRune
    Windows has a setting to turn off the write-cache buffer (see image): "Turn off Windows write-cache buffer flushing on the device - To prevent data loss, do not select this check box unless the device has a separate power supply that allows the device to flush its buffer in case of power failure." Is it feasible and economical to get such a "separate power supply" for the internal SATA drives of a non-server PC? Under what name is such a power supply sold? I know that there are UPS devices that can be connected to external drives, but what is required to be able to switch this setting on safely for an internal disk? The setting has different descriptions in different versions of Windows:
    Windows XP: "Enable write caching on the disk - This setting enables write caching in Windows to improve disk performance, but a power outage or equipment failure might result in data loss or corruption."
    Windows Server 2003: "Enable write caching on the disk - Recommended only for disks with a backup power supply. This setting further improves disk performance, but it also increases the risk of data loss if the disk loses power."
    Windows Vista: "Enable advanced performance - Recommended only for disks with a backup power supply. This setting further improves disk performance, but it also increases the risk of data loss if the disk loses power."
    Windows 7 and 8: "Turn off Windows write-cache buffer flushing on the device - To prevent data loss, do not select this check box unless the device has a separate power supply that allows the device to flush its buffer in case of power failure."
    This article by Raymond Chen has some more detailed information about what the setting does.

    Read the article

  • Practical Performance Monitoring and Tuning Event

    - by Andrew Kelly
      For any of you who may be interested, or who know of someone in the market for a performance monitoring and tuning class, I have just the ticket for you. It's a 3-day event that will be held in Atlanta, GA, from January 25th to the 27th, 2011. For those of you who know me or have been to my sessions, you realize I like to provide more than just classroom theory and like to teach real-world and above all practical methodology when it comes to performance in SQL Server. This class covers all the essentials...(read more)

    Read the article

  • SQL SERVER – Simple Example of Incremental Statistics – Performance improvements in SQL Server 2014 – Part 2

    - by Pinal Dave
    This is the second part of the series on Incremental Statistics. Here is the index of the complete series:
    What is Incremental Statistics? – Performance improvements in SQL Server 2014 – Part 1
    Simple Example of Incremental Statistics – Performance improvements in SQL Server 2014 – Part 2
    DMV to Identify Incremental Statistics – Performance improvements in SQL Server 2014 – Part 3
    In part 1 we understood what incremental statistics are, and now in this second part we will see a simple example of incremental statistics. This blog post is heavily inspired by my friend Balmukund's must-read blog post. If you have a partitioned table and lots of data, this feature can be particularly useful.
    Prerequisite
    Here are two things you must know before you start with the demonstrations.
    AdventureWorks – For demonstration purposes I have installed AdventureWorks 2012 as AdventureWorks 2014.
    Partitions – You should know how partitioning works with databases.
    Setup Script
    Here is the setup script for creating the partition function, the partition scheme, and the table. We will populate the table based on the SalesOrderDetail table from AdventureWorks.
    -- Use Database
    USE AdventureWorks2014
    GO
    -- Create Partition Function
    CREATE PARTITION FUNCTION IncrStatFn (INT)
    AS RANGE LEFT FOR VALUES (44000, 54000, 64000, 74000)
    GO
    -- Create Partition Scheme
    CREATE PARTITION SCHEME IncrStatSch
    AS PARTITION [IncrStatFn] TO ([PRIMARY], [PRIMARY], [PRIMARY], [PRIMARY], [PRIMARY])
    GO
    -- Create Table Incremental_Statistics
    CREATE TABLE [IncrStatTab](
        [SalesOrderID] [int] NOT NULL,
        [SalesOrderDetailID] [int] NOT NULL,
        [CarrierTrackingNumber] [nvarchar](25) NULL,
        [OrderQty] [smallint] NOT NULL,
        [ProductID] [int] NOT NULL,
        [SpecialOfferID] [int] NOT NULL,
        [UnitPrice] [money] NOT NULL,
        [UnitPriceDiscount] [money] NOT NULL,
        [ModifiedDate] [datetime] NOT NULL)
    ON IncrStatSch(SalesOrderID)
    GO
    -- Populate Table
    INSERT INTO [IncrStatTab]([SalesOrderID], [SalesOrderDetailID], [CarrierTrackingNumber], [OrderQty], [ProductID], [SpecialOfferID], [UnitPrice], [UnitPriceDiscount], [ModifiedDate])
    SELECT [SalesOrderID], [SalesOrderDetailID], [CarrierTrackingNumber], [OrderQty], [ProductID], [SpecialOfferID], [UnitPrice], [UnitPriceDiscount], [ModifiedDate]
    FROM [Sales].[SalesOrderDetail]
    WHERE SalesOrderID < 54000
    GO
    Check Details
    Now we will check the partition details of the table.
    -- Check the partition
    SELECT * FROM sys.partitions
    WHERE OBJECT_ID = OBJECT_ID('IncrStatTab')
    GO
    You will notice that only a few of the partitions are filled with data and the remaining partitions are empty.
    Now we will create statistics on the table on the column SalesOrderID. However, here we will add one more keyword, INCREMENTAL = ON. Please note this is a new keyword and feature added in SQL Server 2014; it did not exist in earlier versions.
    -- Create Statistics
    CREATE STATISTICS IncrStat ON [IncrStatTab] (SalesOrderID)
    WITH FULLSCAN, INCREMENTAL = ON
    GO
    Now that we have successfully created the statistics, let us check the statistical histogram of the table. Then let us once again populate the table with more data. This time the data is entered into a different partition than the one populated earlier.
    -- Populate Table
    INSERT INTO [IncrStatTab]([SalesOrderID], [SalesOrderDetailID], [CarrierTrackingNumber], [OrderQty], [ProductID], [SpecialOfferID], [UnitPrice], [UnitPriceDiscount], [ModifiedDate])
    SELECT [SalesOrderID], [SalesOrderDetailID], [CarrierTrackingNumber], [OrderQty], [ProductID], [SpecialOfferID], [UnitPrice], [UnitPriceDiscount], [ModifiedDate]
    FROM [Sales].[SalesOrderDetail]
    WHERE SalesOrderID > 54000
    GO
    Let us check the status of the partitions once again with the following script.
    -- Check the partition
    SELECT * FROM sys.partitions
    WHERE OBJECT_ID = OBJECT_ID('IncrStatTab')
    GO
    Statistics Update
    Here the new feature comes into action. Previously, to update the statistics we would have to FULLSCAN the entire table irrespective of which partition received the data. In SQL Server 2014, however, we can specify which partitions we want to update statistics on. Here is the script for that.
    -- Update Statistics Manually
    UPDATE STATISTICS IncrStatTab (IncrStat)
    WITH RESAMPLE ON PARTITIONS(3, 4)
    GO
    Now let us check the statistics once again.
    -- Show Statistics
    DBCC SHOW_STATISTICS('IncrStatTab', IncrStat) WITH HISTOGRAM
    GO
    Upon examining the statistics histogram, you will notice that the distribution has now changed and there are far more rows in the histogram.
    Summary
    The new Incremental Statistics feature is indeed a boon for scenarios where there are partitions and statistics need to be updated frequently. In earlier versions, updating statistics required a FULLSCAN of the entire table, which wasted a lot of resources. With the new feature in SQL Server 2014, only those partitions which have changed significantly need to be specified in the script to update statistics.
    Cleanup
    You can clean up the database by executing the following scripts.
    -- Clean up
    DROP TABLE [IncrStatTab]
    DROP PARTITION SCHEME [IncrStatSch]
    DROP PARTITION FUNCTION [IncrStatFn]
    GO
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
    Tagged: SQL Statistics, Statistics

    Read the article

  • How to achieve best performance in DirectX 9.0 while rendering on multiple monitors

    - by Vibhore Tanwer
    I am new to DirectX and trying to learn best practices. What are the best practices for rendering different things on multiple monitors at the same time, and how can I boost the performance of the application? I have gone through this article: http://msdn.microsoft.com/en-us/library/windows/desktop/bb147263%28v=vs.85%29.aspx . I am making use of some pixel shaders to achieve some effects. At most 4 effects (4 shader effects) can be applied at the same time. What are the best practices to achieve the best performance with DirectX 9.0? I read somewhere that DirectX 11 provides support for parallel rendering, but I am not able to get any working sample for DirectX 11.0. Please help me with this; any help would be of great value. Thanks

    Read the article

  • SQL SERVER – BI Quiz – Troubleshooting Cube Performance

    - by pinaldave
    My friend Jacob Sebastian runs the SQL BI Quiz competition, with 30 different questions, one for each day of the month. Participants get the opportunity to take part in the quiz, learn something new and win great awards. Working with huge amounts of data is very common in data warehousing. It is necessary to create cubes on the data to make it meaningful and consumable. There are cases when retrieving the data from a cube takes a lot of time. Let us assume that your cube is returning data very quickly. Suddenly, one day, it starts returning the data very slowly. What are the three things you will do in order to diagnose this? After diagnosing it, what will you do to resolve the performance issue? Participate in my question over here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Pinal Dave, PostADay, Readers Question, SQL, SQL Authority, SQL Performance, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Will JVisualVM degrade application performance?

    - by rocky
    I have doubts about the JVisualVM profiler tool related to performance. I have a requirement to implement JVM monitoring for my enterprise Java application. I have gone through some profiling tools on the market, but all of them have some kind of agent file which we need to include in the server startup. I fear that these agents will degrade my application's performance even more. So I have decided on JVisualVM, because this profiler tool comes with the JDK itself. But before implementing JVisualVM: has anybody faced any issues with the JVisualVM profiler tool? Also, is it safe to use it against my application?
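    If the worry is about attaching profiler agents, a lighter-weight hedged option is to expose plain JMX and let VisualVM (or jstat) connect without any bytecode instrumentation; the port and launch command below are placeholders:

        java -Dcom.sun.management.jmxremote \
             -Dcom.sun.management.jmxremote.port=9010 \
             -Dcom.sun.management.jmxremote.authenticate=false \
             -Dcom.sun.management.jmxremote.ssl=false \
             -jar myapp.jar
        # disabling auth/SSL is only acceptable on a trusted network
        jstat -gcutil <pid> 1000    # GC and heap numbers with no UI attached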

    Read the article

  • Programmer performance

    - by RSK
    I am a PHP programmer with 1 year of experience. As I am just starting my career, I am learning a lot of things now. I can say I am a little bit of a perfectionist. When I am assigned a problem I start off by Googling. Then, even when I find a solution, I keep trying for a better one until I find 2-3 options. Then I start learning it and choose the best performing solution. Even though I am learning a lot, this process gets me labeled as a low performer. My questions: As a novice, should I continue to use this learning process and not worry about my performance? Should I focus more on my performance and less on how the code performs?

    Read the article

  • Using TPL and PLINQ to raise performance of feed aggregator

    - by DigiMortal
    In this posting I will show you how to use Task Parallel Library (TPL) and PLINQ features to boost the performance of a simple RSS-feed aggregator. I will use only very basic .NET classes that almost every developer starts from when learning parallel programming. Of course, we will also measure how every optimization affects the performance of the feed aggregator.
    Feed aggregator
    Our feed aggregator works as follows: load the list of blogs, download each RSS feed, parse the feed XML, and add new posts to the database. The feed aggregator is run by a task scheduler every 15 minutes, for example. We will start our journey with a serial implementation of the feed aggregator. The second step is to use task parallelism and parallelize feed downloading and parsing. Our last step is to use data parallelism to parallelize the database operations. We will use the Stopwatch class to measure how much time it takes for the aggregator to download and insert all posts from all registered blogs. After every run we empty the posts table in the database.
    Serial aggregation
    Before doing parallel stuff let's take a look at the serial implementation of the feed aggregator. All tasks happen one after the other.
    internal class FeedClient
    {
        private readonly INewsService _newsService;
        private const int FeedItemContentMaxLength = 255;

        public FeedClient()
        {
            ObjectFactory.Initialize(container =>
            {
                container.PullConfigurationFromAppConfig = true;
            });

            _newsService = ObjectFactory.GetInstance<INewsService>();
        }

        public void Execute()
        {
            var blogs = _newsService.ListPublishedBlogs();

            for (var index = 0; index < blogs.Count; index++)
            {
                ImportFeed(blogs[index]);
            }
        }

        private void ImportFeed(BlogDto blog)
        {
            if (blog == null)
                return;
            if (string.IsNullOrEmpty(blog.RssUrl))
                return;

            var uri = new Uri(blog.RssUrl);
            SyndicationContentFormat feedFormat;

            feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

            if (feedFormat == SyndicationContentFormat.Rss)
                ImportRssFeed(blog);
            if (feedFormat == SyndicationContentFormat.Atom)
                ImportAtomFeed(blog);
        }

        private void ImportRssFeed(BlogDto blog)
        {
            var uri = new Uri(blog.RssUrl);
            var feed = RssFeed.Create(uri);

            foreach (var item in feed.Channel.Items)
            {
                SaveRssFeedItem(item, blog.Id, blog.CreatedById);
            }
        }

        private void ImportAtomFeed(BlogDto blog)
        {
            var uri = new Uri(blog.RssUrl);
            var feed = AtomFeed.Create(uri);

            foreach (var item in feed.Entries)
            {
                SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);
            }
        }
    }
    The serial implementation of the feed aggregator downloads and inserts all posts in 25.46 seconds.
    Task parallelism
    Task parallelism means that separate tasks are run in parallel. You can find out more about task parallelism on the MSDN page Task Parallelism (Task Parallel Library) and the Wikipedia page Task parallelism. Although finding parts of code that can run safely in parallel without synchronization issues is not an easy task, we are lucky this time. Feed import and parsing is a perfect candidate for parallel tasks. We can safely parallelize the feed import because the importing tasks don't share any resources and therefore don't need any synchronization.
    After getting the list of blogs we iterate through the collection and start a new TPL task for each blog feed aggregation.
    internal class FeedClient
    {
        private readonly INewsService _newsService;
        private const int FeedItemContentMaxLength = 255;

        public FeedClient()
        {
            ObjectFactory.Initialize(container =>
            {
                container.PullConfigurationFromAppConfig = true;
            });

            _newsService = ObjectFactory.GetInstance<INewsService>();
        }

        public void Execute()
        {
            var blogs = _newsService.ListPublishedBlogs();
            var tasks = new Task[blogs.Count];

            for (var index = 0; index < blogs.Count; index++)
            {
                tasks[index] = new Task(ImportFeed, blogs[index]);
                tasks[index].Start();
            }

            Task.WaitAll(tasks);
        }

        private void ImportFeed(object blogObject)
        {
            if (blogObject == null)
                return;
            var blog = (BlogDto)blogObject;
            if (string.IsNullOrEmpty(blog.RssUrl))
                return;

            var uri = new Uri(blog.RssUrl);
            SyndicationContentFormat feedFormat;

            feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

            if (feedFormat == SyndicationContentFormat.Rss)
                ImportRssFeed(blog);
            if (feedFormat == SyndicationContentFormat.Atom)
                ImportAtomFeed(blog);
        }

        private void ImportRssFeed(BlogDto blog)
        {
            var uri = new Uri(blog.RssUrl);
            var feed = RssFeed.Create(uri);

            foreach (var item in feed.Channel.Items)
            {
                SaveRssFeedItem(item, blog.Id, blog.CreatedById);
            }
        }

        private void ImportAtomFeed(BlogDto blog)
        {
            var uri = new Uri(blog.RssUrl);
            var feed = AtomFeed.Create(uri);

            foreach (var item in feed.Entries)
            {
                SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);
            }
        }
    }
    You should notice the first signs of the power of TPL. We made only minor changes to our code to parallelize blog feed aggregation. On my machine this modification gives some performance boost – the time is now 17.57 seconds.
    Data parallelism
    There is one more way to parallelize activities. The previous section introduced task (or operation) based parallelism; this section introduces data based parallelism. Per the MSDN page Data Parallelism (Task Parallel Library), data parallelism refers to a scenario in which the same operation is performed concurrently on elements in a source collection or array. In our code we have independent collections we can process in parallel – the imported feed entries. As checking for a feed entry's existence and inserting it if it is missing from the database doesn't affect other entries, the imported feed entries collection is an ideal candidate for parallelization.
    internal class FeedClient
    {
        private readonly INewsService _newsService;
        private const int FeedItemContentMaxLength = 255;

        public FeedClient()
        {
            ObjectFactory.Initialize(container =>
            {
                container.PullConfigurationFromAppConfig = true;
            });

            _newsService = ObjectFactory.GetInstance<INewsService>();
        }

        public void Execute()
        {
            var blogs = _newsService.ListPublishedBlogs();
            var tasks = new Task[blogs.Count];

            for (var index = 0; index < blogs.Count; index++)
            {
                tasks[index] = new Task(ImportFeed, blogs[index]);
                tasks[index].Start();
            }

            Task.WaitAll(tasks);
        }

        private void ImportFeed(object blogObject)
        {
            if (blogObject == null)
                return;
            var blog = (BlogDto)blogObject;
            if (string.IsNullOrEmpty(blog.RssUrl))
                return;

            var uri = new Uri(blog.RssUrl);
            SyndicationContentFormat feedFormat;

            feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

            if (feedFormat == SyndicationContentFormat.Rss)
                ImportRssFeed(blog);
            if (feedFormat == SyndicationContentFormat.Atom)
                ImportAtomFeed(blog);
        }

        private void ImportRssFeed(BlogDto blog)
        {
            var uri = new Uri(blog.RssUrl);
            var feed = RssFeed.Create(uri);

            feed.Channel.Items.AsParallel().ForAll(a =>
            {
                SaveRssFeedItem(a, blog.Id, blog.CreatedById);
            });
        }

        private void ImportAtomFeed(BlogDto blog)
        {
            var uri = new Uri(blog.RssUrl);
            var feed = AtomFeed.Create(uri);

            feed.Entries.AsParallel().ForAll(a =>
            {
                SaveAtomFeedEntry(a, blog.Id, blog.CreatedById);
            });
        }
    }
    We made a small change again and as a result we parallelized the checking and saving of feed items. This change was data centric, as we applied the same operation to all elements in the collection. On my machine I got better performance again: the time is now 11.22 seconds.
    Results
    Let's summarize the measurement results (numbers are given in seconds): serial 25.46, task parallelism 17.57, task plus data parallelism 11.22. As we can see, with task parallelism feed aggregation takes about 25% less time than in the original case. When adding data parallelism to task parallelism our aggregation takes about 2.3 times less time than in the original case.
    More about TPL and PLINQ
    Adding parallelism to your application can be a very challenging task. You have to carefully find the parts of your code where you can safely go parallel, and even then you have to measure the effects to find out whether the parallel code actually performs better. If you are not careful, the troubles you face later are worse than the ones you have seen before (imagine an error that occurs on average only once per 10000 code runs). Parallel programming is something that is hard to ignore. Effective programs are able to use multiple processor cores. Using TPL you can also set the degree of parallelism so your application doesn't use all computing cores and leaves one or more of them free for the host system and other processes. And there are many more things in TPL that make it easier for you to start and go on with parallel programming. In the next major version all .NET languages will have built-in support for parallel programming, and there will also be new language constructs that support it. Currently you can download Visual Studio Async to get some idea about what is coming.
    Conclusion
    Parallel programming is very challenging, but the good tools offered by Visual Studio and the .NET Framework make it way easier for us. In this posting we started with a feed aggregator that imports feed items in serial mode. With two steps we parallelized feed importing and entry inserting, gaining a 2.3 times rise in performance. Although this number is specific to my test environment, it shows clearly that parallel programming may raise the performance of your application significantly.

    Read the article

  • SQL SERVER – NTFS File System Performance for SQL Server

    - by pinaldave
    Note: Before putting any suggestion from this article into practice, consult your IT infrastructure admin; applying a suggestion without proper testing can damage your system. Question: "Pinal, we have 80 GB of data including all the database files, and our data is on an NTFS file system. We have proper backups set up. Any suggestions for improving our NTFS file system performance? Our SQL Server box is running only SQL Server and nothing else. Please advise." Questions like the one I have just listed often send me into deep thought. Honestly, I know a lot, but there are plenty of things I believe can be built with the community's knowledge base. Today I need you to help me complete this list. I will start the list and you help me complete it.
    NTFS File System Performance Best Practices for SQL Server:
    Disable indexing on disk volumes
    Disable generation of 8.3 names (command: FSUTIL BEHAVIOR SET DISABLE8DOT3 1)
    Disable last file access time tracking (command: FSUTIL BEHAVIOR SET DISABLELASTACCESS 1)
    Keep some space empty (let us say 15% for reference) on the drive if possible (only on the FILESTREAM data storage volume)
    Defragment the volume
    Add your suggestions here…
    The one on which I often get a pretty big debate is the NTFS allocation unit size. I have seen that on the disk volume which stores FILESTREAM data, increasing the allocation unit from 4K to 64K reduces fragmentation. Again, I suggest you attempt this only after proper testing on your server. Every system is different and the files stored are different. This is where I would like to ask you to share your experience related to NTFS allocation unit size. If you do not agree with any of the above suggestions, leave a comment with a reference and I will modify the list. Please note that the above list was prepared assuming SQL Server is the only application running on the computer system. The next question is whether all of this is still relevant for SSDs – I personally have no experience with SSDs and large databases, so I will refrain from commenting. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • OpenGL Performance Questions

    - by Daniel
    This subject, as with any optimisation problem, gets hit on a lot, but I just couldn't find what I (think) I want. A lot of tutorials, and even SO questions, have similar tips, generally covering:
    Use GL face culling (the OpenGL function, not the scene logic)
    Only send 1 matrix to the GPU (projectionModelView combination), therefore decreasing the MVP calculations from per vertex to once per model (as it should be)
    Use interleaved vertices
    Minimize as many GL calls as possible, batch where appropriate
    And possibly a few/many others. I am (for curiosity reasons) rendering 28 million triangles in my application using several vertex buffers. I have tried all the above techniques (to the best of my knowledge) and received almost no performance change. Whilst I am receiving around 40 FPS in my implementation, which is by no means problematic, I am still curious as to where these optimisation 'tips' actually come into use. My CPU is idling at around 20-50% during rendering, therefore I assume I am GPU bound. Note: I am looking into gDEBugger at the moment. Cross posted at StackOverflow

    Read the article

  • Service and/or tool to monitor performance?

    - by chris
    I am seeing wildly different performance from a client's web site, and would like to set up some sort of monitoring. What I'm looking for is a service that will issue requests to a couple of URLs and report on the time it took to process the page - TTFB and time to download the entire page - which means I need something that will process JavaScript & CSS. Are there services like this? I've seen a few that monitor uptime, but they don't seem to report on overall page performance.
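    For the TTFB half of this, even plain curl gives a rough number (it will not execute JavaScript or fetch CSS, so it is only a partial answer); the URL is a placeholder:

        curl -o /dev/null -s -w 'TTFB: %{time_starttransfer}s  total: %{time_total}s\n' https://example.com/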

    Read the article
