Search Results

Search found 20904 results on 837 pages for 'disk performance'.

  • Ubuntu installation does not recognize drive partitioning

    - by Woltan
    I have a 1TB drive and installed Windows 7 on a 128GB partition. When I now try to install Ubuntu 11.04 it does not recognize the Windows partition but offers the complete 1TB drive to install Ubuntu on instead. It displays: [screenshot attached in the original post]. However, in the Ubuntu Disk Utility the Windows partitions are recognized. What do I need to do in order for Ubuntu to recognize the Windows 7 partition and install Ubuntu as a dual boot?

    Response to comments: the following commands were executed and the results are shown below.

        fdisk -l

        WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x34a38165

        Device Boot  Start  End    Blocks     Id  System
        /dev/sda1 *  1      13     102400     7   HPFS/NTFS
        Partition 1 does not end on cylinder boundary.
        /dev/sda2    13     16318  130969600  7   HPFS/NTFS

        Disk /dev/sdb: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x14a714a6

        Device Boot  Start  End    Blocks     Id  System
        /dev/sdb1    1      60801  488384001  83  Linux

        parted -l

        Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only.
        Error: /dev/sr0: unrecognised disk label
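
    The fdisk warning itself names the way forward: a GPT-labelled disk has to be inspected with GPT-aware tools. A minimal diagnostic sketch; parted ships with Ubuntu, while gdisk is an assumption (it may need installing):

        # GNU Parted understands GPT where fdisk does not
        sudo parted /dev/sda print

        # gdisk (a GPT-aware fdisk) also reports whether the disk carries a GPT,
        # an MBR, or conflicting leftovers of both -- a common cause of the
        # installer and Disk Utility disagreeing about the partitions
        sudo gdisk -l /dev/sda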

  • Wubi installation broken by update - now unable to mount

    - by Outspaced
    As of today, my Wubi installation of Ubuntu won't boot (it goes straight to a minimal bash-like interface) and I am unable to mount it when I boot straight to Ubuntu. If I boot straight into Ubuntu (not using Wubi, not going via Windows), I am able to mount the Windows partition and see the Wubi partition there:

        sudo mount -t ntfs /dev/sda5 /media/winxp

    And from there I can see the Wubi root disk:

        /media/winxp/ubuntu/disks/root.disk

    But if I try to mount this:

        sudo mount -o loop /media/winxp/ubuntu/disks/root.disk /media/wubi

    I get an error:

        /media/winxp/ubuntu/disks/root.disk Input/Output Error

    If I then try fsck:

        sudo fsck /media/winxp/ubuntu/disks/root.disk

    I get this:

        Input/Output Error while trying to open /media/winxp/ubuntu/disks/root.disk
        The superblock could not be read or does not describe a correct ext2 filesystem.
        If the device is valid and it really contains an ext2 filesystem (and not swap
        or ufs or something else), then the superblock is corrupt, and you might want
        to try running e2fsck with an alternate superblock:
            e2fsck -b 8193 <device>

    ... this gives me the same result. There is data on this partition that I really need to be able to access, so I can't delete and reinstall. Any help much appreciated. Thanks.
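
    Following the fsck message's own suggestion, the usual next step is to try the backup superblocks. A minimal sketch, assuming root.disk holds an ext3/ext4 image; note the -b offsets depend on block size, so 8193 only applies to 1K blocks, while 32768 is typical for 4K blocks:

        IMG=/media/winxp/ubuntu/disks/root.disk

        # ask mke2fs where the backup superblocks would live; -n is a dry run and
        # writes nothing (it may still ask for confirmation on a non-block device)
        sudo mke2fs -n "$IMG"

        # then point e2fsck at one of the reported backup locations
        sudo e2fsck -b 32768 "$IMG"

    If every backup gives the same Input/Output Error, the damage may lie in the underlying NTFS file rather than in the ext filesystem inside it, which is worth checking from Windows (chkdsk) first.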

  • Getting file system free space

    - by Fred Riley
    This isn't a problem as such, more a request for information based on ignorance of the Linux filesystem. The very short question is: how do I find out how much free and used space there is on the volume from which Ubuntu is running?

    More detail: I'm running Ubuntu 12.04 from a 64GB USB3 stick, created from booting up a year-old Ubuntu 12.04 DVD and running Startup Disk Creator. The reason for this is that the Master Boot Record on my hard disk, holding Windoze 7, has gone belly-up, and whilst awaiting a recovery disk I'm running Ubuntu off USB or DVD as a 'trial'. (And will continue to run Ubuntu after restoring Windoze, as I've rediscovered my love of the penguin :o))

    After installing Ubuntu on the stick I ran the software update app, which downloaded some 450MB of updates and took a couple of hours to install to the stick. A couple of times I got a message saying that disk space was short. So I looked in the file manager (or whatever it's called these days) and couldn't see the stick listed, just:

      - SYSTEM
      - hard disk (listed as 479GB Filesystem)
      - two other partitions that had been created by Windoze
      - a "4.3GB Filesystem" which, when I try to open it, gives the error "Could not find /cow", and when I try to unmount it tells me I can't because it's not mounted - D'OH!!

    Edit: screenshot of file manager
    Edit: screenshot of low disk space warning

    What I can't see is the USB stick from which I'm running Ubuntu. Where's it gone, anybody know? This is tangentially related to a previous question of mine about system tools, in that I'm trying to get control and knowledge of the system in the newest incarnation of Ubuntu.
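
    For the short question, df answers it directly. A minimal sketch; which mount point represents the stick depends on how the live session is assembled, and the "Could not find /cow" error suggests a union overlay, which is what a live/persistent USB session mounts at /:

        df -h /        # free and used space on the volume Ubuntu is running from
        df -h          # every mounted filesystem, with sizes and mount points
        mount | grep -iE 'cow|overlay|aufs'   # how the session's writable layer is put together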

  • Win7 no longer available after installing 12.04

    - by Michael
    I have installed Ubuntu 12.04 but my Windows 7 partition seems to have been lost. It is in sda2. Can anyone help me get this Windows 7 partition back without having to reinstall Windows 7?

        Disk /dev/sda: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xd45cd45c

        Device Boot  Start      End        Blocks     Id  System
        /dev/sda1    2048       61433855   30715904   83  Linux
        /dev/sda2 *  61433856   122873855  30720000   7   HPFS/NTFS/exFAT
        /dev/sda3    122873856  976769023  426947584  7   HPFS/NTFS/exFAT

        Disk /dev/sdb: 203.9 GB, 203928109056 bytes
        255 heads, 63 sectors/track, 24792 cylinders, total 398297088 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x03ee03ee

        Device Boot  Start      End        Blocks      Id  System
        /dev/sdb1 *  63         20482874   10241406    c   W95 FAT32 (LBA)
        /dev/sdb2    20482875   40965749   10241437+   1c  Hidden W95 FAT32 (LBA)
        /dev/sdb3    40965750   398283479  178658865   f   W95 Ext'd (LBA)
        /dev/sdb5    40965813   76694309   17864248+   7   HPFS/NTFS/exFAT
        /dev/sdb6    76694373   108856439  16081033+   7   HPFS/NTFS/exFAT
        /dev/sdb7    108856503  398283479  144713488+  7   HPFS/NTFS/exFAT

        Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
        240 heads, 63 sectors/track, 129201 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000001

        Device Boot  Start      End         Blocks      Id  System
        /dev/sdc1 *  63         20480543    10240240+   82  Linux swap / Solaris
        /dev/sdc2    20480605   1953519119  966519257+  f   W95 Ext'd (LBA)
        /dev/sdc5    20480607   1953519119  966519256+  7   HPFS/NTFS/exFAT
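
    The partition table still shows an NTFS partition at /dev/sda2 with the boot flag set, so the installation may merely be missing from the boot menu rather than gone. A hedged sketch for checking whether the data survived and letting GRUB pick it up again:

        sudo blkid /dev/sda2                   # intact? it should still report TYPE="ntfs"
        sudo mount -t ntfs -o ro /dev/sda2 /mnt && ls /mnt   # read-only peek at the Windows files
        sudo update-grub                       # if they are there, os-prober should re-add a Windows 7 entry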

  • Announcing Two Papers Addressing the RPAS Fusion Client

    - by Oracle Retail Documentation Team
    Oracle Retail has published two documents to My Oracle Support addressing the Retail Predictive Application Server (RPAS) Fusion Client, a web-based rich client developed using the latest Oracle Application Development Framework (ADF). The Fusion Client provides an enhanced user experience for communicating with the RPAS server.

    Oracle Retail Predictive Application Server Fusion Client Getting Started Guide (Doc ID 1492759.1)
    The Retail Predictive Application Server (RPAS) is a configurable platform that provides capabilities such as a multidimensional database structure, batch and online processing, a configurable user interface, a configurable calculation engine, user security, and utility functions such as importing and exporting, all on a highly scalable technical environment that can be deployed on a variety of hardware. This paper addresses typical questions that arise during setup and deployment of the Fusion Client, provides performance recommendations, and highlights the differences between the Classic Client and the Fusion Client.

    Oracle Retail RPAS Fusion Client Performance Issue Report (Doc ID 1493747.1)
    Performance issues can be frustrating for customers, and Oracle Retail will strive to assist you as you attempt to enhance the performance of your systems. To ensure the timeliest processing of your issue, retailers and partners are encouraged to respond as thoroughly as possible to each question in this document, which can be sent back for analysis by logging a Service Request and following typical Customer Support processes. The sections of the document solicit information about the following:

      - Performance Issue Description
      - Performance Issue Details
      - System Configuration Data
      - Application Configuration Data
      - Performance Log Files

  • Dual Boot Windows 8 and Ubuntu

    - by Nick
    My laptop has two hard drives, one 320GB HDD and a 30GB SSD. I installed Windows 8 on the HDD and Ubuntu on the SSD. However, after I installed Ubuntu, Windows 8 did not appear in the boot list. I tried boot-repair, but this didn't help. Here is the output of my fdisk -l:

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x6cd9314a

        Device Boot  Start  End        Blocks     Id  System
        /dev/sda1 *  2048   625139711  312568832  7   HPFS/NTFS/exFAT

        Disk /dev/sdb: 30.0 GB, 30016659456 bytes
        255 heads, 63 sectors/track, 3649 cylinders, total 58626288 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x6cd93132

        Device Boot  Start    End       Blocks    Id  System
        /dev/sdb1 *  2048     207126    102539+   83  Linux
        /dev/sdb2    208894   58626047  29208577  5   Extended
        /dev/sdb5    208896   4112383   1951744   82  Linux swap / Solaris
        /dev/sdb6    4114432  58626047  27255808  83  Linux

        Disk /dev/mmcblk0: 3965 MB, 3965190144 bytes
        49 heads, 48 sectors/track, 3292 cylinders, total 7744512 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0009c694

        Device Boot       Start  End      Blocks   Id  System
        /dev/mmcblk0p1 *  8192   7744511  3868160  b   W95 FAT32

    I also tried sudo grub-update, but that also did nothing.
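
    One detail worth flagging: the command is update-grub, not grub-update, so that last step may simply never have run. A minimal sketch of redoing the OS detection by hand, assuming a BIOS/MBR setup as this fdisk output suggests (a UEFI Windows 8 install would not be detected this way):

        dpkg -l os-prober     # os-prober is what update-grub uses to find Windows
        sudo os-prober        # should print a "Windows 8 (loader)" line for /dev/sda1
        sudo update-grub      # regenerates /boot/grub/grub.cfg from those results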

  • How do I get 12.04 to recognize swap partition so that I can hibernate?

    - by Kayla
    I just installed 12.04 and used GParted to erase and enlarge my swap partition. When I rebooted, GParted said that the file system of the swap partition was unknown. GParted doesn't let me change the file system to "linux-swap". It does let me change it to NTFS, but when I reboot, it goes back to "unknown". Thanks in advance for your help.

    Output from sudo swapon -s:

        Filename                Type       Size     Used  Priority
        /dev/mapper/cryptswap1  partition  9025532  0     -1

    Output from sudo fdisk -l:

        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x9d63ac84

        Device Boot  Start      End        Blocks     Id  System
        /dev/sda1 *  2048       2459647    1228800    7   HPFS/NTFS/exFAT
        /dev/sda2    2459648    197836472  97688412+  7   HPFS/NTFS/exFAT
        /dev/sda3    466890752  488395119  10752184   7   HPFS/NTFS/exFAT
        /dev/sda4    197836798  466890751  134526977  5   Extended
        /dev/sda5    197836800  448837631  125500416  83  Linux
        /dev/sda6    448839680  466890751  9025536    82  Linux swap / Solaris

        Partition table entries are not in disk order

        Disk /dev/mapper/cryptswap1: 9242 MB, 9242148864 bytes
        255 heads, 63 sectors/track, 1123 cylinders, total 18051072 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x951b7f53

        Disk /dev/mapper/cryptswap1 doesn't contain a valid partition table
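
    The swapon output is the clue: swap is active through /dev/mapper/cryptswap1, i.e. the installer set up encrypted swap on /dev/sda6. GParted reports "unknown" because the partition holds encrypted data rather than a plain linux-swap signature, which is expected. A hedged sketch for confirming the setup; note that encrypted swap keyed from random data (the Ubuntu default) cannot be resumed from, so hibernation generally needs a differently configured swap:

        cat /etc/crypttab      # should map cryptswap1 onto /dev/sda6 (or its UUID)
        grep swap /etc/fstab   # should reference /dev/mapper/cryptswap1, not /dev/sda6
        sudo swapon -s         # confirms the mapper device is the active swap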

  • How can I get rid of the motd message "*** /dev/sdb1 will be checked for errors at next reboot ***"? [duplicate]

    - by kmm
    This question already has an answer here: "Persistent 'disk will be checked...' in the message of the day (motd) even after reboot" (3 answers).

    My motd persistently has:

        *** /dev/sdb1 will be checked for errors at next reboot ***

    The problem is that I don't have /dev/sdb1 on my system. I only have /dev/sdb2 (mounted as /) and /dev/sda1, which mounts to /media/backup. I delete that line from /etc/motd, but it reappears after reboot. Here's my df output:

        Filesystem  Size  Used  Avail  Use%  Mounted on
        /dev/sdb2   73G   3.7G  66G    6%    /
        udev        490M  4.0K  490M   1%    /dev
        tmpfs       200M  760K  199M   1%    /run
        none        5.0M  0     5.0M   0%    /run/lock
        none        498M  0     498M   0%    /run/shm
        /dev/sda1   1.9T  429G  1.4T   25%   /media/backup

    Update: here is the output of sudo fdisk -l

        Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0003dfc2

        Device Boot  Start      End         Blocks      Id  System
        /dev/sda1    63         3907024064  1953512001  83  Linux

        Disk /dev/sdb: 80.0 GB, 80026361856 bytes
        255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00049068

        Device Boot  Start      End        Blocks    Id  System
        /dev/sdb1    152301568  156301311  1999872   82  Linux swap / Solaris
        /dev/sdb2 *  2048       152301567  76149760  83  Linux

        Partition table entries are not in disk order

    I guess /dev/sdb1 is my swap space.
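
    On Ubuntu the motd is reassembled at login from scripts under /etc/update-motd.d/, which is why hand-editing /etc/motd doesn't stick. A sketch for locating the script behind the stale line and clearing its cache; the cache path is the stock update-notifier one and is an assumption if your install differs:

        # which motd fragment produces the message?
        grep -rl "checked for errors" /etc/update-motd.d/ /usr/lib/update-notifier/ 2>/dev/null

        # the stock 98-fsck-at-reboot script caches its verdict; removing the
        # cache forces a fresh check on the next login (path is an assumption)
        sudo rm -f /var/lib/update-notifier/fsck-at-reboot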

  • ANTS CLR and Memory Profiler In Depth Review (Part 1 of 2 - CLR Profiler)

    - by ToStringTheory
    One of the things that people might not know about me is my obsession with making my code as efficient as possible. Many people might not realize how much of an undertaking that can be - it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind. In trying to make code efficient, there are many different factors that play a part: size of project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code - the list can go on for quite some time. I spend quite a bit of time when developing trying to determine the best way to implement a feature with the efficiency I look to achieve. One program that I have recently come to learn about - Red Gate's ANTS Performance (CLR) and Memory profiler - gives me tools to accomplish that job more efficiently as well. In this review, I am going to cover some of the features of the ANTS profiler set by compiling some hideous example code to test against.

    Notice
    As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products in exchange for a free license to the program. I have not let this affect my opinions of the product in any way, and neither Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product in any way.

    Introduction
    The ANTS Profiler pack provided by Red Gate was something that I had not heard of before receiving an email regarding an offer to review it for a license. Since I look to make my code efficient, it was a no-brainer for me to try it out! One thing that I have to say took me by surprise is that upon downloading and installing the program, you fill out a form with your usual contact information. Sure enough, within 2 hours I received an email from a sales representative at Red Gate asking if she could help me achieve the most out of my trial time so it wouldn't go to waste. After replying and explaining that I was looking to review its feature set, she put me in contact with someone who set up a demo session to give me a quick rundown of its features via an online meeting. After having dealt with a massive ordeal with one of my utility companies and their complete lack of customer service, Red Gate's friendly and helpful representatives were a breath of fresh air, and something I was thankful for.

    ANTS CLR Profiler
    The ANTS CLR profiler is the thing I want to focus on the most in this post, so I am going to dive right in now. Install was simple and took no time at all. It installed both the CLR and Memory profilers, as well as Visual Studio extensions to facilitate their use (click any images for full size):

        [Image: The Visual Studio menu options (under the ANTS menu)]
        [Image: Starting the CLR Performance Profiler from the Start menu yields this window]

    If you follow the instructions after launching the program from the Start menu (click File > New Profiling Session to start a new project), you are given a dialog with plenty of options for profiling:

        [Image: The New Session dialog - lots of options]

    One thing I noticed is that the buttons in the lower right were half-covered by the panel of the application. If I had to guess, I would imagine this is caused by my DPI setting of 125%. This is a problem I have seen in other applications as well that don't scale well to different DPI scales.
    The profiler gives you the ability to profile:

      - .NET executable
      - ASP.NET web application (hosted in IIS)
      - ASP.NET web application (hosted in IIS Express)
      - ASP.NET web application (hosted in the Cassini Web Development Server)
      - SharePoint web application (hosted in IIS)
      - Silverlight 4+ application
      - Windows Service
      - COM+ server
      - XBAP (local XAML browser application)
      - Attach to an already running .NET 4 process

    Choosing each option provides a varying set of other variables/options that one can set, including application arguments, operating path, whether to record I/O performance, which performance counters to record (43 counters in all!), etc. All in all, they give you the ability to profile many different .NET project types, and make it simple to do so. In most cases I would use the built-in Visual Studio extensions, as they automatically start a new profiling project in ANTS with the options set up and start your program; however, Red Gate has made it easy enough to profile outside of Visual Studio as well.

    On the flip side, as someone who lives most of their work life in Visual Studio, one thing I do wish is that instead of opening an entirely separate application/GUI after launching, they would provide a Visual Studio panel with the information and integrate more of the profiling project into Visual Studio. So, now that we have an idea of what options the profiler gives us, it's time to test its abilities and features.

    Horrendous Example Code - Prime Number Generator
    One of my interests besides development is physics and math - what I went to college for. I have especially always been interested in prime numbers, as they are something of a mystery. So, to test the abilities of the profiler, I decided to write a small program, website, and library to generate prime numbers in the quantity that you ask for. I am going to start off with some terrible code, and show how I would see the profiler being used as a development tool.

    First off, the IPrimes interface (all code is downloadable at the end of the post):

        interface IPrimes
        {
            IEnumerable<int> GetPrimes(int retrieve);
        }

    Simple enough, right? Anything that implements the interface will (hopefully) provide an IEnumerable of int, with the quantity specified in the parameter argument. Next, I am going to implement this interface in the most basic way:

        public class DumbPrimes : IPrimes
        {
            public IEnumerable<int> GetPrimes(int retrieve)
            {
                //store a list of primes already found
                var _foundPrimes = new List<int>() { 2, 3 };

                //if i ask for 1 or two primes, return what asked for
                if (retrieve <= _foundPrimes.Count())
                    return _foundPrimes.Take(retrieve);

                //the next number to look at
                int _analyzing = 4;

                //since I already determined I don't have enough,
                //execute at least once, and until quantity is sufficed
                do
                {
                    //assume prime until otherwise determined
                    bool isPrime = true;

                    //start dividing at 2;
                    //divide until number is reached, or determined not prime
                    for (int i = 2; i < _analyzing && isPrime; i++)
                    {
                        //if (i) goes into _analyzing without a remainder,
                        //_analyzing is NOT prime
                        if (_analyzing % i == 0)
                            isPrime = false;
                    }

                    //if it is prime, add to found list
                    if (isPrime)
                        _foundPrimes.Add(_analyzing);

                    //increment number to analyze next
                    _analyzing++;
                } while (_foundPrimes.Count() < retrieve);

                return _foundPrimes;
            }
        }

    This is the simplest way to get primes, in my opinion:
    checking each number by the straight definition of a prime - is it divisible by anything besides 1 and itself? I have included this code in a base class library for my solution, as I am going to use it to demonstrate a couple of features of ANTS. This class library is consumed by a simple non-MVVM WPF application and a simple MVC4 website. I will not post the WPF code here inline, as it is simply an ObservableCollection<int>, a label, two textboxes, and a button.

    Starting a new Profiling Session
    So, in Visual Studio, I have just completed my first stint developing the GUI and the DumbPrimes IPrimes class, and now I want to check my code's efficiency by profiling it. All I have to do is build the solution (surprised that initiating a profiling session doesn't do this, but I suppose I can understand it), and then click the ANTS menu, followed by Profile Performance. I am then greeted by the profiler starting up and already monitoring my program live. You are provided with a real-time graph at the top, and a pane at the bottom giving you information on how to proceed. I am going to start by asking my program to show me the first 15000 primes.

    After the program finally began responding again (I did all the work on the main UI thread - how bad!), I stopped the profiler, which did kill the process of my program too. One important thing to note is that the profiler by default wants to give you a lot of detail about the operation - line hit counts, time per line, percent time per line, etc. The important thing to remember is that this itself takes a lot of time. When running my program without the profiler attached, it can generate the 15000 primes in 5.18 seconds, compared to 74.5 seconds with it - almost a 1500 percent increase. While this may seem like a lot, remember that there is a trade-off: execution is far slower, but I am able to drill down, make improvements to specific problem areas, and then decrease execution time all around.

    Analyzing the Profiling Session
    After clicking 'Stop Profiling', the process running my application stopped, and the entire execution time was automatically selected by ANTS, with the results shown below. There are a number of interesting things going on here, and I am going to cover each in a section of its own.

    Real-Time Performance Counter Bar (top of screen)
    At the top of the screen is the real-time performance bar. As your application is running, this constantly updates with the status of the currently selected performance counters. A couple of cool things to note: you can drag a selection around specific time periods to drill the detail views in the lower two panels down to information pertaining only to that period. After selecting a time period, you can bookmark a section and name it, so that it is easy to find later, or after being reloaded at a later time. You can also zoom in, out, or fit the graph to the space provided - useful for drilling down.

    It may be hard to see, but at the top of the processor-time graph, below the time ticks and above the red usage graph, there is a green bar. This bar shows at what times a method selected in the 'Call tree' panel is called. Very cool to be able to click on a method and see at what times it made an impact. As I said before, ANTS provides 43 different performance counters you can hook into.
    Clicking the arrow next to the Performance tab at the top allows you to change between different counters, if you have them selected.

    Method Call Tree, ADO.NET Database Calls, File I/O - Detail Panel
    Red Gate really hit the mark here, I think. When you select a section of the run in the graph, the call tree populates with a hierarchical tree of method calls, with information regarding each method. By default, methods are hidden where the source is not provided (framework-type code); however, Red Gate has integrated Reflector into ANTS, so even if you don't have source for something, you can select a method and get the source if you want. Methods are also hidden where the impact is seen as insignificant - methods that execute for only 1% of the overall calling method's time; in other words, not where your efforts should be focused. Smart!

    Source Panel - Detail Panel
    The source panel is where you can see line-level information on your code, showing the code for the currently selected method from the method call tree. If the code is not available, Reflector takes care of it and shows the code anyway!

    As you may notice, there does seem to be a problem with how ANTS determines the actual line on which a call is completed. I have suspicions that this may be due to some of the inline code optimizations that the CLR applies upon compilation of the assembly. In a method with comments, the problem is much more severe: apparently the most offending code in my base library was a comment - *gasp*! Removing the comments does help quite a bit; however, I hope that Red Gate works on their counter algorithm soon to improve the logic on positioning the statistics. I did a small test just to demonstrate that the lines are correct without comments. For me, it isn't a deal breaker, as I can usually determine the correct placement by looking at the application code in the region and deciding what makes sense, but it is something that would probably build up some irritation with time.

    Feature - Suggest Method for Optimization
    A neat feature, to really help those in need of a pointer, is the menu option under Tools to automatically suggest methods to optimize/improve. Nice feature - clicking it filters the call tree and stars methods that it thinks are good candidates for optimization. I do wish that they would have made it more visible for those of us who aren't great at spotting things on sight.

    Process Integration
    I do think that this could have a place in my process. After experimenting with the profiler, I think it would be of great benefit to do some development and testing, and then, after all the bugs are worked out, use the profiler to make sure nothing is hogging more than its fair share. For example, with this program I would have developed it, run it, tested it - it works, but slowly. After looking at the profiler and seeing the massive amount of time spent in one method, I might go ahead and try to re-implement IPrimes (I would actually probably rewrite the offending code, but so that I can distribute both sets of code easily, I'm just going to make another implementation of IPrimes).
    Using two pieces of knowledge about prime numbers can make this method MUCH more efficient: primes greater than 3 fall into the two buckets 6k+/-1, and a number is prime if it is not divisible by any prime before it:

        public class SmartPrimes : IPrimes
        {
            public IEnumerable<int> GetPrimes(int retrieve)
            {
                //store a list of primes already found
                var _foundPrimes = new List<int>() { 2, 3 };

                //if i ask for 1 or two primes, return what asked for
                if (retrieve <= _foundPrimes.Count())
                    return _foundPrimes.Take(retrieve);

                //the next k to look at
                int _k = 1;

                //since I already determined I don't have enough,
                //execute at least once, and until quantity is sufficed
                do
                {
                    //assume prime until otherwise determined
                    bool isPrime = true;
                    int potentialPrime;

                    //analyze 6k-1
                    //assign the value to potential
                    potentialPrime = 6 * _k - 1;

                    //if any already-found prime divides this, it is NOT prime
                    //using PLINQ for a quick boost
                    isPrime = !_foundPrimes.AsParallel()
                                           .Any(prime => potentialPrime % prime == 0);

                    //if it is prime, add to found list
                    if (isPrime)
                        _foundPrimes.Add(potentialPrime);

                    if (_foundPrimes.Count() == retrieve)
                        break;

                    //analyze 6k+1
                    //assign the value to potential
                    potentialPrime = 6 * _k + 1;

                    //if any already-found prime divides this, it is NOT prime
                    //using PLINQ for a quick boost
                    isPrime = !_foundPrimes.AsParallel()
                                           .Any(prime => potentialPrime % prime == 0);

                    //if it is prime, add to found list
                    if (isPrime)
                        _foundPrimes.Add(potentialPrime);

                    //increment k to analyze next
                    _k++;
                } while (_foundPrimes.Count() < retrieve);

                return _foundPrimes;
            }
        }

    Now there are definitely more things I can do to make this more efficient, but for the scope of this example, I think this is fine (though still hideous)!

    Profiling this now yields a happy surprise: 27 seconds to generate the 15000 primes with the profiler attached, and only 1.43 seconds without. One important thing I wanted to call out, though, is the performance graph. Notice anything odd? The %Processor time is above 100%. This is because more than one core is now in the operation. A better label for the chart, in my mind, would have been %Core time, but to each their own. Another odd thing I noticed was that the profiler seemed to be spot on this time with the line details in my DumbPrimes source, even with comments. Odd.

    Profiling Web Applications
    The last thing I wanted to cover, and one that means a lot to me as a web developer, is the great amount of work Red Gate put into profiling web applications. In my solution, I have a simple MVC4 application set up with one page and a single input form that outputs prime values as my WPF app did. Launching the profiler from Visual Studio as before, nothing is really different in the profiler window, though I did receive a UAC prompt for a Red Gate helper app to integrate with the web server, without notification. After requesting 500, 1000, 2000, and 5000 primes and looking at the profiler session, things are slightly different from before: there are 4 spikes of activity in the processor-time graph, but there is also something new in the call tree. That's right - ANTS will actually group method calls by GET/POST operations, so it is easier to find out what action/page is giving the largest problems. Pretty cool in my mind!

    Overview
    Overall, I think that the Red Gate ANTS CLR Profiler has a lot to offer, but I also think it has a long way to go.
    3 Biggest Pros:
      - Ability to easily drill down from the time graph, to method calls, to source code
      - Wide variety of counters to choose from when profiling your application
      - Excellent integration/grouping of methods called from web applications by request - BRILLIANT!

    3 Biggest Cons:
      - The issue regarding line details in the source view
      - Nit pick - Processor time vs. Core time
      - Nit pick - lack of full integration with Visual Studio

    Ratings
      - Ease of Use (7/10) - I marked down here because of the problems with the line-level details and the extra work that entails, and the lack of better integration with Visual Studio.
      - Effectiveness (10/10) - I believe that the profiler does EXACTLY what it purports to do, especially with its large variety of performance counters - a definite plus!
      - Features (9/10) - Besides the real-time performance monitoring and the drill-downs I've shown here, ANTS also has great integration with ADO.NET, with the ability to show database queries run by your application in the profiler. This, with the line-level details, the web request grouping, Reflector integration, and various options to customize your profiling session, makes a great set of features!
      - Customer Service (10/10) - My entire experience with Red Gate personnel has been nothing but good. Their people are friendly, helpful, and happy!
      - UI / UX (8/10) - The interface is very easy to get around, and all of the options are easy to find. With a little bit of poking around, you'll be optimizing Hello World in no time flat!
      - Overall (8/10) - Overall, I am happy with the Performance Profiler and its features, as well as with the service I received from the Red Gate personnel. I WOULD recommend you try the application and see if it fits into your process, BUT remember there are still some kinks to be worked out.

    My next post will definitely be shorter (hopefully), but thank you for reading up to here, or for skipping ahead! Please, if you do try the product, drop me a message and let me know what you think! I would love to hear any opinions you may have on the product.

    Code
    Feel free to download the code I used above - download via DropBox

  • Performance using T-SQL PIVOT vs SSIS PIVOT Transformation Component.

    - by Nev_Rahd
    Hi, I am in the process of building a dimension from an EDW (source), and I need to pivot source columns to load the dimension. Currently most of the pivoting I do uses T-SQL PIVOT, which is then used in my SSIS package to merge with the Dim table. The same pivoting can also be achieved with the SSIS PIVOT Transformation component. In regards to performance, which approach would be best? Thanks

  • How does changing armv6/armv7 architecture to armv6 affect my iPad app? Will there be performance/stability issues?

    - by Flocked
    Hello, I need to change the architectures of "Any iPhone OS Device" from "Optimized (armv6 armv7)" to "Standard (armv6)" for a library. I'm not exactly sure what effect this will have on the performance and stability of my iPad app. If I understand it right, the iPad has the armv7 architecture. I'm not so familiar with architectures, so I don't know what it means.

  • Is there any performance advantage to using DirectorySearcher over SearchRequest for LDAP queries?

    - by WayneC
    I understand that System.DirectoryServices is a "layer above" System.DirectoryServices.Protocols and abstracts some of the complexity. Are there any other advantages, performance or otherwise, to using System.DirectoryServices.DirectorySearcher vs. System.DirectoryServices.Protocols.SearchRequest for LDAP queries from .NET. What criteria would cause you to use one approach over the other?

  • The Spotlight is on You

    - by Claudia McDonald
    On the field or off the field, in ballet slippers or singing your heart out on stage, offering a stellar performance every time is key to holding the attention of your audience and having them come back hungry for more. Similarly, showing up to a new business meeting wearing pink tights and a tutu might be one way of getting your customer's attention, but offering them an unmatched and ground-breaking software solution certainly will get their attention!

    Simply put, the Oracle Exastack program enables both ISVs and OEMs to rapidly build and deliver faster, more reliable applications. It comes as no surprise that the success of the Oracle Exastack program is centered on establishing Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud as the highest-performance, lowest-cost platforms available in the industry today. But here is where the real standing-ovation-worthy facts come in. The Oracle Exadata Database Machine is the only database machine that provides extreme performance for both data warehousing and online transaction processing (OLTP) workloads, making it the ideal platform for consolidating onto private clouds. The Oracle Exalogic Elastic Cloud, meanwhile, is an engineered hardware and software system tested and tuned by Oracle to provide the best foundation for cloud computing, while allowing Java applications, Oracle Applications, and other enterprise applications to run with extreme performance. And the crowd goes wild, ladies and gentlemen!

    In just four months, our partners have already achieved over 150 Oracle Exastack Ready milestones for Oracle Solaris, Oracle Linux, Oracle Database, and Oracle WebLogic Server. As Judson has said, "With the Oracle Exastack program, Oracle is helping partners test, tune and optimize their applications to deliver optimal performance and reliability, accelerating innovation and delivering superior value to customers." And get this: not only are their applications running faster and more efficiently, they are actually being delivered to customers at a lower cost than ever before - extreme performance well deserving of 3 consecutive arabesques!

    If you haven't already, check out what some of our partners are saying about the Oracle Exastack program in this video, and find out all that is available to you today. By participating in the Oracle Exastack program, partners now have the ability to achieve Oracle Exadata Optimized, Oracle Exalogic Optimized, Oracle Exadata Ready, and Oracle Exalogic Ready status for their solutions. New Oracle Exastack labs provide OPN members with access to Oracle technical resources, on-premise facilities, and remote lab environments. With Oracle Exastack Optimized, partners deliver faster and more reliable applications on the Oracle Exadata Database Machine, as well as the long-awaited Oracle Exalogic Elastic Cloud. Savvy OPN members are leveraging the Oracle Exastack Optimized program toward their advancement to Platinum or Diamond level in OPN. Partners achieving Oracle Exadata Ready and Oracle Exalogic Ready gain a competitive advantage, signaling to customers that their applications readily support Oracle Exadata Database Machine or Oracle Exalogic Elastic Cloud to deliver extreme performance.

    Get your dancing shoes on,
    The OPN Communications Team

  • Is Minus Zero some sort of JavaScript performance trick?

    - by James Wiseman
    Looking in the jQuery core I found the following code convention:

        nth: function(elem, i, match){
            return match[3] - 0 === i;
        },

    And I was really curious about the snippet match[3] - 0. Hunting around for '-0' on Google isn't too productive, and a search for 'minus zero' brings back a reference to a Bob Dylan song. So, can anyone tell me: is this some sort of performance trick, or is there a reason for doing this rather than a parseInt or parseFloat? Thanks

  • Transaction on Entity Framework: refactoring and best performance - how can I?

    - by programmerist
    I am trying to use a transaction with Entity Framework. I have 3 tables: Personel, Prim, and Finans. In the Prim table, SatisTutari is an int; if a float value is entered in SatisTutari.Text instead of an int, the transaction must run! Everything is OK, but how can I refactor this, or get the best performance, or the best way of writing the transaction code?

    I have 3 tables, so I have 3 entities:

        CREATE TABLE Personel (
            PersonelID integer PRIMARY KEY identity not null,
            Ad varchar(30),
            Soyad varchar(30),
            Meslek varchar(100),
            DogumTarihi datetime,
            DogumYeri nvarchar(100),
            PirimToplami float);
        GO
        CREATE TABLE Prim (
            PrimID integer PRIMARY KEY identity not null,
            PersonelID integer FOREIGN KEY references Personel(PersonelID),
            SatisTutari int,
            Prim float,
            SatisTarihi Datetime);
        GO
        CREATE TABLE Finans (
            ID integer PRIMARY KEY identity not null,
            Tutar float);

    Personel, Prim, and Finans are my tables. If you look at the Prim table you can see Prim is a float value; if I enter a non-float value in the textbox, my transaction must run.

        protected void btnSave_Click(object sender, EventArgs e)
        {
            using (TestEntities testCtx = new TestEntities())
            {
                using (TransactionScope scope = new TransactionScope())
                {
                    Personel personel = new Personel();
                    Prim prim = new Prim();
                    Finans finans = new Finans();

                    //------------------------------------------------- Step 1
                    personel.Ad = txtName.Text;
                    personel.Soyad = txtSurName.Text;
                    personel.Meslek = txtMeslek.Text;
                    personel.DogumTarihi = DateTime.Parse(txtSatisTarihi.Text);
                    personel.DogumYeri = txtDogumYeri.Text;
                    personel.PirimToplami = float.Parse(txtPrimToplami.Text);
                    testCtx.AddToPersonel(personel);
                    testCtx.SaveChanges();

                    //------------------------------------------------- Step 2
                    prim.PersonelID = personel.PersonelID;
                    prim.SatisTutari = int.Parse(txtSatisTutari.Text);
                    prim.SatisTarihi = DateTime.Parse(txtSatisTarihi.Text);
                    prim.Prim1 = double.Parse(txtPrim.Text);
                    finans.Tutar = prim.SatisTutari * prim.Prim1;
                    testCtx.AddToPrim(prim);
                    testCtx.SaveChanges();

                    //------------------------------------------------- Step 3
                    lblTutar.Text = finans.Tutar.Value.ToString();
                    testCtx.AddToFinans(finans);
                    testCtx.SaveChanges();

                    scope.Complete();
                }
            }
        }

    How can I rearrange this code? I need best-practice refactoring and the best solution for easy reading and performance!

  • How can this PHP/FQL code be modified to increase the performance and usability?

    - by Kaoukkos
    I am trying to get some insights from the Facebook pages I administer. My code fetches the IDs of the pages I want to work with from MySQL (that part is not included). After this, I get the page_id, name, and fan_count of each of those Facebook IDs, which are saved in $fancounts. Using the IDs ($pages), I get at most two messages from each page: there may be none, one, or two for now; possibly I will increase the limit later. $messages holds the messages of each page. I have two problems with it:

      1. It has very slow performance.
      2. I can't find a way to echo the data like this:

        ID - Name of the page - Fan Count
        Here goes the first message
        Here goes the second one
        (break)
        ID - Name of the page 2 - Fan Count
        Here goes the first message of page 2
        Here goes the second one of page 2

    My questions are: how can the code be modified to increase performance and show the data as above? I read about fql.multiquery - can it be used here? Please provide code examples. Thank you.

        $pages = array(); // I get the IDs I want to work with
        $pagesIds = implode(',', $pages);

        // $fancounts holds the page_id, name and fan_count of the IDs I work with
        $fancounts = array();
        $pagesFanCounts = $facebook->api("/fql", array(
            "q" => "SELECT page_id, name, fan_count FROM page WHERE page_id IN ({$pagesIds})"
        ));
        foreach ($pagesFanCounts['data'] as $page) {
            $fancounts[] = $page['page_id']."-".$page['name']."-".$page['fan_count'];
        }

        // $messages holds from 0 to 2 messages from each of the above pages
        $messages = array();
        foreach ($pages as $id) {
            $getMessages = $facebook->api("/fql", array(
                "q" => "SELECT message FROM stream WHERE source_id = '$id' LIMIT 2"
            ));
            $messages[] = $getMessages['data'];
        }

        // this is how I print them now, but it does not give me the best output
        // (thanks goes to Mark for providing me this code)
        $count = min(count($fancounts), count($messages));
        for ($i = 0; $i < $count; ++$i) {
            echo $fancounts[$i], '<br>';
            foreach ($messages[$i] as $msg) {
                echo $msg['message'], '<br>';
            }
        }

  • external USB HD doesn't show up on 'sudo fdisk -l' on one Ubuntu, shows up on another (both 'precise')

    - by Menelaos
    I have a 1.5TB WD external USB HDD and two Ubuntu systems, both 'precise'. When I plug the disk into system A it doesn't show up in the output of sudo fdisk -l, nor is it automatically mounted (note that this wasn't always the case - it used to appear in the past). When I plug it into system B (again Ubuntu precise) it shows up when I do a sudo fdisk -l (output appended at the end) and mounts automatically just fine. What does this discrepancy point to, and what kind of diagnostics or tools should I use to troubleshoot the problem?

    I followed the suggestion I received to do a sudo tail -f /var/log/syslog; the output is the following when I plug, unplug, and replug the USB cable on system A:

        Sep 14 23:27:09 thorin mtp-probe: checking bus 2, device 3: "/sys/devices/pci0000:00/0000:00:13.2/usb2/2-2"
        Sep 14 23:27:09 thorin mtp-probe: bus: 2, device: 3 was not an MTP device
        Sep 14 23:28:01 thorin kernel: [  338.994295] usb 2-2: USB disconnect, device number 3
        Sep 14 23:28:04 thorin kernel: [  341.808139] usb 2-2: new high-speed USB device number 4 using ehci_hcd
        Sep 14 23:28:04 thorin mtp-probe: checking bus 2, device 4: "/sys/devices/pci0000:00/0000:00:13.2/usb2/2-2"
        Sep 14 23:28:04 thorin mtp-probe: bus: 2, device: 4 was not an MTP device
        Sep 14 23:29:54 thorin AptDaemon: INFO: Quitting due to inactivity
        Sep 14 23:29:54 thorin AptDaemon: INFO: Quitting was requested

    (I guess the last two messages are irrelevant.)

    Output of sudo fdisk -l on system B:

        Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00070db4

        Device Boot  Start       End         Blocks      Id  System
        /dev/sda1 *  2048        2905114623  1452556288  83  Linux
        /dev/sda2    2905116670  2930276351  12579841    5   Extended
        /dev/sda5    2905116672  2930276351  12579840    82  Linux swap / Solaris

        Disk /dev/sdb: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x9c849c84

        Device Boot  Start  End        Blocks      Id  System
        /dev/sdb1 *  63     488375999  244187968+  7   HPFS/NTFS/exFAT

        Disk /dev/sdg: 1500.3 GB, 1500299395072 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930272256 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0003e17f

        Device Boot  Start  End         Blocks      Id  System
        /dev/sdg1    2048   2930272255  1465135104  7   HPFS/NTFS/exFAT
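
    Reading the syslog excerpt, the drive does enumerate on the USB bus (the "new high-speed USB device" line), but no "Attached SCSI disk" lines follow, so on system A it never becomes a block device - which is why fdisk has nothing to list. A hedged set of checks to run on system A and compare against system B:

        lsusb                                         # is the WD enclosure visible on the bus at all?
        dmesg | grep -iE 'usb|sd[a-z]' | tail -n 30   # look for "Attached SCSI disk" or I/O errors
        lsmod | grep -i usb_storage                   # is the usb-storage driver loaded?
        ls /dev/sd*                                   # did a new device node appear after plugging in?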
