Search Results

Search found 80052 results on 3203 pages for 'data load performance'.

  • Why would a Linux VM in vSphere ESXi 5.5 show dramatically increased disk I/O latency?

    - by mhucka
    I'm stumped and I hope someone else will recognize the symptoms of this problem.

    Hardware: new Dell T110 II, dual-core Pentium G860 2.9 GHz, onboard SATA controller, one new 500 GB 7200 RPM cabled hard drive inside the box, other drives inside but not mounted yet. No RAID.

    Software: fresh CentOS 6.5 virtual machine under VMware ESXi 5.5.0 (build 174 + vSphere Client), with 2.5 GB RAM allocated. The disk is set up the way CentOS offered to set it up, namely as a volume inside an LVM Volume Group, except that I skipped having a separate /home and simply have / and /boot. CentOS is patched up, ESXi is patched up, and the latest VMware tools are installed in the VM. No users on the system, no services running, no files on the disk but the OS installation. I'm interacting with the VM via the VM virtual console in vSphere Client.

    Before going further, I wanted to check that I configured things more or less reasonably, so I ran the following command as root in a shell on the VM:

        for i in 1 2 3 4 5 6 7 8 9 10; do dd if=/dev/zero of=/test.img bs=8k count=256k conv=fdatasync; done

    I.e., just repeat the dd command 10 times, which results in printing the transfer rate each time. The results are disturbing. It starts off well:

        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 20.451 s, 105 MB/s
        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 20.4202 s, 105 MB/s

    ... but after 7-8 of these, it then prints:

        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 82.9779 s, 25.9 MB/s
        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 84.0396 s, 25.6 MB/s
        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 103.42 s, 20.8 MB/s

    If I wait a significant amount of time, say 30-45 minutes, and run it again, it goes back to 105 MB/s, and after several rounds (sometimes a few, sometimes 10+) it drops to ~20-25 MB/s again. Plotting the disk latency in vSphere's interface shows periods of high disk latency hitting 1.2-1.5 seconds during the times that dd reports the low throughput. (And yes, things get pretty unresponsive while that's happening.)

    What could be causing this? I'm confident it is not the disk failing, because I also had configured two other disks as an additional volume in the same system. At first I thought I did something wrong with that volume, but after commenting the volume out of /etc/fstab, rebooting, and trying the tests on / as shown above, it became clear that the problem is elsewhere. It is probably an ESXi configuration problem, but I'm not very experienced with ESXi. It's probably something stupid, but after trying to figure this out for many hours over multiple days, I can't find the problem, so I hope someone can point me in the right direction.

    (P.S.: yes, I know this hardware combo won't win any speed awards as a server, and I have reasons for using this low-end hardware and running a single VM, but I think that's beside the point for this question [unless it's actually a hardware problem].)

    ADDENDUM #1: Reading other answers such as this one made me try adding oflag=direct to dd. However, it makes no difference in the pattern of results: initially the numbers are higher for many rounds, then they drop to 20-25 MB/s. (The initial absolute numbers are in the 50 MB/s range.)

    ADDENDUM #2: Adding sync ; echo 3 > /proc/sys/vm/drop_caches into the loop does not make a difference at all.

    ADDENDUM #3: To take out further variables, I now run dd such that the file it creates is larger than the amount of RAM on the system. The new command is dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct. Initial throughput numbers with this version of the command are ~50 MB/s. They drop to 20-25 MB/s when things go south.

    ADDENDUM #4: Here is the output of iostat -d -m -x 1 running in another terminal window while performance is "good" and then again when it's "bad". (While this is going on, I'm running dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct.) First, when things are "good": [iostat output not preserved in this copy]. When things go "bad": [iostat output not preserved in this copy].
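
    For reference, a rough Python equivalent of the dd loop above; the path and sizes mirror the question, but this is only an illustration for reproducing the measurement, not part of the original test:

        # Write a 2 GiB file in 8 KiB blocks, ten rounds, printing MB/s per round
        # (the moral equivalent of: for i in 1..10; do dd ... conv=fdatasync; done).
        import os
        import time

        PATH = "/test.img"
        BLOCK = b"\0" * 8192      # bs=8k
        COUNT = 256 * 1024        # count=256k -> 2 GiB per round

        for round_no in range(10):
            start = time.monotonic()
            fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
            try:
                for _ in range(COUNT):
                    os.write(fd, BLOCK)
                os.fdatasync(fd)  # flush file data, like conv=fdatasync (Linux)
            finally:
                os.close(fd)
            elapsed = time.monotonic() - start
            print(f"round {round_no + 1}: {len(BLOCK) * COUNT / 1e6 / elapsed:.1f} MB/s")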

  • Load a random SWF object with jQuery

    - by Eggnog
    I'm working on a site that uses full-screen background video. I have three Flash videos I'd like to load into the page, one of which should be displayed randomly on page refresh. I know you can achieve this through ActionScript by loading the movies into one main .swf, but I'm rubbish with Flash and am looking for an alternative solution. My search so far has uncovered a method using jQuery. Unfortunately my attempts to implement it have failed. I'm hoping someone more knowledgeable than myself can tell me whether this technique is valid, or what I'm doing wrong. Here's my code to date:

        <script type="text/javascript" src="files/js/swfobject.js"></script>
        <script type="text/javascript">
        $(document).ready(function() {
            var sPath;
            var i = Math.floor(Math.random()*2);
            switch(i) {
                case 0:
                    sPath = "flash/homepage_video.swf";
                    break;
                case 1:
                    sPath = "flash/homepage_video.swf";
                    break;
            }
            // load the flash version of the image
            var so = new SWFObject(sPath, "flash-background", "100%", "100%");
            so.addParam("scale", "exactFit");
            so.addParam("movie", sPath); // NOTE: the sPath variable is also used here
            so.addParam("quality", "high");
            so.addParam("wmode", "transparent");
            // NOTE: the value in this call to write() MUST match the id of the
            // HTML element (<div>) you're expecting the swf to show up in
            so.write("homeBackground");
        });
        </script>

        <body>
            <div id="homeBackground">
                <p>This is alternative content</p>
            </div>
        </body>

    I'm open to other suggestions that don't involve jQuery (PHP perhaps). Thanks.
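
    As an aside on the selection logic: Math.floor(Math.random()*2) only ever yields 0 or 1, and both cases assign the same path, so nothing varies yet. Picking one of N paths uniformly is easiest with a list; here is the idea sketched in Python (the second and third file names are invented for illustration):

        # Choose one of several SWF paths uniformly at random.
        import random

        paths = [
            "flash/homepage_video.swf",
            "flash/homepage_video2.swf",  # invented name
            "flash/homepage_video3.swf",  # invented name
        ]
        sPath = random.choice(paths)  # each path has an equal 1/3 chance
        print(sPath)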

  • How can I use different metadata for a Dynamic Data form view and the list view?

    - by ProfK
    We often need a summarised list view alongside a more detailed form view. So far, the only two ways I can think of doing this are: use a 'custom' list or form view that takes a supplementary 'remove' or 'add' list of fields, and generalise this over all entity sets; or create a custom metadata provider that somehow infers which columns to supply. Are there any other ways of distinguishing these two views? PS: I wrote a fun little general details page that handles insert, edit, and view, all on one page template. Maybe I could somehow use that? It's here.

  • Generics vs Object performance

    - by Risho
    I'm doing practice problems from MCTS Exam 70-536, Microsoft .NET Framework Application Development Foundation. One of the problems is to create two classes, one generic and one Object-based, that both perform the same thing: a loop uses the class and is iterated over a thousand times, and, using a timer, you time the performance of both. There was another post at C# generics question that asks the same question, but no one replied. Basically, if in my code I run the generic class first, it takes longer to process. If I run the object class first, then the object class takes longer to process. The whole idea was to prove that generics perform faster. I used the original poster's code to save me some time. I didn't see anything particularly wrong with the code and was puzzled by the outcome. Can someone explain these unusual results? Thanks, Risho. Here is the code:

        class Program
        {
            class Object_Sample
            {
                public Object_Sample()
                {
                    Console.WriteLine("Object_Sample Class");
                }

                public long getTicks()
                {
                    return DateTime.Now.Ticks;
                }

                public void display(Object a)
                {
                    Console.WriteLine("{0}", a);
                }
            }

            class Generics_Sample<T>
            {
                public Generics_Sample()
                {
                    Console.WriteLine("Generics_Sample Class");
                }

                public long getTicks()
                {
                    return DateTime.Now.Ticks;
                }

                public void display(T a)
                {
                    Console.WriteLine("{0}", a);
                }
            }

            static void Main(string[] args)
            {
                long ticks_initial, ticks_final, diff_generics, diff_object;
                Object_Sample OS = new Object_Sample();
                Generics_Sample<int> GS = new Generics_Sample<int>();

                // Generic sample
                ticks_initial = GS.getTicks();
                for (int i = 0; i < 50000; i++)
                {
                    GS.display(i);
                }
                ticks_final = GS.getTicks();
                diff_generics = ticks_final - ticks_initial;

                // Object sample
                ticks_initial = OS.getTicks();
                for (int j = 0; j < 50000; j++)
                {
                    OS.display(j);
                }
                ticks_final = OS.getTicks();
                diff_object = ticks_final - ticks_initial;

                Console.WriteLine("\nPerformance of Generics {0}", diff_generics);
                Console.WriteLine("Performance of Object {0}", diff_object);
                Console.ReadKey();
            }
        }

  • SQL Server Performance

    - by khan24
    I have tables in which 35,000 to 40,000 records are available. In spite of using Ajax, the performance of the website is very low. Can anybody please suggest some ideas or tips for this problem? Thanks in advance.

  • Some images fail to load on Windows Server 2008

    - by Guffa
    I have an application running on Windows Server 2008 that processes uploaded images. Currently it successfully processes about 8000 images per day, creating 11 different sizes of each image.

    The problem I have is that sometimes the application fails to load some images, and I get the error "System.Runtime.InteropServices.ExternalException: A generic error occurred in GDI+.". The upload only accepts files with a JPEG extension (jpg/jpeg/jpe) or with a JPEG MIME type, and from what I can tell those images really are JPEG images.

    If I look at the image file in Windows Explorer on the server, it can successfully extract the thumbnail from the file, but if I try to open it, I get the error message "This is not a valid bitmap file, or its format is not currently supported." from Paint. If I copy the image to my own computer, running Windows 7, there is no problem opening the image. It works in Paint, Windows Photo Viewer, Adobe Bridge, and Photoshop. If I try to load the image using Image.FromStream the same way as in the application running on the server, it loads just fine. (I have copied the file back to the server, and it still doesn't work, so there is nothing in the copying process that changes it.)

    When I look at the image information in Bridge, I see that the images were created using Picasa 3.0, but other than that I can't see anything special about them. I haven't yet found anyone having the same problem, or any known problems like this with the Picasa application. Has anyone had any similar problem, or does anyone know if there is something special about images created using Picasa? Is there an image codec that needs installing on the server to handle all kinds of JPEG images?
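
    One way to narrow this down is to dump the properties of a failing image next to a working one and compare; a quick sketch using Python and Pillow (the file names are placeholders):

        # Print the fields that most often distinguish "odd" JPEGs: color mode
        # (e.g. CMYK vs RGB) and baseline vs progressive encoding.
        from PIL import Image

        for path in ["works.jpg", "fails_on_server.jpg"]:
            with Image.open(path) as im:
                progressive = bool(im.info.get("progressive") or im.info.get("progression"))
                print(path, im.format, im.mode, im.size,
                      "progressive" if progressive else "baseline")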

  • Tomcat Load Balancing - Programmatic, Parameter-Based?

    - by Gala101
    Here's the scenario: many users access an application (running on Tomcat), and the users' data is segmented into multiple databases, say with each db containing 1000 users' data. Now, is it somehow possible to have hundreds of Tomcat servers running on 'inexpensive' PC-class machines, each connecting to a single db, with a user's session getting passed to the appropriate Tomcat and becoming 'sticky' there? There could be some sort of 'gateway' deciding which user goes where and doing the load balancing appropriately. It would make a great scalability solution :) (A sketch of the routing idea follows below.)
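
    A minimal sketch of that gateway's routing rule, assuming the 1000-users-per-db segmentation from the question (Python; the host names and port are invented):

        # Users are segmented 1000 per database, so the shard -- and the Tomcat
        # dedicated to that shard -- follows directly from the user ID.
        USERS_PER_DB = 1000

        def tomcat_for(user_id: int) -> str:
            shard = user_id // USERS_PER_DB            # users 0-999 -> shard 0, etc.
            return f"tomcat{shard:03d}.internal:8080"  # one Tomcat per db shard

        # The gateway proxies each request (or redirects the login) to this host
        # and pins the session there, making it "sticky" by construction.
        print(tomcat_for(4321))  # -> tomcat004.internal:8080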

  • O(log N) == O(1) - Why not?

    - by phoku
    Whenever I consider algorithms/data structures, I tend to replace the log(N) parts with constants. Oh, I know log(N) diverges - but does it matter in real-world applications? log(infinity) < 100 for all practical purposes. I am really curious for real-world examples where this doesn't hold. To clarify:

    - I understand O(f(N)).
    - I am curious about real-world examples where the asymptotic behaviour matters more than the constants of the actual performance.
    - If log(N) can be replaced by a constant, it still can be replaced by a constant in O(N log N).

    This question is for the sake of (a) entertainment and (b) gathering arguments to use if I run (again) into a controversy about the performance of a design.
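
    A quick worked illustration of the premise, in plain Python:

        # log2(N) stays tiny even for absurdly large N, which is why treating
        # it as a constant is so tempting in practice.
        import math

        for n in [10**3, 10**6, 10**9, 10**12, 2**64, 10**80]:
            print(f"N = {n:.0e}   log2(N) = {math.log2(n):6.1f}")

        # Even for N = 10**80 (about the number of atoms in the observable
        # universe) log2(N) is only ~266 -- so "log(infinity) < 100" slightly
        # overstates it, but the point stands for any N a machine will ever hold.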

  • Can I use memcpy in C++ to copy classes that have no pointers or virtual functions

    - by Shane MacLaughlin
    Say I have a class, something like the following:

        class MyClass
        {
        public:
            MyClass();
            int a,b,c;
            double x,y,z;
        };

        #define PageSize 1000000

        MyClass Array1[PageSize], Array2[PageSize];

    If my class has no pointers or virtual methods, is it safe to use the following?

        memcpy(Array1, Array2, PageSize * sizeof(MyClass));

    The reason I ask is that I'm dealing with very large collections of paged data, as described here, where performance is critical, and memcpy offers significant performance advantages over iterative assignment. I suspect it should be OK, as the 'this' pointer is an implicit parameter rather than anything stored, but are there any other hidden nasties I should be aware of?

  • How to create a template to display data from a class in WPF

    - by Dave Colwell
    Hey, I have a data layer which returns lists of classes containing data. I want to display this data in my WPF form. The data is just properties on the class, such as Class.ID, Class.Name, Class.Description (for the sake of example). How can I create a custom control, or template an existing control, so that it can be given one of these classes and display its data in a data-bound fashion? Thanks :)

  • Under what circumstances will an entity be able to lazily load its relationships in JPA

    - by Mowgli
    Assuming a Java EE container is being used with JPA persistence and JTA transaction management, where EJB and WAR packages are inside an EAR package. Say an entity with lazy-load relationships has just been returned from a JPQL search, such as the getBoats method below:

        @Stateless
        public class BoatFacade implements BoatFacadeRemote, BoatFacadeLocal {
            @PersistenceContext(unitName = "boats")
            private EntityManager em;

            @Override
            public List<Boat> getBoats(Collection<Integer> boatIDs) {
                if (boatIDs.isEmpty()) {
                    return Collections.<Boat>emptyList();
                }
                Query query = em.createNamedQuery("getAllBoats");
                query.setParameter("boatID", boatIDs);
                List<Boat> boats = query.getResultList();
                return boats;
            }
        }

    The entity:

        @Entity
        @NamedQuery(
            name = "getAllBoats",
            query = "Select b from Boat b where b.id in :boatID")
        public class Boat {
            @Id
            private long id;

            @OneToOne(fetch = FetchType.LAZY)
            private Gun mainGun;

            public Gun getMainGun() {
                return mainGun;
            }
        }

    Where will its lazy-load relationships be loadable (assuming the same stateless request)?

    Same JAR:
    - A method in the same EJB
    - A method in another EJB
    - A method in a POJO in the same EJB JAR

    Same EAR, but outside the EJB JAR:
    - A method in a web tier managed bean
    - A method in a web tier POJO

    Different EAR:
    - A method in a different EAR which receives the entity through RMI

    What is it that restricts the scope - for example, the JPA transaction, the persistence context, or the JTA transaction?

  • How can I share a dynamic data array between applications?

    - by Ehsan
    Hi, I use CreateFileMapping, but this method was not useful, because only static structures can be shared this way. For example, it is good for the following structure:

        struct MySharedData
        {
            unsigned char Flag;
            int Buff[10];
        };

    but it's not good for:

        struct MySharedData
        {
            unsigned char Flag;
            int *Buff;
        };

    I would be thankful if somebody could guide me on this. Thanks in advance!
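
    The constraint behind this is the same in every shared-memory mechanism: a raw pointer is only meaningful inside the process that allocated it, so the variable-length data itself must live inside the mapped region (inline, or addressed by offsets rather than pointers). A sketch of that idea in Python, standing in for the Win32 file-mapping calls; the segment name and layout are invented:

        # Share the buffer's CONTENTS in the mapped region, never a pointer
        # to process-local memory.
        from multiprocessing import shared_memory
        import struct

        # 1-byte flag + 4-byte count + room for 10 little-endian ints, inline.
        shm = shared_memory.SharedMemory(name="my_shared_data", create=True, size=45)

        values = [1, 2, 3, 4, 5]
        struct.pack_into("<B", shm.buf, 0, 1)                      # Flag
        struct.pack_into("<I", shm.buf, 1, len(values))            # element count
        struct.pack_into(f"<{len(values)}i", shm.buf, 5, *values)  # the data itself

        # A second process attaches with SharedMemory(name="my_shared_data")
        # and unpacks the same layout; no pointer ever crosses the boundary.
        count = struct.unpack_from("<I", shm.buf, 1)[0]
        print(struct.unpack_from(f"<{count}i", shm.buf, 5))

        shm.close()
        shm.unlink()  # free the segment when finished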

  • Would it simply be better to use the system's functions rather than the language's?

    - by Nullw0rm
    There are many scenarios where I've questioned PHP's performance with some of its functions, and whether I should build a complex class to handle specific things using its seemingly slow tools. For example, complex regular expressions with sed, and processing with awk, would seemingly perform far better than having PHP's regular expression functions (and its seemingly excessive other functions) parse the same input and eventually finish it. If I were to do a lot of network tasks such as MX lookups, DIGging, or retrieving things simultaneously, I would rather pass them via system() and let the OS handle it itself. There are simply too many functions in PHP that are inefficient, result in slow pages, or can be handled more easily by the OS. What are your opinions? Do you think I should do the hard work with the OS, in its own or custom functions?
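
    Whether shelling out wins is measurable rather than guessable. A small Python sketch of how to time the two approaches against each other (assumes a Unix system with sed on PATH; the input is synthetic):

        # Compare an in-process regex substitution with spawning sed for the
        # same job. Process startup costs a few milliseconds per call, so
        # shelling out only pays off when one process handles a lot of work.
        import re
        import subprocess
        import time

        data = ("error: disk 42 full\n" * 200_000).encode()

        start = time.perf_counter()
        re.sub(rb"disk (\d+)", rb"volume \1", data)
        in_process = time.perf_counter() - start

        start = time.perf_counter()
        subprocess.run(["sed", "-E", "s/disk ([0-9]+)/volume \\1/"],
                       input=data, stdout=subprocess.DEVNULL, check=True)
        shelled_out = time.perf_counter() - start

        print(f"in-process: {in_process:.3f}s   sed: {shelled_out:.3f}s")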

  • Dynamic select boxes on page load

    - by Chris
    Hello, I have a dynamic chained select box whose value I'm attempting to show on page load. The chained select box defaults to the first option within the select box on page load; could anyone provide assistance? I stumbled upon this thread, but I can't seem to translate what they are doing in that answer to my language, CF: Dynamic chained drop downs on page refresh. Here is the JS I am using:

        function dynamicSelect(id1, id2) {
            // Feature test to see if there is enough W3C DOM support
            if (document.getElementById && document.getElementsByTagName) {
                // Obtain references to both select boxes
                var sel1 = document.getElementById(id1);
                var sel2 = document.getElementById(id2);
                // Clone the dynamic select box
                var clone = sel2.cloneNode(true);
                // Obtain references to all cloned options
                var clonedOptions = clone.getElementsByTagName("option");
                // Onload init: display the related options in the dynamic select box
                refreshDynamicSelectOptions(sel1, sel2, clonedOptions);
                // Onchange of the main select box: refresh the dynamic select box
                sel1.onchange = function() {
                    refreshDynamicSelectOptions(sel1, sel2, clonedOptions);
                };
            }
        }

        function refreshDynamicSelectOptions(sel1, sel2, clonedOptions) {
            // Delete all options of the dynamic select box
            while (sel2.options.length) {
                sel2.remove(0);
            }
            // Create regular expression objects for "select" and the value of the
            // selected option of the main select box as class names
            var pattern1 = /( |^)(select)( |$)/;
            var pattern2 = new RegExp("( |^)(" + sel1.options[sel1.selectedIndex].value + ")( |$)");
            // Iterate through all cloned options
            for (var i = 0; i < clonedOptions.length; i++) {
                // If the classname of a cloned option either equals "select" or equals
                // the value of the selected option of the main select box
                if (clonedOptions[i].className.match(pattern1) || clonedOptions[i].className.match(pattern2)) {
                    // Clone the option from the hidden option pool and append it
                    // to the dynamic select box
                    sel2.appendChild(clonedOptions[i].cloneNode(true));
                }
            }
        }

    Thanks so much for any assistance.

  • How to share datastores between multiple Exchange servers?

    - by Johan
    I have an Exchange 2003 box that is seriously overstressed. I want to transfer its duties to a new and faster box. I cannot suffer downtime, so I have to do this live. Here's what I plan to do:

    - Install Exchange 2003 on the new server.
    - Set up the new server so it will accept requests from users for their mailboxes. I want to do as little manual setup as possible, because that will eat up my time and is too error prone.
    - Transfer my datastores one by one to the new server, and have those users (once each datastore on the new server is up and running) get their data from the new server, without them noticing.
    - I don't have to transfer all the datastores; some of them need to stay on the old box (because I'm still waiting for extra HD space to arrive from the supplier).

    What steps do I need to follow to do this? The new box has never seen this domain before. The old Exchange server is not the DC; we have a dedicated DC.

  • Windows 7 slowing down during hard drive activity

    - by Iniquities of evil men
    Sometimes when I'm using my PC normally, it will (seemingly) randomly slow down, and sometimes even freeze for several seconds. During these slowdowns, it looks like a hard drive (I don't know which one) is constantly being written to. During the last slowdown, I started Windows's Resource Monitor and found that the System process was writing up to 10 MB/s to a drive (I suspect it's the system drive, C:\, but I don't know for sure). I'm not doing anything unusual (at least, I don't think I am), and most of the time everything works normally; as I said, it just randomly slows down for a while. Any ideas on what might be causing this and how I can prevent it from happening again?

    (I have a triple-core processor and 4 GB of RAM. My system drive is a WD Caviar Black 500 GB; my secondary 'data' drive is a Samsung drive, which I don't know the model number of, but I can look it up. I can also post my full PC specs if needed.)

  • Architectural Design for a Data-Driven Silverlight WP7 app

    - by Rosarch
    I have a Silverlight Windows Phone 7 app that pulls data from a public API. I find myself doing much of the same thing over and over again:

    - In the UI, set a loading message or loading progress bar in place of where the content is
    - Get the content, which may be already in memory, cached in isolated file storage, or require an HTTP request
    - If the content cannot be acquired (no network connection, etc.), display an error message
    - If the content is acquired, display it in the UI
    - Keep the content in main memory for subsequent queries

    The content that is displayed to the user can be taken directly from a data source, such as an ObservableCollection, or it may be a query on a data source. I would like to factor out this repetitive process into a framework where ideally only the following needs to be specified:

    - Where to display the content in the UI
    - The UI elements to show while loading, on failure, and on success
    - The URI of the HTTP request
    - How to parse the HTTP response into the data structure that will be kept in memory
    - The location of the file in isolated storage, if it exists
    - How to parse the file contents into the data structure that will be kept in memory

    It may sound like a lot, but two strings, three FrameworkElements, and two methods is less than the overhead I currently have. Also, this needs to work however the data is maintained in memory, and it needs to work for direct collections and for queries on those collections. My questions are:

    - Has something like this already been implemented?
    - Are my thoughts about the topic above fundamentally wrong in some way?

    Here is a design I'm thinking of. There are two components, a View and a Model. The View is given the FrameworkElements for loading, failure, and success. It is also given a reference to the corresponding Model. The View is a UserControl that is placed somewhere in the UI. The Model is a class that is given the URI for the data, a method of how to parse the data, and optionally a filename and how to parse the file. It is responsible for retrieving the data and notifying the View whenever the current status (loading/fail/success) changes. If the data downloaded from the network is different from the cache, the network data takes precedence. When the app closes or is tombstoned, the Model writes the data to the cache. How does that sound? (A sketch of this Model follows below.)
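
    A language-agnostic sketch of that Model, written in Python for brevity; the names, the status values, and the callback shape are invented, not taken from any Silverlight API:

        # Try memory, then cache, then network; report status changes to the
        # View through a callback; network data takes precedence over cache.
        from enum import Enum, auto

        class Status(Enum):
            LOADING = auto()
            FAILED = auto()
            SUCCEEDED = auto()

        class Model:
            def __init__(self, fetch, read_cache, write_cache, on_status):
                self.fetch = fetch              # () -> data; raises OSError on network failure
                self.read_cache = read_cache    # () -> data or None
                self.write_cache = write_cache  # data -> None
                self.on_status = on_status      # Status -> None; notifies the View
                self.data = None

            def load(self):
                self.on_status(Status.LOADING)
                if self.data is None:
                    self.data = self.read_cache()
                try:
                    fresh = self.fetch()
                    if fresh != self.data:      # network data takes precedence
                        self.data = fresh
                        self.write_cache(fresh)
                except OSError:
                    if self.data is None:       # no cache and no network
                        self.on_status(Status.FAILED)
                        return
                self.on_status(Status.SUCCEEDED)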

  • Errors with large data sources

    - by The Sheek Geek
    I'm doing some benchmarking on large data sources and on binding/exporting data for reporting. I started by using a DataSet, filling it with 100,000 rows, and then attempting to open a Crystal Report with the retrieved data. I noticed that the DataSet filled just fine (it took about 779 milliseconds); however, when attempting to export the data to the report, or even bind it to a GridView, the application would fail with an OutOfMemoryException. Has anyone experienced this before, or does anyone have an idea of how to get around it? It is very possible that clients will run reports for years' worth of data, and 100,000 rows are not inconceivable. The application and the benchmark code are written in C# using ORACLE and SQL Server databases. I still have some data sources to test, but I would like to know how to get around this just in case I don't find a better solution.
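
    The usual way around this is to stream the rows in fixed-size chunks instead of materializing the whole result (in ADO.NET the analogue is reading with a DataReader rather than filling a DataSet). The pattern, sketched in Python with placeholder table and query names:

        # Page through a large result set so peak memory is bounded by the
        # chunk size, not by the table size.
        import sqlite3  # stand-in for the ORACLE / SQL Server connection

        def export_in_chunks(conn, chunk_rows=10_000):
            cur = conn.cursor()
            cur.execute("SELECT id, amount, created_at FROM orders ORDER BY id")
            while True:
                rows = cur.fetchmany(chunk_rows)  # at most chunk_rows rows in memory
                if not rows:
                    break
                for row in rows:
                    yield row                     # hand off to the report writer

        # usage:
        # for row in export_in_chunks(sqlite3.connect("app.db")):
        #     write_report_line(row)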

  • Statistical analysis on large data set to be published on the web

    - by dassouki
    I have a non-computer-related data logger that collects data from the field. This data is stored as text files, and I manually lump the files together and organize them. The current format is one csv file per year per logger. Each file is around 4,000,000 lines x 7 loggers x 5 years = a lot of data. Some of the data is organized as bins: item_type, item_class, item_dimension_class. Other data is more unique, such as item_weight, item_color, date_collected, and so on.

    Currently, I do statistical analysis on the data using a Python/NumPy/matplotlib program I wrote. It works fine, but the problem is that I'm the only one who can use it, since the program and the data live on my computer. I'd like to publish the data on the web using a Postgres db; however, I need to find or implement a statistical tool that will take a large Postgres table and return statistical results within an adequate time frame. I'm not familiar with Python for the web; however, I'm proficient with PHP on the web side and Python on the offline side.

    Users should be allowed to create their own histograms and data analysis. For example, one user could search for all items that are blue and shipped between week x and week y, while another user could sort the weight distribution of all items by hour across the whole year. I was thinking of creating and indexing my own statistical tools, or automating the process somehow to emulate most queries, but that seems inefficient. I'm looking forward to hearing your ideas. Thanks
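
    One way to get adequate response times is to push the statistics into PostgreSQL itself, so only the aggregated result travels to the web tier. A sketch using psycopg2; the table, columns, and connection string are invented to match the examples in the question:

        # Compute a 20-bucket weight histogram for blue items collected in a
        # given week range, entirely inside PostgreSQL.
        import psycopg2

        def weight_histogram(dsn, lo=0.0, hi=50.0, buckets=20,
                             start_week=10, end_week=20):
            query = """
                SELECT width_bucket(item_weight, %s, %s, %s) AS bucket,
                       count(*) AS n
                FROM items
                WHERE item_color = 'blue'
                  AND extract(week FROM date_collected) BETWEEN %s AND %s
                GROUP BY bucket
                ORDER BY bucket;
            """
            with psycopg2.connect(dsn) as conn:
                with conn.cursor() as cur:
                    cur.execute(query, (lo, hi, buckets, start_week, end_week))
                    return cur.fetchall()  # [(bucket, count), ...] -- a tiny result

        # An index such as CREATE INDEX ON items (item_color, date_collected)
        # is what keeps queries like this interactive on millions of rows.
        for bucket, n in weight_histogram("dbname=loggers user=web"):
            print(bucket, n)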
