Search Results

Search found 48586 results on 1944 pages for 'page performance'.

  • Openfiler iSCSI performance

    - by Justin
    Hoping someone can point me in the right direction with some iSCSI performance issues I'm having. I'm running Openfiler 2.99 on an older ProLiant DL360 G5: dual Xeon processors, 6GB ECC RAM, an Intel Gigabit Server NIC, and a SAS controller with 3 10K SAS drives in RAID 5. When I run a simple write test on the box directly, the performance is very good:

        [root@localhost ~]# dd if=/dev/zero of=tmpfile bs=1M count=1000
        1000+0 records in
        1000+0 records out
        1048576000 bytes (1.0 GB) copied, 4.64468 s, 226 MB/s

    So I created a LUN, attached it to another box I have running ESXi 5.1 (Core i7 2600K, 16GB RAM, Intel Gigabit Server NIC) and created a new datastore. Once I created the datastore I was able to create and start a VM running CentOS with 2GB of RAM and 16GB of disk space. The OS installed fine and I'm able to use it, but when I ran the same test inside the VM I got dramatically different results:

        [root@localhost ~]# dd if=/dev/zero of=tmpfile bs=1M count=1000
        1000+0 records in
        1000+0 records out
        1048576000 bytes (1.0 GB) copied, 26.8786 s, 39.0 MB/s

    Both servers have brand new Intel Server NICs, and I have jumbo frames enabled on the switch, on the Openfiler box, and on the VMkernel adapter on the ESXi box. I can confirm this is set up properly by using the vmkping command from the ESXi host:

        ~ # vmkping 10.0.0.1 -s 9000
        PING 10.0.0.1 (10.0.0.1): 9000 data bytes
        9008 bytes from 10.0.0.1: icmp_seq=0 ttl=64 time=0.533 ms
        9008 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.736 ms
        9008 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.570 ms

    The only thing I haven't tried as far as networking goes is bonding two interfaces together. I'm open to trying that down the road, but for now I am trying to keep things simple. I know this is a pretty modest setup and I'm not expecting top-notch performance, but I would like to see 90-100MB/s. Any ideas?
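
    A note on methodology worth adding here: without conv=fsync or oflag=direct, dd mostly measures the host's page cache, while the same command inside the VM pays the full iSCSI round trip before completing. Below is a minimal Python sketch (an illustration added here, not part of the original question) of an fsync-inclusive version of the same 1 GB test:

        # A rough, fsync-inclusive rewrite of the dd test above; the ~1 GB size
        # and 1 MiB block mirror bs=1M count=1000.
        import os
        import time

        SIZE = 1000 * 1024 * 1024
        BLOCK = 1024 * 1024
        buf = b"\0" * BLOCK

        start = time.time()
        fd = os.open("tmpfile", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
        try:
            for _ in range(SIZE // BLOCK):
                os.write(fd, buf)
            os.fsync(fd)  # flush to disk so the page cache cannot flatter the result
        finally:
            os.close(fd)

        print(f"{SIZE / (time.time() - start) / 1e6:.1f} MB/s")

    If the local number drops sharply once fsync is included, the 226 MB/s figure was never a fair baseline for the iSCSI path.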

  • Terminal server performance over high latency links

    - by holz
    Our datacenter and head office is currently in Brisbane, Australia, and we have a branch office in the UK. We have a private WAN with a 768k link to our UK office, and the latency is about 350ms. Terminal server performance is really bad. Applications that don't have much animation or any images seem to be okay, but as soon as they do, the session is almost unusable. PowerPoint and Internet Explorer are good examples of apps that make it run slowly. And if there is an image in your email signature, Outlook will hang for about 10 seconds each time a new line is inserted, while the image gets moved down a few pixels.
    We are currently running Server 2003. I have tried Server 2008 R2 RDS, and also a third-party solution called Blaze by a company called Ericom, but it is still not much better. We currently have a 5-level dynamic class of service with priority in the following order:
        1. VoIP
        2. Video
        3. Terminal Services
        4. Printing
        5. Everything else
    When testing terminal server performance, the link was monitored using net-flows, and we had plenty of bandwidth available, so I believe it is a latency issue rather than a bandwidth issue. Is there anything that can be done to improve performance? Would Citrix help at all?

  • MySQL MEDIUMINT vs. INT performance?

    - by aviv
    Hi, I have a simple users table; I guess the maximum number of users I am going to have is 300,000. Currently I am using:

        CREATE TABLE users (
            id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            ....

    Of course I have many other tables in which users(id) is a FOREIGN KEY. I read that since the id is not going to use the full range of INT, it is better to use MEDIUMINT, and that it will give better performance. Is it true? (I am using MySQL on Windows Server 2008.) Thanks.
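
    For scale, a back-of-envelope estimate (added here, not from the original post) of what the one-byte difference between INT and MEDIUMINT buys at the stated row count:

        # INT is 4 bytes, MEDIUMINT is 3: estimate the saving at 300,000 rows.
        rows = 300_000
        saved_bytes = rows * (4 - 3)  # one byte per row, per id column
        print(f"{saved_bytes / 1024:.0f} KiB per id/FK column")  # ~293 KiB

    Even multiplied across a handful of foreign-key columns and their indexes, the total stays in the low megabytes, which rarely translates into a measurable performance win.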

  • Performance monitoring on Linux/Unix

    - by ervingsb
    I run a few Windows servers and (Debian and Ubuntu) Linux and AIX servers. I would like to continuously monitor performance on these systems in order to easily identify bottlenecks, as well as to have an overview of the general activity on the servers. On Windows, I use Windows Performance Monitor (perfmon) for this. I set up these counters:
    For bottlenecks:
        Processor utilization: System\Processor Queue Length
        Memory utilization: Memory\Pages Input/Sec
        Disk utilization: PhysicalDisk\Current Disk Queue Length\driveletter
        Network problems: Network Interface\Output Queue Length\nic name
    For general activity:
        Processor utilization: Processor\% Processor Time_Total
        Memory utilization: Process\Working Set_Total (or per specific process)
        Memory utilization: Memory\Available MBytes
        Disk utilization: PhysicalDisk\Bytes/sec_Total (or per process)
        Network utilization: Network Interface\Bytes Total/Sec\nic name
    (More information on the choice of these counters at: http://itcookbook.net/blog/windows-perfmon-top-ten-counters )
    This works really well. It allows me to look in one place and identify most common bottlenecks. So my question is, how can I do something equivalent (or just very similar) on Linux servers? I have looked a bit at nmon (http://www.ibm.com/developerworks/aix/library/au-analyze_aix/), a free performance monitoring tool developed for AIX but also available for Linux. However, I am not sure if nmon allows me to set up the above counters. Maybe that is because Linux and AIX do not allow monitoring these exact same measures. If so, which ones should I choose and why? If nmon is not the tool to use for this, then what do you recommend?
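
    As a rough illustration of how most of these counters map onto Linux (an added sketch using the third-party psutil library; the mappings are approximations, not exact equivalents):

        # Poll rough Linux analogues of the perfmon counters above.
        # Requires: pip install psutil. Stop with Ctrl-C.
        import time

        import psutil

        while True:
            load1, _, _ = psutil.getloadavg()    # ~ System\Processor Queue Length
            cpu = psutil.cpu_percent()           # ~ Processor\% Processor Time_Total
            mem = psutil.virtual_memory()        # ~ Memory\Available MBytes
            disk = psutil.disk_io_counters()     # ~ PhysicalDisk\Bytes/sec (cumulative)
            net = psutil.net_io_counters()       # ~ Network Interface\Bytes Total/Sec (cumulative)
            print(f"load1={load1:.2f} cpu={cpu:.0f}% "
                  f"avail_mem={mem.available // 2**20}MiB "
                  f"disk_bytes={disk.read_bytes + disk.write_bytes} "
                  f"net_bytes={net.bytes_sent + net.bytes_recv}")
            time.sleep(5)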

  • SQL Server: One 12-drive RAID-10 array or 2 arrays of 8-drives and 4-drives

    - by ben
    Setting up a box for SQL Server 2008: which layout would give the best performance for heavy OLTP? More drives in a RAID-10 array means better performance, but will losing 4 drives to dedicate them to the transaction logs give us more performance overall?
        Option 1: 12 drives in RAID-10, plus one hot spare.
        Option 2: 8 drives in RAID-10 for the database and 4 drives in RAID-10 for the transaction logs, plus 2 hot spares (one for each array).
    We have 14 drive slots to work with, and it's an older PowerVault that doesn't support global hot spares.
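
    For a rough sense of the trade-off, a back-of-envelope random-IOPS sketch (an added illustration; the per-drive figure is an assumed ballpark, not a measurement):

        # RAID-10 ballpark: reads can hit every spindle, writes land on mirror pairs.
        PER_DISK = 150  # assumed random IOPS for one drive

        def raid10_iops(drives):
            return drives * PER_DISK, drives // 2 * PER_DISK

        for label, drives in [("12-drive array", 12), ("8-drive DB array", 8), ("4-drive log array", 4)]:
            reads, writes = raid10_iops(drives)
            print(f"{label}: ~{reads} read IOPS, ~{writes} write IOPS")

    The split trades a third of the data array's spindles for a log array whose workload is mostly sequential writes, the workload that benefits least from extra random IOPS but most from not sharing a queue with data traffic.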

  • Read only array, deep copy or retrieve copies one by one? (Performance and Memory)

    - by Arthur Wulf White
    In a garbage-collection-based system, what is the most effective way to handle a read-only array if such a structure does not exist natively in the language? Is it better to return a copy of the array, or to allow other classes to retrieve copies of the objects stored in the array one by one?
    @JustinSkiles: It is not very broad. It is performance-related and can actually be answered specifically for two general cases:
        You only need very few items: in this situation it's more effective to retrieve copies of the objects one by one.
        You wish to iterate over 30% or more of the objects: in this case it is superior to retrieve the whole array at once. This saves on function calls, which are very expensive compared to reading directly from an array.
    A good specific answer could include performance numbers for reading from an array directly versus reading indirectly through a function. It is a simple performance-related question.
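
    A minimal micro-benchmark sketch of the two access patterns (added here for illustration; the class, element type, and sizes are hypothetical):

        # One defensive copy of the whole array vs. a defensive copy per element.
        import copy
        import timeit

        class ReadOnlyStore:
            def __init__(self, items):
                self._items = items

            def snapshot(self):    # whole-array copy, one call
                return copy.deepcopy(self._items)

            def item(self, i):     # per-element copy, one call per item
                return copy.deepcopy(self._items[i])

        store = ReadOnlyStore([{"value": i} for i in range(1000)])
        print("per-item :", timeit.timeit(lambda: [store.item(i) for i in range(1000)], number=50))
        print("snapshot :", timeit.timeit(store.snapshot, number=50))

    The gap between the two numbers is mostly the per-call overhead the question describes; where the crossover sits (the 30% figure above) depends on the language and runtime.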

  • Page cache flushing behavior under heavy append load

    - by Bryce
    I'm trying to understand the behavior of the Linux pdflush daemon when:
        The page cache is initially pretty much empty
        There is a large amount of free memory
        The system starts undergoing heavy write load
    My understanding right now is that the vm.dirty_ratio and vm.dirty_background_ratio settings that control page cache flushing behavior are with respect to the present size of the page cache, which means that my writes will flush earlier than they would if the page cache was pre-populated (even with dummy data from some random file), and thus throughput will be lower. Is this accurate?
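
    For poking at this directly, a small added sketch that reads the configured thresholds and the live dirty-page counters from the standard /proc files while a write load runs:

        # Compare vm.dirty_* thresholds with the current amount of dirty memory.
        def read_file(path):
            with open(path) as f:
                return f.read().strip()

        print("vm.dirty_background_ratio =", read_file("/proc/sys/vm/dirty_background_ratio"))
        print("vm.dirty_ratio            =", read_file("/proc/sys/vm/dirty_ratio"))

        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith(("Dirty:", "Writeback:", "MemFree:")):
                    print(line.rstrip())

    Sampling this in a loop during the write burst shows directly whether flushing kicks in earlier on a cold page cache.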

  • Using Durandal to Create Single Page Apps

    - by Stephen.Walther
    A few days ago, I gave a talk on building Single Page Apps on the Microsoft Stack. In that talk, I recommended that people use Knockout, Sammy, and RequireJS to build their presentation layer and use the ASP.NET Web API to expose data from their server. After I gave the talk, several people contacted me and suggested that I investigate a new open-source JavaScript library named Durandal. Durandal stitches together Knockout, Sammy, and RequireJS to make it easier to use these technologies together.
    In this blog entry, I want to provide a brief walkthrough of using Durandal to create a simple Single Page App. I am going to demonstrate how you can create a simple Movies App which contains (virtual) pages for viewing a list of movies, adding new movies, and viewing movie details. The goal of this blog entry is to give you a sense of what it is like to build apps with Durandal.

    Installing Durandal
    First things first. How do you get Durandal? The GitHub project for Durandal is located here: https://github.com/BlueSpire/Durandal
    The Wiki, located at the GitHub project, contains all of the current documentation for Durandal. Currently, the documentation is a little sparse, but it is enough to get you started.
    Instead of downloading the Durandal source from GitHub, a better option for getting started with Durandal is to install one of the Durandal NuGet packages. I built the Movies App described in this blog entry by first creating a new ASP.NET MVC 4 Web Application with the Basic Template. Next, I executed the following command from the Package Manager Console:

        Install-Package Durandal.StarterKit

    The Durandal Starter Kit package has several dependencies, including:
        jQuery
        Knockout
        Sammy
        Twitter Bootstrap
    The Durandal Starter Kit package includes a sample Durandal application. You can get to the Starter Kit app by navigating to the Durandal controller.
    Unfortunately, when I first tried to run the Starter Kit app, I got an error because the Starter Kit is hard-coded to use a particular version of jQuery which is already out of date. You can fix this issue by modifying the App_Start\DurandalBundleConfig.cs file so it is jQuery version agnostic, like this:

        bundles.Add(
            new ScriptBundle("~/scripts/vendor")
                .Include("~/Scripts/jquery-{version}.js")
                .Include("~/Scripts/knockout-{version}.js")
                .Include("~/Scripts/sammy-{version}.js")
                // .Include("~/Scripts/jquery-1.9.0.min.js")
                // .Include("~/Scripts/knockout-2.2.1.js")
                // .Include("~/Scripts/sammy-0.7.4.min.js")
                .Include("~/Scripts/bootstrap.min.js")
        );

    The recommendation is that you create a Durandal app in a folder off your project root named App. The App folder in the Starter Kit contains the following subfolders and files:
        durandal – This folder contains the actual Durandal JavaScript library.
        viewmodels – This folder contains all of your application's view models.
        views – This folder contains all of your application's views.
        main.js – This file contains all of the JavaScript startup code for your app, including the client-side routing configuration.
        main-built.js – This file contains an optimized version of your application. You need to build this file by using the RequireJS optimizer (unfortunately, before you can run the optimizer, you must first install NodeJS).
    For the purpose of this blog entry, I wanted to start from scratch when building the Movies app, so I deleted all of these files and folders except for the durandal folder, which contains the Durandal library.

    Creating the ASP.NET MVC Controller and View
    A Durandal app is built using a single server-side ASP.NET MVC controller and ASP.NET MVC view. A Durandal app is a Single Page App: when you navigate between pages, you are not navigating to new pages on the server. Instead, you are loading new virtual pages into the one-and-only-one server-side view.
    For the Movies app, I created the following ASP.NET MVC Home controller:

        public class HomeController : Controller
        {
            public ActionResult Index()
            {
                return View();
            }
        }

    There is nothing special about the Home controller – it is as basic as it gets. Next, I created the following server-side ASP.NET view. This is the one-and-only server-side view used by the Movies app:

        @{
            Layout = null;
        }
        <!DOCTYPE html>
        <html>
        <head>
            <title>Index</title>
        </head>
        <body>
            <div id="applicationHost">
                Loading app....
            </div>
            @Scripts.Render("~/scripts/vendor")
            <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script>
        </body>
        </html>

    Notice that I set the Layout property for the view to null. If you neglect to do this, then the default ASP.NET MVC layout will be applied to the view and you will get the <!DOCTYPE> and opening and closing <html> tags twice.
    Next, notice that the view contains a DIV element with the id applicationHost. This marks the area where virtual pages are loaded. When you navigate from page to page in a Durandal app, HTML page fragments are retrieved from the server and stuck in the applicationHost DIV element. Inside the applicationHost element, you can place any content which you want to display when a Durandal app is starting up. For example, you can create a fancy splash screen. I opted for simply displaying the text "Loading app...".
    Next, notice the view above includes a call to the Scripts.Render() helper. This helper renders out all of the JavaScript files required by the Durandal library, such as jQuery and Knockout. Remember to fix App_Start\DurandalBundleConfig.cs as described above, or Durandal will attempt to load an old version of jQuery, throw a JavaScript exception, and stop working.
    Your application JavaScript code is not included in the scripts rendered by the Scripts.Render helper. Your application code is loaded dynamically by RequireJS with the help of the following SCRIPT element located at the bottom of the view:

        <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script>

    The data-main attribute on the SCRIPT element causes RequireJS to load your /App/main.js JavaScript file to kick off your Durandal app.

    Creating the Durandal Main.js File
    The Durandal main.js JavaScript file, located in your App folder, contains all of the code required to configure the behavior of Durandal. Here's what the main.js file looks like in the case of the Movies app:

        require.config({
            paths: { 'text': 'durandal/amd/text' }
        });

        define(function (require) {
            var app = require('durandal/app'),
                viewLocator = require('durandal/viewLocator'),
                system = require('durandal/system'),
                router = require('durandal/plugins/router');

            //>>excludeStart("build", true);
            system.debug(true);
            //>>excludeEnd("build");

            app.start().then(function () {
                //Replace 'viewmodels' in the moduleId with 'views' to locate the view.
                //Look for partial views in a 'views' folder in the root.
                viewLocator.useConvention();

                //configure routing
                router.useConvention();
                router.mapNav("movies/show");
                router.mapNav("movies/add");
                router.mapNav("movies/details/:id");

                app.adaptToDevice();

                //Show the app by setting the root view model for our application with a transition.
                app.setRoot('viewmodels/shell', 'entrance');
            });
        });

    There are three important things to notice about the main.js file above.
    First, notice that it contains a section which enables debugging:

        //>>excludeStart("build", true);
        system.debug(true);
        //>>excludeEnd("build");

    This code enables debugging for your Durandal app, which is very useful when things go wrong. When you call system.debug(true), Durandal writes out debugging information to your browser JavaScript console. For example, you can use the debugging information to diagnose issues with your client-side routes. (The funny-looking //>> symbols around the system.debug() call are RequireJS optimizer pragmas.)
    The main.js file is also the place where you configure your client-side routes. In the case of the Movies app, the main.js file configures routes for three pages: the movies show, add, and details pages.

        //configure routing
        router.useConvention();
        router.mapNav("movies/show");
        router.mapNav("movies/add");
        router.mapNav("movies/details/:id");

    The route for movie details includes a route parameter named id. Later, we will use the id parameter to look up and display the details for the right movie.
    Finally, the main.js file above contains the following line of code:

        //Show the app by setting the root view model for our application with a transition.
        app.setRoot('viewmodels/shell', 'entrance');

    This line of code causes Durandal to load up a JavaScript file named shell.js and an HTML fragment named shell.html. I'll discuss the shell in the next section.

    Creating the Durandal Shell
    You can think of the Durandal shell as the layout or master page for a Durandal app. The shell is where you put all of the content which you want to remain constant as a user navigates from virtual page to virtual page. For example, the shell is a great place to put your website logo and navigation links.
    The Durandal shell is composed from two parts: a JavaScript file and an HTML file. Here's what the HTML file looks like for the Movies app:

        <h1>Movies App</h1>
        <div class="container-fluid page-host">
            <!--ko compose: {
                model: router.activeItem, //wiring the router
                afterCompose: router.afterCompose, //wiring the router
                transition: 'entrance', //use the 'entrance' transition when switching views
                cacheViews: true //telling composition to keep views in the dom, and reuse them (only a good idea with singleton view models)
            }--><!--/ko-->
        </div>

    And here is what the JavaScript file looks like:

        define(function (require) {
            var router = require('durandal/plugins/router');
            return {
                router: router,
                activate: function () {
                    return router.activate('movies/show');
                }
            };
        });

    The JavaScript file contains the view model for the shell. This view model returns the Durandal router so you can access the list of configured routes from your shell.
    Notice that the JavaScript file includes a function named activate(). This function loads the movies/show page as the first page in the Movies app. If you want to create a different default Durandal page, then pass the name of a different page to the router.activate() method.

    Creating the Movies Show Page
    Durandal pages are created out of a view model and a view. The view model contains all of the data and view logic required for the view. The view contains all of the HTML markup for rendering the view model.
    Let's start with the movies show page. The movies show page displays a list of movies. The view model for the show page looks like this:

        define(function (require) {
            var moviesRepository = require("repositories/moviesRepository");
            return {
                movies: ko.observable(),
                activate: function () {
                    this.movies(moviesRepository.listMovies());
                }
            };
        });

    You create a view model by defining a new RequireJS module (see http://requirejs.org). You create a RequireJS module by placing all of your JavaScript code into an anonymous function passed to the RequireJS define() method.
    A RequireJS module has two parts. You retrieve all of the modules which your module requires at the top of your module. The code above depends on another RequireJS module named repositories/moviesRepository. Next, you return the implementation of your module. The code above returns a JavaScript object which contains a property named movies and a method named activate.
    The activate() method is a magic method which Durandal calls whenever it activates your view model. Your view model is activated whenever you navigate to a page which uses it. In the code above, the activate() method is used to get the list of movies from the movies repository and assign the list to the view model movies property.
    The HTML for the movies show page looks like this:

        <table>
            <thead>
                <tr>
                    <th>Title</th><th>Director</th>
                </tr>
            </thead>
            <tbody data-bind="foreach:movies">
                <tr>
                    <td data-bind="text:title"></td>
                    <td data-bind="text:director"></td>
                    <td><a data-bind="attr:{href:'#/movies/details/'+id}">Details</a></td>
                </tr>
            </tbody>
        </table>
        <a href="#/movies/add">Add Movie</a>

    Notice that this is an HTML fragment. This fragment is stuffed into the page-host DIV element in the shell.html file, which is stuffed, in turn, into the applicationHost DIV element in the server-side MVC view.
    The HTML markup above contains data-bind attributes used by Knockout to display the list of movies (to learn more about Knockout, visit http://knockoutjs.com). The list of movies from the view model is displayed in an HTML table.
    Notice that the page includes a link to a page for adding a new movie. The link uses the following URL, which starts with a hash: #/movies/add. Because the link starts with a hash, clicking the link does not cause a request back to the server. Instead, you navigate to the movies/add page virtually.

    Creating the Movies Add Page
    The movies add page also consists of a view model and view. The add page enables you to add a new movie to the movie database. Here's the view model for the add page:

        define(function (require) {
            var app = require('durandal/app');
            var router = require('durandal/plugins/router');
            var moviesRepository = require("repositories/moviesRepository");

            return {
                movieToAdd: {
                    title: ko.observable(),
                    director: ko.observable()
                },
                activate: function () {
                    this.movieToAdd.title("");
                    this.movieToAdd.director("");
                    this._movieAdded = false;
                },
                canDeactivate: function () {
                    if (this._movieAdded == false) {
                        return app.showMessage('Are you sure you want to leave this page?', 'Navigate', ['Yes', 'No']);
                    } else {
                        return true;
                    }
                },
                addMovie: function () {
                    // Add movie to db
                    moviesRepository.addMovie(ko.toJS(this.movieToAdd));
                    // flag new movie
                    this._movieAdded = true;
                    // return to list of movies
                    router.navigateTo("#/movies/show");
                }
            };
        });

    The view model contains one property, named movieToAdd, which is bound to the add movie form. The view model also has the following three methods:
        1. activate() – This method is called by Durandal when you navigate to the add movie page. The activate() method resets the add movie form by clearing out the movie title and director properties.
        2. canDeactivate() – This method is called by Durandal when you attempt to navigate away from the add movie page. If you return false, then navigation is cancelled.
        3. addMovie() – This method executes when the add movie form is submitted. This code adds the new movie to the movie repository.
    I really like the Durandal canDeactivate() method. In the code above, I use the canDeactivate() method to show a warning to a user if they navigate away from the add movie page – either by clicking the Cancel button or by hitting the browser back button – before submitting the add movie form.
    The view for the add movie page looks like this:

        <form data-bind="submit:addMovie">
            <fieldset>
                <legend>Add Movie</legend>
                <div>
                    <label>
                        Title:
                        <input data-bind="value:movieToAdd.title" required />
                    </label>
                </div>
                <div>
                    <label>
                        Director:
                        <input data-bind="value:movieToAdd.director" required />
                    </label>
                </div>
                <div>
                    <input type="submit" value="Add" />
                    <a href="#/movies/show">Cancel</a>
                </div>
            </fieldset>
        </form>

    I am using Knockout to bind the movieToAdd property from the view model to the INPUT elements of the HTML form. Notice that the FORM element includes a data-bind attribute which invokes the addMovie() method from the view model when the HTML form is submitted.

    Creating the Movies Details Page
    You navigate to the movies details page by clicking the Details link which appears next to each movie in the movies show page. The Details links pass the movie ids to the details page:

        #/movies/details/0
        #/movies/details/1
        #/movies/details/2

    Here's what the view model for the movies details page looks like:

        define(function (require) {
            var router = require('durandal/plugins/router');
            var moviesRepository = require("repositories/moviesRepository");

            return {
                movieToShow: {
                    title: ko.observable(),
                    director: ko.observable()
                },
                activate: function (context) {
                    // Grab movie from repository
                    var movie = moviesRepository.getMovie(context.id);
                    // Add to view model
                    this.movieToShow.title(movie.title);
                    this.movieToShow.director(movie.director);
                }
            };
        });

    Notice that the view model activate() method accepts a parameter named context. You can take advantage of the context parameter to retrieve route parameters such as the movie id. In the code above, the context.id property is used to retrieve the correct movie from the movie repository, and the movie is assigned to a property named movieToShow exposed by the view model.
    The movie details view displays the movieToShow property by taking advantage of Knockout bindings:

        <div>
            <h2 data-bind="text:movieToShow.title"></h2>
            directed by <span data-bind="text:movieToShow.director"></span>
        </div>

    Summary
    The goal of this blog entry was to walk through building a simple Single Page App using Durandal and to get a feel for what it is like to use this library. I really like how Durandal stitches together Knockout, Sammy, and RequireJS and establishes patterns for using these libraries to build Single Page Apps. Having a standard pattern which developers on a team can use to build new pages is super valuable. Once you get the hang of it, using Durandal to create new virtual pages is dead simple: just define a new route, view model, and view and you are done.
    I also appreciate the fact that Durandal did not attempt to re-invent the wheel and that it leverages existing JavaScript libraries such as Knockout, RequireJS, and Sammy. These are powerful libraries, and I have already invested a considerable amount of time in learning how to use them. Durandal makes it easier to use these libraries together without losing any of their power.
    Durandal has some additional interesting features which I have not had a chance to play with yet. For example, you can use the RequireJS optimizer to combine and minify all of a Durandal app's code. Also, Durandal supports a way to create custom widgets (client-side controls) by composing widgets from a controller and view.
    You can download the code for the Movies app by clicking the following link (this is a Visual Studio 2012 project): Durandal Movie App

  • MySQL MyISAM table performance... painfully, painfully slow

    - by Salman A
    I've got a table structure that can be summarized as follows:
        pagegroup
            * pagegroupid
            * name
        Has 3600 rows.
        page
            * pageid
            * pagegroupid
            * data
        References pagegroup; has 10000 rows; can have anything between 1-700 rows per pagegroup; the data column is of type mediumtext and contains 100k-200k bytes of data per row.
        userdata
            * userdataid
            * pageid
            * column1
            * column2
            * column9
        References page; has about 300,000 rows; can have about 1-50 rows per page.
    The above structure is pretty straightforward. The problem is that a join from userdata to pagegroup is terribly, terribly slow even though I have indexed all columns that should be indexed. The time needed to run a query for such a join (userdata INNER JOIN page INNER JOIN pagegroup) exceeds 3 minutes. This is terribly slow considering the fact that I am not selecting the data column at all. Example of the query that takes too long:

        SELECT userdata.column1, pagegroup.name
        FROM userdata
        INNER JOIN page USING( pageid )
        INNER JOIN pagegroup USING( pagegroupid )

    Please help by explaining why it takes so long and what I can do to make it faster.

    Edit #1
    EXPLAIN returns the following:

        id  select_type  table      type    possible_keys        key      key_len  ref                         rows    Extra
        1   SIMPLE       userdata   ALL     pageid                                                             372420
        1   SIMPLE       page       eq_ref  PRIMARY,pagegroupid  PRIMARY  4        topsecret.userdata.pageid   1
        1   SIMPLE       pagegroup  eq_ref  PRIMARY              PRIMARY  4        topsecret.page.pagegroupid  1

    Edit #2

        SELECT u.field2, p.pageid
        FROM userdata u
        INNER JOIN page p ON u.pageid = p.pageid;
        /* 0.07 sec execution, 6.05 sec fetch */

        id  select_type  table  type    possible_keys  key      key_len  ref                 rows    Extra
        1   SIMPLE       u      ALL     pageid                                               372420
        1   SIMPLE       p      eq_ref  PRIMARY        PRIMARY  4        topsecret.u.pageid  1       Using index

        SELECT p.pageid, g.pagegroupid
        FROM page p
        INNER JOIN pagegroup g ON p.pagegroupid = g.pagegroupid;
        /* 9.37 sec execution, 60.0 sec fetch */

        id  select_type  table  type   possible_keys  key          key_len  ref                       rows  Extra
        1   SIMPLE       g      index  PRIMARY        PRIMARY      4                                  3646  Using index
        1   SIMPLE       p      ref    pagegroupid    pagegroupid  5        topsecret.g.pagegroupid   3     Using where

    Moral of the story: keep medium/long text columns in a separate table if you run into performance problems such as this one.
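
    A rough added calculation of why dragging the join through the page table hurts even without selecting data (the average row size is an assumption inside the stated 100k-200k range):

        # MyISAM stores the mediumtext inline with the row, so row lookups pull
        # the payload through the disk and caches even when it is not selected.
        page_rows = 10_000
        avg_data_bytes = 150 * 1024  # assumed midpoint of 100k-200k per row
        print(f"~{page_rows * avg_data_bytes / 2**30:.1f} GiB of page-table payload")  # ~1.4 GiB

    Moving the mediumtext into its own table shrinks the joined rows to a few dozen bytes each, which is exactly the "moral of the story" above.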

  • Performance problems when running Java desktop applications on Citrix Metaframe

    - by demetriusnunes
    We have a desktop Java application running within a Citrix MetaFrame server farm, and the performance, especially while starting up the app, is very unreliable. Sometimes it takes 15 seconds and sometimes it takes over a minute. It's really unpredictable. Is there any way to optimize running Java desktop applications within Citrix MetaFrame terminal server sessions to a more reliable performance level? Are there any optimizations directed specifically at Java, such as pre-loaded JVMs or something like that? Any help would be greatly appreciated.

  • Need help diagnosing network performance issues

    - by tokes
    I am currently working in a developing country as a system analyst for a government department. My area of expertise is software projects, but I've come across a few issues with the network setup in my office. (Unfortunately, being a developing country, there's not a lot of professional help available for this sort of thing.)
    Most recently, I am trying to diagnose a problem with slowness on the network. Our office is connected to the internet via an ADSL wireless modem/router (called Router). The modem is connected via ethernet to a switch (called Switch). The modem also acts as a wireless access point (called Wireless1), though because it is in a room at the end of the floor, its range is limited. There are ethernet ports installed around the office; the cables all lead back to the same switch. In closer vicinity to the bulk of the client computers, there is another wireless router that acts as an access point for those clients (called Wireless2). That router is connected via ethernet to a wall port, and therefore to Switch. There is also a Windows server which acts as a DNS server (called DNSBox), located in the same room and connected directly to Switch.

        ---Internet---- Router/Wireless1 (192.168.10.1)
                           |
                        Switch ----- DNSBox (192.168.10.4)
                           |-------- Other clients
                           |-------- Wireless2 (192.168.10.198)

    One final thing to mention about the network setup: all clients are configured with manual IP addresses. Their router/gateway is set to the IP address of Router, and their DNS server is set to the IP address of DNSBox (with a secondary DNS set to an external IP, that of our ISP's DNS server).
    Here are the symptoms we are experiencing:
        Clients connected to the Wireless2 AP experience slow and unstable connections to the internet. (Slow here is defined as speeds of ~1KB/s, though ping response times seem normal.)
        Clients connected via ethernet to Switch also experience the same slowness.
        Clients connected to the Wireless1 AP (i.e. connecting via wireless directly to the ADSL modem) experience normal connections to the internet.
        Clients connected via ethernet to Router (i.e. connecting via ethernet directly to the ADSL modem) also experience normal connections to the internet.
    I also tried to gauge the connection performance between two machines on the network via ethernet:
        A file transfer between two clients both directly connected to Switch was the fastest.
        A file transfer between one client directly connected to Switch and one client directly connected to Router (which is itself directly connected to Switch) performed much slower.
        A file transfer between two clients directly connected to Router also performed slowly.
    Things I have attempted in order to diagnose the problem:
        Restarted Switch -- no change.
        Unplugged ethernet jacks from Switch 4 at a time and tested the internet connection. The thought here was that perhaps a client on the network had contracted a virus and was spamming the network with traffic. (Not very technical, I know.) Unfortunately I couldn't get any significant increase in performance using this method. There were a couple of times when it seemed better, but then the connection speed quickly dropped back to a slow/dead pace. I didn't want to unplug all jacks from Switch because I was concerned that users might be affected, or that I would re-plug the jacks incorrectly (should I even be worried about that? A port is a port on a switch, right?).
        Swapped the ethernet cable used to connect Router to Switch -- no change in performance.
        Swapped the port used on Switch for Router -- no change in performance.
    Anyone got any ideas on what this could be? Should I be mentioning specific brand names/models of the hardware used? Virus outbreaks are common in this country/office -- what could I be doing to figure out if a virus is at fault? If it is a virus, it doesn't seem to be generating a lot of traffic to/from the internet, because a) I can still get a good speed if I am directly connected to Router / Wireless1 and b) our ISP data usage has not risen suspiciously. Thanks for your help!

    Update #1
    Here are the specs of some of the hardware:
        Switch is an Edimax ES3132RL 32-Port 10/100 Rackmount Switch
        Router is a D-Link DSL-G604T

    Update #2
    I just tried unplugging everything except a laptop and Router from Switch. Speeds are still slow. I guess that means that Router / Switch are not being flooded? It seems more and more likely that the cause is something to do with the interaction between Router and Switch. However, I still can't find any useful resources on setting the LAN speed for either (and I'm not well-versed in these advanced networking configurations).
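
    Since the file-transfer comparisons above were done by hand, here is a minimal one-way throughput probe between two LAN machines (an added sketch; the port and transfer size are arbitrary). Run server mode on one box, then client mode pointed at it:

        # Usage: python probe.py server
        #        python probe.py client <server-ip>
        import socket
        import sys
        import time

        PORT = 50007            # arbitrary free port
        CHUNK = b"\0" * 65536
        TOTAL = 100 * 2**20     # send 100 MiB

        if sys.argv[1] == "server":
            srv = socket.socket()
            srv.bind(("", PORT))
            srv.listen(1)
            conn, _ = srv.accept()
            received, start = 0, time.time()
            while True:
                data = conn.recv(65536)
                if not data:
                    break
                received += len(data)
            print(f"{received / (time.time() - start) / 1e6:.1f} MB/s")
        else:
            cli = socket.create_connection((sys.argv[2], PORT))
            sent = 0
            while sent < TOTAL:
                cli.sendall(CHUNK)
                sent += len(CHUNK)
            cli.close()

    Repeating the same pair of endpoints through Switch and then through Router gives hard numbers for the three transfer cases described above.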

  • Microsoft Office 2010 start page numbering on page 5

    - by Brian Byrne
    This is driving me absolutely insane. I want to start the page numbering on page 5 and for it to continue to the end of the document. I've tried using page/section breaks, but that didn't work. I've also tried pressing Ctrl + F9 while in the footer and adding a field in the format

        { if{ page } < 5 "{ page *roman }" "{ page }" }

    but that doesn't work either. Does anyone have any ideas? I've been trying to fix this for the past hour.

  • Windows XP IIS5 performance across Network

    - by davidsleeps
    Hi, just wondering whether Windows XP with IIS5 running needs any extra configuration to be suitable as a web server. I'm not considering using this for anything other than a web server on a small network for testing, development, etc.
    One of the reasons I'm concerned, though, is that we've deployed an ASP.NET application to a workstation with Windows XP, and running the application using a browser on the machine itself (accessing it through localhost/myApp/page.aspx and not through the network) runs the application really quickly. If another machine on the LAN accesses the same page (using http://ComputerName/myApp/page.aspx), the whole application runs noticeably slower, yet the computers are connected on a gigabit switch, so I wouldn't have thought network latency or bandwidth could be an issue. Does Windows XP need anything enabled or changed in its network settings for it to work correctly?

  • Slow performance of PHP directory operations on a virtual machine (Ubuntu libvirt)

    - by thonixx
    Some days ago I installed an Ubuntu server and two virtual machines running under libvirt. Everything works fine except for one performance problem: every time I call a PHP script that performs directory operations, those operations are very slow. Here is an example: http://zother.white-tiger.ch/
    And here you see an example without a directory operation and how fast it is: http://michaeltanner.ch/
    It's all on the same virtual server. The virtual machine uses 6 cores (8 are available) and 7500 megabytes of RAM (8 gigabytes are available). The disk image format is qcow2. How can I improve the performance?
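
    A quick added sketch to separate raw filesystem traversal cost from PHP itself: run it inside the guest and on the host over comparable directory trees (the path below is hypothetical; point it at whatever the PHP script scans):

        # Time a bare directory walk, no PHP involved.
        import os
        import time

        TARGET = "/var/www"  # assumed path, adjust to the real document root

        start = time.time()
        files = 0
        for _root, _dirs, names in os.walk(TARGET):
            files += len(names)
        print(f"walked {files} files in {time.time() - start:.2f}s")

    If the bare walk is already slow inside the guest, the problem lies in the storage stack rather than in PHP.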

  • Performance Alert Writing to event Log but not running program

    - by TooFat
    I followed the instructions here (How to create and configure performance alerts in Windows Server 2003) to set up an alert if the available logical disk space on one of my drives goes below a certain number. I selected the option to write to the application event log, selected the "run this program" option, and put in the path to a script that sends me an email. If I copy the path to the script and run it manually, everything works and I get the email. When I start the alert, I can see that the limit I set is being exceeded and the logs are being written to the application log, but the email is never sent. I have the "run as" user and password set to a domain admin. If I set the "run this program" path to C:\Windows\System32\calc.exe, it also doesn't start up the calculator. The Performance Logs and Alerts service is running as Local Admin with "allow service to interact with desktop" enabled. What am I doing wrong?

  • How to troubleshoot performance issues of PHP, MySQL and generic I/O

    - by jbx
    I have a WordPress-based website running on shared hosting. Its response time is very decent (around 2s to retrieve the HTML page and 5s to load all the resources). I was planning to move it to a dedicated virtual server (Ubuntu 12.04 LTS), which should theoretically improve things and make them more consistent given it's not shared. However, I observed severe performance degradation, with the page taking 10 seconds to be generated.
    I ruled out network issues by editing /etc/hosts on the server and mapping the domain to 127.0.0.1. I used the Apache load tester ab to get the HTML, so JS, CSS and images are all excluded. It still took 10 seconds.
    I have ZPanel installed on the server, which also uses MySQL, and its pages come up quite fast (1.5s), as does phpMyAdmin. Performing some queries on the WordPress database directly through phpMyAdmin returns them quite fast too, with query times in the 10 to 30 millisecond region. Memory is also sufficient, with only 800MB used of the 1GB physical memory available, so it doesn't seem to be a swap issue either. I have also installed APC to try to improve PHP performance, but it didn't have any effect.
    What else should I look for? What could be causing this degradation in performance? Could it be some kind of I/O issue, since I am running on a cloud-based virtual server? I wish to be able to raise the issue with my provider, but without showing actual data from some diagnosis I am afraid he will just blame my application.
    UPDATE with sar output (every second) when I did an HTTP request:

        02:31:29     CPU     %user     %nice   %system   %iowait    %steal     %idle
        02:31:30     all      0.00      0.00      0.00      0.00      0.00    100.00
        02:31:31     all      2.22      0.00      2.22      0.00      0.00     95.56
        02:31:32     all     41.67      0.00      6.25      0.00      2.08     50.00
        02:31:33     all     86.36      0.00     13.64      0.00      0.00      0.00
        02:31:34     all     75.00      0.00     25.00      0.00      0.00      0.00
        02:31:35     all     93.18      0.00      6.82      0.00      0.00      0.00
        02:31:36     all     90.70      0.00      9.30      0.00      0.00      0.00
        02:31:37     all     71.05      0.00      0.00      0.00      0.00     28.95
        02:31:38     all     14.89      0.00     10.64      0.00      2.13     72.34
        02:31:39     all      2.56      0.00      0.00      0.00      0.00     97.44
        02:31:40     all      0.00      0.00      0.00      0.00      0.00    100.00
        02:31:41     all      0.00      0.00      0.00      0.00      0.00    100.00

    My suspicion that this comes from an I/O-related issue is also because a caching plugin I use to reduce the number of queries to the database, by precompiling PHP pages, is actually making things worse instead of better. It seems that file access is making things worse.

  • VMWare Workstation Performance

    - by tekiegreg
    Hi there, a while ago I upgraded my laptop to Windows 7 x64 from Windows XP 32-bit edition, but not before virtualizing the physical installation, and I continue to run it under VMware Workstation today. The performance of the resulting VM is just absolutely atrocious! Since the machine is virtual, I've uninstalled a lot of software that's no longer needed in an effort to reduce RAM usage, but in general the responsiveness seems sluggish. I also run the virtual machine on its own separate HD that is seldom used by the host OS. I'm just hoping for some general tips on increasing VMware performance, anywhere. Thoughts?
    EDIT: Both of the below answers were excellent starting points for me. However, I did like the selected answer's strategies on disk management. I am running the virtual machine on a separate external hard disk, so likely I'm going to have to reconfigure somehow. Thanks all!

  • Optimizing PHP<>MySQL performance

    - by BarsMonster
    I am trying to optimize my PHP<->MySQL performance with this test script:

        <?php
        $res = '';
        for ($i = 0; $i < 100; $i++) { // iteration count
            $res .= print_r(loadRow("select body_ru from articles where id > $i * 50 limit 100"), true);
        }
        print_r($res);
        ?>

    I have APC, and the articles table has an index on id. Also, all these queries hit the query cache, so MySQL performance on its own is great. But when I benchmark this script with ab -c 10 -t 10, I get:
        100 iterations: ~100 req/sec (~10,000 MySQL queries per second)
        5 iterations: ~200 req/sec
        1 iteration: ~380 req/sec
        0 iterations: ~580 req/sec
    I've tried disabling persistent connections in PHP; it made things slightly slower. So, how can I make this faster, given that MySQL is not limiting performance here?
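
    An added back-of-envelope reading of those numbers: at concurrency 10, ~100 req/sec means ~100 ms per request, so the 100 queries cost under a millisecond each once the zero-query baseline is subtracted:

        # Per-query overhead implied by the ab results above (100-query case).
        concurrency = 10
        req_per_sec_100q = 100  # measured with 100 queries per request
        req_per_sec_0q = 580    # measured with no queries at all

        ms_per_req = concurrency / req_per_sec_100q * 1000  # ~100 ms per request
        baseline = concurrency / req_per_sec_0q * 1000      # ~17 ms of pure PHP time
        print(f"~{(ms_per_req - baseline) / 100:.2f} ms per query")  # ~0.83 ms

    With each cached query costing well under a millisecond of round-trip and API overhead, the realistic lever is issuing fewer, larger queries per request rather than making individual queries faster.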

  • Performance effects of compressing Program Files on Windows / NTFS

    - by SRobertJames
    What are the performance effects of compressing Program Files on Windows NTFS? On a fast, multicore machine, the overhead of decompression is minimal. Machines are generally disk-bound, and if you can reduce the disk load by compression, you often speed things up. (Microsoft says that the built-in compression of Windows Search indexes actually improves speed for this reason.)
    On the other hand, Windows' virtual memory is complicated. Perhaps if files are compressed, they can't simply be paged out. And there may be other issues.
    In short: on a fast, multicore machine with a relatively slow disk, what performance effects will compressing Program Files have?

  • Verify server performance

    - by George Kesler
    I'm looking for a quick and SIMPLE way to verify that new servers are performing as expected. The most important metric is disk performance; second is network performance. I'm trying to prevent problems caused by misconfiguration of RAID arrays, NIC teaming, etc. The solution should work with both physical and virtual servers. I don't need sophisticated analysis with different workloads, just one set of benchmarks which I would run against a reference server and later compare to new ones. One problem is that most benchmarks don't give accurate results when running on a VM.

  • General video performance affected on Mac OSX 10.5 (PowerBook G4)

    - by r0ca
    Hi all, I'm quite new to Mac and I just got a PowerBook G4 for free. I installed OSX 10.5 on it, and for the first two weeks everything was going fairly smoothly, even if this machine is similar to a P3. I'm not expecting awesome video performance, but I'd at least like to be able to watch some videos on YouTube. Yesterday night I installed Office 2008 for Mac, and this morning, even after a reboot, my computer is much slower than it used to be. I watched a YouTube video and the framerate was 1:1. I also noticed it on Flash ads; they're much slower! Is there anything I can do to increase video performance, see the list of running processes and which ones are taking the most GPU or CPU, see what's taking the most RAM, and so on? What would you Mac pros do on an old laptop with OSX 10.5? Thanks!

  • Skyrim: Heavy Performance Issues after a couple of location changes

    - by Derija
    Okay, I've tried different solutions: ENB Series, removing certain mods, checking my FPS rate, monitoring my resources, .ini tweaks. It's all just fine; I don't see what I'm missing.
    A couple of days ago, I bought Skyrim. Before I bought the game, I admit I had a pirated copy, because my girlfriend actually wanted to buy me the game as a present, then said she didn't have enough money. Sick of waiting, I decided to buy the game myself. The ridiculous part is, it worked better cracked than it does now uncracked.
    As the title suggests, after entering and leaving houses a couple of times, my performance obviously drops extremely. My build is just fine: an Intel i5 quad-core processor, an NVIDIA GTX 560 Ti from Gigabyte (factory-overclocked, but manually downclocked to usual settings using the appropriate Gigabyte software, which fixed the CTD issues I had before with both Skyrim and BF3), and 4GB RAM.
    A website about game tweaks suggested that my HDD may be too slow. A screenshot of a Windows Performance Index sample with the caption "This is likely to cause issues" showed an HDD with a performance index of 5.9, the exact same as mine, so I was toying with the thought of purchasing an SSD instead, loading games onto it that really need it, like Skyrim, and hoping it would do the trick. Unfortunately, SSDs are likewise expensive compared to "normal" HDDs...
    I'm really getting desperate about it. My save is gone because the patches made it impossible to load saves from the unpatched version, and I already saved more than 80 times despite being only level 8, just because every time I interact with a door leading to another location I'm scared the game will drop again. I can't even play for 30 minutes straight anymore; it's just no fun at all. I researched for a couple of days before deciding to post my question here. Any help is appreciated; I don't want to regret having bought the game, since it actually is the best game I've played possibly ever. Sincerely.
    P.S.: I don't think it's necessary to say, but still: of course I'm playing on PC.
    P.P.S.: After monitoring my PC resources, including CPU, HDD, and GPU usage, I don't see any changes even after the said event.
    P.P.P.S.: This question was originally posted elsewhere, where I was advised to ask it here.

    Read the article

< Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >