Search Results

Search found 60072 results on 2403 pages for 'application performance'.

Page 205/2403

  • Using Microsoft's Chart Controls In An ASP.NET Application: Serializing Chart Data

    In most usage scenarios, the data displayed in a Microsoft Chart control comes from some dynamic source, such as from a database query. The appearance of the chart can be modified dynamically, as well; past installments in this article series showed how to programmatically customize the axes, labels, and other appearance-related settings. However, it is possible to statically define the chart's data and appearance strictly through the control's declarative markup. One of the demos examined in the Getting Started article (http://www.4guysfromrolla.com/articles/072209-1.aspx) rendered a column chart with seven columns whose labels and values were defined statically in the <asp:Series> tag's

    Read the article

  • Optimal disk partitions for database setup (15 Drives)

    - by Jason
    We are setting up a new database system and have 15 drives to play with (+2 on-board for the OS). With a total of 15 drives, would it be better to set up all 14 as one RAID-10 block (+1 hot spare), or to split them into two RAID-10 sets, one for data (8 disks) and one for logs/backups (6 disks)? My question boils down to the following: is there a specific point where having more drives in a single RAID-10 set will outperform having the drives broken into smaller RAID-10 sets?

    Read the article

  • Linux Read-Ahead Downsides

    - by JPerkSter
    Hi everyone, hope all is well. I have a question regarding read-ahead caching. Are there any downsides to raising the size of the read-ahead cache? On our farm we're currently running at 256, and upon raising it higher we are seeing significant throughput gains.

        [root@server ~]# hdparm -tT /dev/sda
        /dev/sda:
         Timing cached reads:   7352 MB in 2.00 seconds = 3677.62 MB/sec
         Timing buffered disk reads:  244 MB in 3.10 seconds = 78.68 MB/sec
        [root@server ~]# blockdev --setra 10240 /dev/sda
        [root@server ~]# hdparm -tT /dev/sda
        /dev/sda:
         Timing cached reads:   11452 MB in 2.00 seconds = 5728.52 MB/sec
         Timing buffered disk reads:  422 MB in 3.17 seconds = 133.04 MB/sec

    We are running on kernel 2.6. Thanks!
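
    For anyone who wants to script the change rather than shell out to blockdev: the same read-ahead value can be read and set through the BLKRAGET/BLKRASET ioctls, which is what blockdev uses internally. A minimal C++ sketch, assuming /dev/sda, the 10240-sector value from the example above, and root privileges; note the setting does not persist across a reboot:

        #include <fcntl.h>
        #include <linux/fs.h>     // BLKRAGET / BLKRASET ioctl numbers
        #include <sys/ioctl.h>
        #include <unistd.h>
        #include <cstdio>

        int main() {
            const char* dev = "/dev/sda";          // device from the example above
            int fd = open(dev, O_RDONLY);
            if (fd < 0) { perror("open"); return 1; }

            long ra = 0;
            if (ioctl(fd, BLKRAGET, &ra) == 0)     // current read-ahead, in 512-byte sectors
                std::printf("%s read-ahead: %ld sectors\n", dev, ra);

            // Equivalent to `blockdev --setra 10240 /dev/sda`; the value is passed directly, not by pointer.
            if (ioctl(fd, BLKRASET, 10240UL) != 0)
                perror("BLKRASET");

            close(fd);
            return 0;
        }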

    Read the article

  • How to benchmark kernel (-Os vs -O2)

    - by NightwishFan
    It seems logical to me that compiling a 64-bit kernel to optimize for size (-Os) might help overall (my distro of choice uses -O2). With 64-bit you have the benefit of more registers and memory, and smaller code should mean less cache contention than normally optimized code. I have a kernel compiled like this and it seems excellent. However, my question is: how can I prove this? I like using Phoronix for "real world" benchmarks, so I would prefer test cases like that. What should I pick to test? Does anyone else have any alternatives? Thank you very much in advance.
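
    Alongside Phoronix runs, one crude way to get a directly comparable number is to boot each kernel build in turn and time a fixed batch of system calls, since the syscall entry/exit path is pure kernel code and will reflect the -Os vs -O2 choice. A rough C++ sketch (the iteration count is arbitrary; run it several times under each kernel and compare):

        #include <cstdio>
        #include <ctime>
        #include <sys/syscall.h>
        #include <unistd.h>

        // Time N raw getpid() syscalls; run the same binary under each kernel build.
        int main() {
            const long N = 5000000;
            timespec start{}, stop{};
            clock_gettime(CLOCK_MONOTONIC, &start);
            for (long i = 0; i < N; ++i)
                syscall(SYS_getpid);               // forces a real round trip into the kernel
            clock_gettime(CLOCK_MONOTONIC, &stop);
            double secs = (stop.tv_sec - start.tv_sec) + (stop.tv_nsec - start.tv_nsec) / 1e9;
            std::printf("%ld syscalls in %.3f s (%.1f ns/call)\n", N, secs, secs * 1e9 / N);
            return 0;
        }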

    Read the article

  • How do I improve picture quality while streaming live football (soccer) from my Dell D600 to an HDTV?

    - by Bob
    I have fibre broadband with speeds up to 38 Mb/s. My Dell D600 has its maximum 2 GB of RAM and an ATI Mobility RADEON 9000 4x AGP 32 MB card. Its TV support is listed as NTSC or PAL in S-video and composite modes with a 7-pin mini-DIN connector (optional S-video to composite video adapter cable), plus a VGA port, which I am using at the moment. The laptop runs Windows XP on an 80 GB hard drive with only Windows, the necessary updates, and anti-virus software on it. There is HDMI on the TV, but not on the laptop. Fairly slow-moving and close-up pictures aren't too bad, but when the movement is fast (a shot on goal) or in the distance, I can't see the ball and the images go out of focus.

    Read the article

  • My laptop is supposed to have 6 GB of RAM but only 2 GB is detected

    - by monablo
    My laptop is supposed to have 6 GB of RAM but it only detects 2 GB of it, so I don't know what to do. I searched the web for an answer, and what I found is that there are two possible problems: 1) the 4 GB RAM card is broken and not working, or 2) the two RAM cards have a different path, so Windows chose the path of the smaller RAM card, which would be the 2 GB one. I don't know what that path is in the first place, so I really don't know what to do. How would I troubleshoot this, and work out why the 4 GB stick is not being detected? My laptop's specifications are as follows:

    Windows 7 Home Premium 64-bit operating system
    Processor: Intel(R) Core(TM) i7 CPU Q740 @ 1.73 GHz
    Installed memory (RAM): 2.00 GB (but it is supposed to be 6 GB)
    The laptop is a Dell, model N5010

    Read the article

  • How to prevent Windows Explorer from loading complete executable files

    - by eli.work
    On Windows Vista, when browsing to a network folder containing executables, Windows Explorer seems to load each file completely just to be able to show the executable's icon (the Resource Monitor indicates loads of traffic while the directory is loading). On XP, only a part of the file is loaded. Is there a way to avoid the complete loading of these files? Note that disabling my anti-virus does not help.

    Read the article

  • Upgrading a MacBook Pro

    - by moray95
    I'm using a Late 2011 13" MacBook Pro with an Intel i5 @ 2.4 GHz and 4 GB of 1333 MHz RAM. The computer has started to show its age. I was going to upgrade the RAM, but when Mavericks came out the RAM problem went away; now, though, it has started to get slower and slower. So I was thinking of upgrading my RAM to at least 8 GB, and my CPU. I have two questions about that. As I have 1333 MHz RAM installed by default, the motherboard presumably does not support 1666 MHz RAM, but can I use 1666 MHz modules, and if I can, will it make any difference? Also, is it possible to upgrade the CPU of my computer? If yes, how can I find a CPU compatible with the other components?

    Read the article

  • Unable to set .NET 4 on an Application Pool remotely; works locally on the server

    - by Robin Wassén-Andersson
    I have set up Remote Administration for IIS successfully and connected to it. For some reason, .NET Framework 4 doesn't show up as an option when configuring the Application Pools remotely, even though .NET 4 is installed on both the server and the client (not that the client should matter). If I log in to the server with RDP and configure the Application Pools, it works as intended and the option shows up. Even more odd, if I edit an Application Pool that already runs .NET 4, it does show up as an alternative (with strangely formatted text, though; it just says v4.0 instead of .NET Framework v4.0.30319). How should I proceed to solve this?

    Read the article

  • How to host an ASP.NET application externally?

    - by Josh
    I have an ASP.NET application that I can get to locally by going to 192.168.1.102:81/TestApp. I would like to host the application externally by going to domain.com:81/TestApp (I already have my domain pointing to my router and this works fine - I have apache running on port 80 on another server). I modified the router settings to point any request coming through port 81 to 192.168.1.102. I am still having trouble accessing the ASP.NET site (I get the error message that "This link appears to be broken"). Am I missing something? How can I redirect domain.com:81/TestApp to my ASP.NET application? Thanks.

    Read the article

  • High frequency, kernel bypass vs tuning kernels?

    - by Keith
    I often hear tales about high-frequency trading shops using network cards that do kernel bypass. However, I also often hear about them using operating systems where they "tune" the kernel. If they are bypassing the kernel, do they still need to tune it? Is it a case of doing both because, whilst the network packets bypass the kernel thanks to the card, there is still everything else going on that kernel tuning would help with? In other words, they use both approaches: one just speeds up network activity, and the other makes the OS generally more responsive/faster? I ask because a friend of mine who works in this industry once said they don't really bother with kernel tuning anymore because they use kernel-bypass network cards. This didn't make much sense to me, as I thought you would always want a faster kernel for all the calculations still handled by the CPU.

    Read the article

  • Recommendation for tuning hundreds of SQL databases

    - by wayne
    I'm running several SQL servers, each hosting a few hundred multi-gig databases for customers. They are all set up homogeneously as far as the schemas are concerned; however, customer usage of the data differs quite a lot from database to database. What would be the best way to auto-index/profile/tune this large number of databases? As there are at least 600 or more catalogs, I can't have someone manually profile and index each one as required by its usage patterns. I'm currently running SQL 2005 but will be moving to 2008, so solutions that work with either are fine.

    Read the article

  • How to measure startup time and order of Windows services on boot?

    - by djangofan
    I am not asking how to measure server startup time here. I am wondering if anyone knows of a tool that can measure and graph the startup time and order of all the Windows services during system startup. I saw a software program shown on my local Portland news last week that does this, but I am unable to remember what it was called or anything else about it. All I remember is that it was a "tech" news story to help computer users with their computers. So I know the software exists, and I am trying to find it.

    Read the article

  • Retrieve data from an ASP.NET application using the ADO.NET 2.0 disconnected model

    - by nikolaosk
    This is the second post in a series of posts on ADO.NET 2.0. Have a look at the first post if you like. In this post I am going to investigate the "disconnected" model. When I say "disconnected" I mean DataSets. DataSets are in-memory representations of tables in a particular database. A DataSet contains a Tables collection, and each DataTable in turn contains a Rows collection and a Columns collection. So initially you connect to the database, get the data to...(read more)

    Read the article

  • iotop for Linux kernel 2.6.18

    - by Lightsauce
    So it has come to my attention that iotop isn't available for 2.6.18, since it needs a kernel of at least 2.6.20 and requires Python 2.6+. I've done some research and came across this article: http://lserinol.blogspot.com/2009/09/io-usage-per-process-on-linux.html According to this, if these processes have I/O stats in /proc/pid#/io (where pid# is the process number), it's doable regardless of the kernel version. So, in reality, I could upgrade Python to 2.6 and test out iotop. However, my flavor of Linux, CentOS release 5.5 (Final), currently only supports Python 2.4.3-44.el5. If I were to uninstall that via yum, it doesn't look pretty: it ends up wanting to remove 235 packages, most of which are very important! I read in one place online (I forget the URL from yesterday) that you can install Python 2.6+ in parallel with this one and have the iotop rpm use that. Well, I didn't choose that route.

    I figured, what the heck, let's write iotop (not copying it, but reverse engineering it without actually looking at its code or seeing it in use) in bash. I thought it would just grab the /proc/pid#/io files and parse the stats. So I wrote a script to grab the top 10 rchar, wchar, read_bytes, and write_bytes values by collecting these stats from all the /proc/pid#/io files, sorting them by each metric, then grabbing the 10 highest values. The conclusion: the data seems completely useless.

    Does anybody know any resources for advanced Linux where I can figure out how to take these /proc/pid#/ directories and work out what they are doing with I/O on the disk? My main goal is to figure out what exactly is causing high load on my disk. I just know it's on the / partition (/dev/sda2 in this case), and I'm not really sure how to narrow it down without the help of iotop. If I run iostat to grab metrics for 1 minute, every second, the first result it gives me shows a high kB_read/s, which makes me think it's mostly reading. However, if I watch the updates it gives me every second, it actually only shows values for kB_wrtn/s. This makes me think the initial value iostat gives me is misleading.
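
    For what it's worth, the likely reason the bash numbers looked useless is that the counters in /proc/pid#/io are cumulative since each process started, so a single snapshot mostly ranks long-lived processes; iotop works by sampling twice and showing the difference. A rough C++ sketch of that idea, needing no Python at all (the 5-second interval is arbitrary, and reading other users' io files generally requires root):

        #include <dirent.h>
        #include <unistd.h>
        #include <algorithm>
        #include <cstdio>
        #include <cstdlib>
        #include <fstream>
        #include <map>
        #include <string>
        #include <vector>

        // One /proc/<pid>/io snapshot: bytes actually hitting the block layer.
        struct IoSample { long long read_bytes = 0, write_bytes = 0; };

        static std::map<int, IoSample> snapshot() {
            std::map<int, IoSample> out;
            DIR* proc = opendir("/proc");
            if (!proc) return out;
            while (dirent* e = readdir(proc)) {
                int pid = std::atoi(e->d_name);
                if (pid <= 0) continue;                       // skip non-PID entries
                std::ifstream io("/proc/" + std::to_string(pid) + "/io");
                std::string key;
                long long value;
                IoSample s;
                while (io >> key >> value) {
                    if (key == "read_bytes:")  s.read_bytes = value;
                    if (key == "write_bytes:") s.write_bytes = value;
                }
                out[pid] = s;
            }
            closedir(proc);
            return out;
        }

        int main() {
            auto before = snapshot();
            sleep(5);                                          // sample interval
            auto after = snapshot();

            // Rank processes by bytes written during the interval, not since they started.
            std::vector<std::pair<long long, int>> delta;
            for (auto& [pid, now] : after) {
                auto it = before.find(pid);
                if (it == before.end()) continue;              // new process, no baseline
                delta.push_back({now.write_bytes - it->second.write_bytes, pid});
            }
            std::sort(delta.rbegin(), delta.rend());
            for (size_t i = 0; i < delta.size() && i < 10; ++i)
                std::printf("pid %d wrote %lld bytes in 5s\n", delta[i].second, delta[i].first);
            return 0;
        }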

    Read the article

  • How to monitor I/O svctm every 5 minutes using Nagios?

    - by sabya
    I want to collect samples of iostat's svctm and await every 5 minutes from all of my servers and store them in Nagios. I want the values for what happened during the last 5 minutes, not since boot time (iostat's first output gives values since boot). How can I do this in Nagios? EDIT: The tps should NOT be calculated as the number of transactions since reboot divided by uptime. What I want is the number of transfers in the last X minutes divided by X*60.
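
    Nagios only stores whatever a check prints, so one approach is a small plugin that takes two /proc/diskstats samples the required interval apart and prints per-interval figures, using the same arithmetic iostat does: await = (delta read_ms + delta write_ms) / delta ios and svctm = delta time_doing_io / delta ios. A rough C++ sketch; the device name and the 50/100 ms warning/critical thresholds are placeholders, and a real 5-minute check would persist the previous sample to a state file rather than sleeping inside the plugin:

        #include <unistd.h>
        #include <cstdio>
        #include <fstream>
        #include <sstream>
        #include <string>

        // Cumulative counters for one device from /proc/diskstats.
        struct DiskStats { unsigned long long reads = 0, read_ms = 0, writes = 0, write_ms = 0, io_ms = 0; };

        static DiskStats read_stats(const std::string& dev) {
            std::ifstream f("/proc/diskstats");
            std::string line;
            DiskStats s;
            while (std::getline(f, line)) {
                std::istringstream in(line);
                unsigned maj, mnr;
                std::string name;
                unsigned long long rd, rd_mrg, rd_sec, rd_ms, wr, wr_mrg, wr_sec, wr_ms, inflight, io_ms, wtd_ms;
                in >> maj >> mnr >> name >> rd >> rd_mrg >> rd_sec >> rd_ms
                   >> wr >> wr_mrg >> wr_sec >> wr_ms >> inflight >> io_ms >> wtd_ms;
                if (name == dev) {
                    s.reads = rd; s.read_ms = rd_ms; s.writes = wr; s.write_ms = wr_ms; s.io_ms = io_ms;
                    break;
                }
            }
            return s;
        }

        int main() {
            const std::string dev = "sda";        // device to watch (placeholder)
            const unsigned interval = 300;        // 5 minutes; a real check would use a state file instead
            DiskStats a = read_stats(dev);
            sleep(interval);
            DiskStats b = read_stats(dev);

            unsigned long long ios = (b.reads - a.reads) + (b.writes - a.writes);
            double await = ios ? double((b.read_ms - a.read_ms) + (b.write_ms - a.write_ms)) / ios : 0.0;
            double svctm = ios ? double(b.io_ms - a.io_ms) / ios : 0.0;

            // Nagios plugin convention: one status line plus perfdata, exit 0/1/2 for OK/WARN/CRIT.
            int rc = (await > 100) ? 2 : (await > 50) ? 1 : 0;
            std::printf("DISK %s await=%.2fms svctm=%.2fms | await=%.2f svctm=%.2f\n",
                        dev.c_str(), await, svctm, await, svctm);
            return rc;
        }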

    Read the article

  • Nginx max worker_connections and access log

    - by MotoTribe
    I'm troubleshooting an issue with my site. I can see in the nginx error log that the max worker_connections limit was reached when the site went down, but I'm not seeing an increase in requests during that time in the nginx access log. Does that mean the MySQL database had a bottleneck at that time that caused the requests to queue up? Or would nginx simply not log any requests made after the max worker_connections limit was reached?

    Read the article

  • Retrieve data from an ASP.NET application using the ADO.NET 2.0 connected model

    - by nikolaosk
    I have been teaching Entity Framework, LINQ to SQL, LINQ to Objects, and LINQ to XML for some time now. I am a huge fan of LINQ to Entities, and I am using Entity Framework as my main data access technology. Entity Framework is in its second version right now, and I can accomplish most of the things I need. I am sure the guys on the ADO.NET team will implement many more features in the future. I am a strong believer that you cannot really understand the benefits of LINQ to SQL or LINQ to Entities unless you...(read more)

    Read the article

  • Client-server application design issue

    - by user2547823
    I have a collection of clients on the server side, and there are some objects that need to work with that collection - adding and removing clients, sending messages to them, updating connection settings, and so on. They may perform these actions simultaneously, so a mutex or another synchronization primitive is required. I want to share one instance of the collection between these objects, but all of them require access to the collection's private fields. I hope this code sample makes it clearer [C++]:

        class Collection {
            std::vector< Client* > clients;
            Mutex mLock;
            ...
        };

        class ClientNotifier {
            void sendMessage() {
                mLock.lock();
                // loop over clients and send a message to each of them
            }
        };

        class ConnectionSettingsUpdater {
            void changeSettings( const std::string& name ) {
                mLock.lock();
                // if a client with this name is inside the collection, change its settings
            }
        };

    As you can see, all these classes require direct access to Collection's private fields. Can you give me advice on how to implement this behaviour correctly, i.e. keeping Collection's interface simple without it knowing about its users?
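
    A common answer to this kind of question (offered here only as a sketch, not taken from the thread) is to keep both the mutex and the vector private and have Collection expose a small set of thread-safe operations, including a forEach that runs a caller-supplied action under the lock; the notifier and the settings updater then depend only on the public interface. Client::send and Client::name below are made-up stand-ins for whatever the real client class offers:

        #include <algorithm>
        #include <functional>
        #include <mutex>
        #include <string>
        #include <vector>

        class Client {
        public:
            // Stand-ins for the real client class; both methods are hypothetical.
            void send(const std::string& /*message*/) { /* write to the client's socket */ }
            const std::string& name() const { return mName; }
        private:
            std::string mName;
        };

        class Collection {
        public:
            void add(Client* c) {
                std::lock_guard<std::mutex> lock(mLock);
                clients.push_back(c);
            }

            void remove(Client* c) {
                std::lock_guard<std::mutex> lock(mLock);
                clients.erase(std::remove(clients.begin(), clients.end(), c), clients.end());
            }

            // Run an action on every client while holding the lock, so callers
            // never touch the vector or the mutex directly.
            void forEach(const std::function<void(Client&)>& action) {
                std::lock_guard<std::mutex> lock(mLock);
                for (Client* c : clients)
                    action(*c);
            }

        private:
            std::vector<Client*> clients;
            std::mutex mLock;                 // lives entirely inside Collection
        };

        // Users of the collection see only its public interface.
        class ClientNotifier {
        public:
            explicit ClientNotifier(Collection& c) : collection(c) {}
            void sendMessage(const std::string& text) {
                collection.forEach([&](Client& client) { client.send(text); });
            }
        private:
            Collection& collection;
        };

        class ConnectionSettingsUpdater {
        public:
            explicit ConnectionSettingsUpdater(Collection& c) : collection(c) {}
            void changeSettings(const std::string& name) {
                collection.forEach([&](Client& client) {
                    if (client.name() == name) { /* update this client's settings */ }
                });
            }
        private:
            Collection& collection;
        };

    If holding the lock while running arbitrary caller code is a concern, forEach can instead copy the client list under the lock and run the action on the copy.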

    Read the article
