Search Results

Search found 60072 results on 2403 pages for 'application performance'.


  • High frequency, kernel bypass vs tuning kernels?

    - by Keith
    I often hear tales about high-frequency shops using network cards that do kernel bypass. However, I also often hear about them using operating systems where they "tune" the kernel. If they are bypassing the kernel, do they need to tune it? Is it a case of doing both because, whilst the network packets bypass the kernel thanks to the card, there is still everything else going on that kernel tuning would help with? In other words, they use both approaches: one speeds up network activity and the other makes the OS generally more responsive/faster? I ask because a friend of mine who works in this industry once said they don't really bother with kernel tuning anymore, because they use kernel-bypass network cards. This didn't make too much sense to me, as I thought you would always want a faster kernel for everything that isn't offloaded to the card.

    Read the article

  • High disk I/O activity in CentOS server

    - by triiim
    I have about 16 websites on a dedicated CentOS server, and I am having some problems during high-traffic hours; there seems to be a lot of disk I/O activity causing a general slowdown. I've installed atop and this is what I see at the bottom (the server has been restarted, that's why the values are so low):

        *** system and process activity since boot ***
         PID   RDDSK    WRDSK    WCANCL  DSK  CMD          1/18
        2176    1.7G     7.3G    854.4M   39  mysqld
         671   1248K     3.0G        0K   13  flush-8:0
         566      0K     1.1G        0K    5  jbd2/sda2-8
        2401  124.2M   529.1M    22408K    3  crond
        2032    2.2G   502.0M        0K   12  nginx
        2360  425.8M   115.3M     4188K    2  httpd

    flush-8:0 and jbd2/sda2-8 are the processes I see with iotop using 99% in the IO column, and they are the processes that write the most to the HDD (after MySQL). From what I found on Google this could be caused by some ext4-related bug; the current kernel is: Linux srvr.com 2.6.32-71.29.1.el6.x86_64 #1 SMP Mon Jun 27 19:49:27 BST 2011 x86_64 x86_64 x86_64 GNU/Linux. I asked the hosting support to update the kernel and they tried, but they now say that the server won't boot with the newly installed kernel and they had to go back to the previous one; they are not helping very much. Does anyone have an idea how I could solve the high disk usage caused by the flush-8:0 and jbd2/sda2-8 processes?
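    Since mysqld tops the write column above, one thing worth checking before blaming the kernel (a hedged sketch, not a definitive fix) is how aggressively MySQL fsyncs, since on ext4 every InnoDB log flush ultimately shows up as jbd2/flush journal activity:

        -- How often InnoDB flushes its log to disk (1 = fsync on every commit)
        SHOW GLOBAL VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
        -- Whether the binary log is fsynced per transaction
        SHOW GLOBAL VARIABLES LIKE 'sync_binlog';
        -- Cumulative bytes InnoDB has written since startup
        SHOW GLOBAL STATUS LIKE 'Innodb_data_written';

    If innodb_flush_log_at_trx_commit is 1 and the workload tolerates losing up to a second of transactions on a crash, setting it to 2 typically cuts journal traffic sharply; whether that trade-off is acceptable is a judgment call.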

    Read the article

  • When using RAID10 + BBWC why is it better to separate PostgreSQL data files from OS and transaction logs than to keep them all on the same array?

    - by Vlad
    I've seen the advice everywhere (including here and here): keep your OS partition, DB data files and DB transaction logs on separate discs/arrays. The general recommendation is to use RAID1 for OS, RAID10 for data (or RAID5 if load is very read-biased) and RAID1 for transaction logs. However, considering that you will need at least 6 or 8 drives to build this setup, wouldn't a RAID10 over 6-8 drives with BBWC perform better? What if the drives are SSDs? I'm talking here about internal server drives, not SAN.

    Read the article

  • What's throttling the database?

    - by Troels Arvin
    Hardware: Intel x86_64 with 192 GB of RAM. OS: CentOS 5.4 x86_64. DBMS: DB2 v9.7.1, 64-bit. During certain special workloads (e.g. parallel REORGs/RUNSTATS), I've seen the server transporting 450 MB/s at 25,000 IO/s (yes, there is probably some storage-system caching happening here) while all CPU cores were happily working in an even mix of usermode/wait. And disk benchmark tools can also bring some very satisfying bandwidth and IO/s numbers to the table. On the other hand, we also have another scenario: a single, rather complex query with at least one large table scan. db2's "list applications" reports that the query is Executing (not locked). IO: at most 10 MB/s, 500 IO/s; CPU: two cores in 99.9% wait state, all other cores 100% idle. The tables which the query reads from have been altered to have LOCKSIZE=TABLE, so I would think that lock-list work is zero. What's going on in such a situation? What tools/snapshots/... can I use to gain better insight into such a case?

    Read the article

  • MS-DOS application sending screen output to LPT printer

    - by gadget00
    We have an MS-DOS application (coded in FoxPro), and recently hit this glitch: the application's screen menu, for no reason, starts printing on an LPT Panasonic KX-1150 printer. It's a never-ending printout of all the screens of the application, as if the main output were being sent to the printer instead of the monitor! It creates an unnamed document with N/D pages and keeps printing forever. We have to turn the printer off and then kill the document in the spool to stop it... The printer is installed with a generic/text driver, and this has happened to us on both Windows XP and Windows 7. What can this be? Thanks in advance

    Read the article

  • What Application Indicators are available?

    - by user8592
    I installed Ubuntu 11.04 on one of my systems and I am using the Unity interface. Unity is working quite well so far, but I really miss panel applets for net speed, CPU temperature, and system monitoring. These applets are useful for viewing quick info. Unlike in 10.10, there is no other way to get this info onto the panel or the Unity launcher. There are solutions like screenlets and Conky, but they don't feel appropriate for a clean desktop look. If you know of any, please list third-party indicators, with links, so that they can be found.

    Read the article

  • SQL Server Full Text Search resource consumption

    - by Sam Saffron
    When SQL Server builds a full-text index, computer resources (IO/memory/CPU) are consumed. Similarly, when you perform full-text searches, resources are consumed. How can I get a gauge, over a 24-hour period, of the exact amount of CPU and IO (reads/writes) that full-text is responsible for, in relation to global SQL Server resource usage? Are there any perfmon counters, DMVs or Profiler traces I can use to help answer this question?
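    One partial approach (a hedged sketch, assuming SQL Server 2005+; it only captures the query side, not index population, which runs in a separate full-text daemon process you would watch with perfmon's Process counters) is to pull cumulative costs for full-text statements out of sys.dm_exec_query_stats:

        -- Cumulative CPU/IO for cached plans whose text uses full-text predicates.
        -- Numbers are totals since each plan entered the cache, so snapshot and
        -- diff them at the start and end of the 24-hour window.
        SELECT TOP 20
            qs.execution_count,
            qs.total_worker_time / 1000  AS total_cpu_ms,
            qs.total_logical_reads,
            qs.total_elapsed_time / 1000 AS total_elapsed_ms,
            st.text                      AS statement_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        WHERE st.text LIKE '%CONTAINS%' OR st.text LIKE '%FREETEXT%'
        ORDER BY qs.total_worker_time DESC;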

    Read the article

  • Unable to set .NET 4 on Application Pool from remote, works locally on server

    - by Robin Wassén-Andersson
    I have set up Remote Administration for IIS successfully and connected to it. For some reason, .NET Framework 4 doesn't show up as an option when configuring the Application Pools remotely, even though .NET 4 is installed on both server and client (not that the client should matter). If I log in to the server over RDP and configure the Application Pools, it works as intended: the option shows up. Even odder, if I edit an Application Pool that already runs .NET 4, it does show up as an alternative (with somewhat strangely formatted text though; it just says v4.0 instead of .NET Framework v4.0.30319). How should I proceed to solve this?

    Read the article

  • Email server for a huge number of subscribers

    - by bogha
    My company is thinking of providing a free email account for each of its customers. As a new company, we assume our corporate email system will be MS Exchange, which will support about 1000 employees. They are asking why we shouldn't add the customer list as part of the Exchange users. My suggestion was to separate the two systems: for the corporate side we can use Exchange, but for the customers (around 30000) we should use a Linux-based system. My only argument was that Linux can be used for enterprise services like this, where Microsoft may fail. What do you suggest? And if you are with me on choosing Linux as the server platform, what do you suggest as an alternative to Exchange on Linux? Thank you.

    Read the article

  • How to calculate required switch speed based on network usage?

    - by tobefound
    I have a 48-port HP ProCurve Switch 2610 (J9088A) that can handle 13.0 million PPS (packets per second) and features wire-speed switching capacity at 17.6 Gbps. First off, what does that REALLY mean? Where do I start when trying to figure out if my office (with 70 employees) will be well served by this switch? How do I calculate throughput based on an average user load of X MB per day? 90% of the folks will only be sending email, accessing random websites, etc.; the other 10% will be conducting heavier tasks like moving image files (10 MB) across network shares, constant external FTP streams through the switch to a server, etc. Is this switch good enough?
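    A rough sanity check (assuming the 2610-48's usual configuration of 48 10/100 ports plus two gigabit uplinks; adjust if yours differs): "wire speed" means the switching fabric can carry every port transmitting flat out at once. The worst case for packets per second is minimum-size 64-byte frames, which occupy 84 bytes (672 bits) on the wire once preamble and inter-frame gap are counted:

        per 100 Mbps port:   100,000,000 / 672   ≈ 148,810 pps
        48 ports × 148,810                       ≈  7.14 Mpps
        2 gigabit uplinks × 1,488,095            ≈  2.98 Mpps
        worst-case total                         ≈ 10.1 Mpps  <  13.0 Mpps rated

        capacity check (full duplex counts both directions):
        48 × 100 Mbps × 2  +  2 × 1 Gbps × 2     =  13.6 Gbps  <  17.6 Gbps rated

    So the fabric itself is never the bottleneck; for 70 mostly email-and-web users, the things to size are the uplink and server ports, not the switch.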

    Read the article

  • Upgrading a MacBook Pro

    - by moray95
    I'm using a late-2011 13" MacBook Pro with an Intel i5 @ 2.4 GHz and 4 GB of 1333 MHz RAM. The computer has started to show its age. I was going to upgrade the RAM, but once Mavericks came out the RAM problem just went away; now it has started to get slower and slower. So I am thinking of upgrading the RAM to at least 8 GB, and also the CPU. I have two questions about that. As I have 1333 MHz RAM installed by default, the motherboard presumably does not support 1666 MHz RAM; but can I use 1666 MHz modules, and if I can, will it make any difference? Also, is it possible to upgrade the CPU of my computer? If yes, how can I find a CPU compatible with the other components?

    Read the article

  • How do I improve picture quality while streaming live football (soccer) from my Dell D600 to an HDTV?

    - by Bob
    I have fibre broadband with speeds up to 38 Mbps; my Dell D600 has its maximum 2 GB of RAM and an ATI Mobility Radeon 9000 4x AGP 32 MB card in it. Its TV support, the spec says, is NTSC or PAL in S-video and composite modes with a 7-pin mini-DIN connector (optional S-video to composite video adapter cable), plus a VGA port, which is what I am using at the moment. The laptop runs Windows XP on an 80 GB HD with only Windows, the necessary updates and antivirus software on it. There is HDMI on the TV, but not on the laptop. Fairly slow-moving and close-up pictures aren't too bad, but when the movement is fast (a shot on goal) or in the distance, I can't see the ball and the images go out of focus.

    Read the article

  • sys.dm_exec_query_stats interaction with recompilation

    - by Sam Saffron
    We use sys.dm_exec_query_stats to track down slow queries and queries that are IO offenders. This works great and we get a lot of very insightful stats, though it is clearly not as accurate as running a Profiler trace, as you have no idea when SQL Server will decide to chuck out an execution plan. We have quite a few queries where the wrong execution plan is cached. For example, queries like the following:

        SELECT TOP 30 a.Id
        FROM Posts a
        JOIN Posts q ON q.Id = a.ParentId
        JOIN PostTags pt ON q.Id = pt.PostId
        WHERE a.PostTypeId = 2
          AND a.DeletionDate IS NULL
          AND a.CommunityOwnedDate IS NULL
          AND a.CreationDate > @date
          AND LEN(a.Body) > 300
          AND pt.Tag = @tag
          AND a.Score > 0
        ORDER BY a.Score DESC

    The problem is that the ideal plan really depends on the date selected (screenshot of the ideal plan omitted). However, if the wrong plan is cached, it totally chokes when the date range is big (notice the big fat lines in the actual plan). To overcome this we were advised to use either OPTION (OPTIMIZE FOR UNKNOWN) or OPTION (RECOMPILE). OPTIMIZE FOR UNKNOWN results in a slightly better plan, which is far from optimal, but executions are tracked in sys.dm_exec_query_stats. RECOMPILE results in the best plan being chosen; however, no execution counts or stats are tracked in sys.dm_exec_query_stats. Is there another DMV we could use to track stats on queries with OPTION (RECOMPILE)? Is this behavior by design? Is there another way we can force recompilation while keeping stats tracked in sys.dm_exec_query_stats? Note: the framework will always execute parameterized queries using sp_executesql.
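    Since statements running with OPTION (RECOMPILE) never leave a cached plan behind for sys.dm_exec_query_stats to accumulate against, one alternative (a hedged sketch, assuming SQL Server 2008's Extended Events; filter it before using it anywhere busy, since as written it captures every statement completion) is to collect completions into a ring buffer and aggregate them yourself:

        -- Capture statement completions, with their text, into a ring buffer.
        CREATE EVENT SESSION recompile_stats ON SERVER
        ADD EVENT sqlserver.sql_statement_completed
            (ACTION (sqlserver.sql_text))
        ADD TARGET package0.ring_buffer;

        ALTER EVENT SESSION recompile_stats ON SERVER STATE = START;

        -- Later: pull the captured events out as XML for aggregation.
        SELECT CAST(t.target_data AS XML) AS captured_events
        FROM sys.dm_xe_sessions s
        JOIN sys.dm_xe_session_targets t
            ON s.address = t.event_session_address
        WHERE s.name = 'recompile_stats';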

    Read the article

  • Laptop is super slow on network

    - by Gary
    On our network we have a bunch of wireless Macs and Windows laptops, on a network set up with 802.11b/g/n. All the laptops seem fine except one, which is only getting speeds of 54 Mbps. I have changed the encryption from AES to TKIP and reset the connection, I have updated the drivers, and I have tried plugging it into the LAN, and I still get the same slow speed. Apparently the laptop with the slow speed is fine on other networks. I don't know what to do; can anyone help me?

    Read the article

  • How to benchmark kernel (-Os vs -O2)

    - by NightwishFan
    It seems logical to me that compiling a 64-bit kernel to optimize for size might help overall (my distro of choice uses -O2). A 64-bit build has the benefit of more registers, and smaller code uses less memory and perhaps suffers less cache contention than normally optimized code. I have a kernel compiled like this and it seems excellent. However, my question is: how can I prove this? I like using Phoronix for "real world" benchmarks, so I would prefer test cases like that. What should I pick to test? Does anyone have any alternatives? Thank you very much in advance.

    Read the article

  • Improve file transfer speed between Windows PCs and servers

    - by Geotarget
    I've set up a server which is connected to multiple PCs in my workplace. Sadly, data transfer speeds top out at 3 MB/s per connection, which works out slow for file transfers, especially when transferring large files. I'm using Windows file sharing; the server is Windows Server 2008 (2 GHz CPU, 1 GB RAM) and the client PCs mostly run Windows 7. How can I detect the bottlenecks in my network and improve file-sharing speed within it?

    Read the article

  • SQL DB design to support user feeds (in an application like Facebook)

    - by Yoav
    I have a social-network server with a MySQL DB. I want to show users feeds like Facebook does, for example: userX is now friends with userY, userX liked postX, etc. Currently I have a Log table:

        C1: UserId
        C2: LogType (now friend, did like, etc.)
        C3: ObjectId (can be a userId or a postId, depending on the LogType)

    Currently, to get all the logs relevant to a user, I do the following:

        1. Get all the user's friends' userIds.
        2. Query all rows whose C1 is in those userIds (one query).
        3. Scan the results and check each row - e.g. if LogType equals DidLike, check whether the post's OwnerId is the userId, and if so add it to the logs - and so on.

    Obviously this is not efficient at all, and I am looking for a better way. What I had in mind: create a new table (in addition to the Log table), as sketched below:

        C1: UserId
        C2: LogId (from the Log table)
        C3: UserId of the one who did the action

    When querying logs, look in this table and fetch the related logs (by LogId) from the Log table. Updating the table, whenever a user does an action that should be in the log:

        1. Add the log entry to the Log table.
        2. Work out which users are interested in the log (who the friends are, who owns the post) and add related entries to the new table (must be done in the background).
        3. If a user unfriends another user, look for all rows where C3 equals the unfriended user's id and delete them.

    Any opinions? Other suggestions?
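    For concreteness, a minimal sketch of the proposed fan-out table in MySQL (all names here are hypothetical, chosen for illustration; the Log table is assumed to have a LogId primary key):

        -- One row per (recipient, log entry): "this entry appears in this user's feed".
        CREATE TABLE FeedEntry (
            RecipientUserId INT NOT NULL,   -- C1: whose feed shows the entry
            LogId           INT NOT NULL,   -- C2: reference into the Log table
            ActorUserId     INT NOT NULL,   -- C3: who performed the action
            PRIMARY KEY (RecipientUserId, LogId),
            KEY idx_actor (ActorUserId)     -- supports the unfriend cleanup
        );

        -- Reading a feed becomes one indexed join instead of a scan:
        SELECT l.*
        FROM FeedEntry f
        JOIN Log l ON l.LogId = f.LogId
        WHERE f.RecipientUserId = 42        -- the viewing user's id
        ORDER BY f.LogId DESC
        LIMIT 50;

        -- Unfriend cleanup, as in step 3 above:
        DELETE FROM FeedEntry
        WHERE RecipientUserId = 42 AND ActorUserId = 99;

    The write amplification (one row per interested friend) is the usual trade-off of fan-out-on-write feeds: it buys cheap reads at the cost of background work on every action.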

    Read the article

  • How to Select a Facebook Application Development Team

    In today's social-networking world, Facebook is one of the unquestioned leaders. It gives unique opportunities to its users and is both a place to meet friends and a profitable advertising space. F... [Author: Dmitriy Kharchenko - Computers and Internet - April 10, 2010]

    Read the article

  • iotop for Linux kernel 2.6.18

    - by Lightsauce
    So it has come to my attention that iotop isn't available for 2.6.18, since that is less than 2.6.20 and iotop also requires Python 2.6+. I've done some research and came across this article: http://lserinol.blogspot.com/2009/09/io-usage-per-process-on-linux.html According to this, if processes have IO stats in /proc/pid#/io (where pid# is the process number), it's doable regardless of the kernel version. So, in reality, I could upgrade Python to 2.6 and test out iotop. However, my flavor of Linux, CentOS release 5.5 (Final), currently only supports Python 2.4.3-44.el5. If I were to uninstall that via yum, it wouldn't look pretty: it ends up wanting to remove 235 packages, most of which are very important! I read somewhere online (I forget the URL from yesterday) that you can install Python 2.6+ in parallel to this one and have the iotop rpm use that; I didn't choose that route. I figured, what the heck, let's write iotop in bash (not copying it, but reverse-engineering it without actually looking at its code or at it in use). I thought it would just grab the /proc/pid#/io files and parse the stats, so I wrote a script that collects rchar, wchar, read_bytes and write_bytes from all the /proc/pid#/io files, sorts by each metric, and grabs the top 10 highest values. The conclusion: the data seems completely useless. Does anybody know any resources on advanced Linux internals where I can figure out what these /proc/pid#/ directories are telling me about IO on the disk? My main goal is to figure out what exactly is causing the high load on my disk. I just know it's on the / partition (/dev/sda2 in this case), and I'm not really sure how to narrow it down without the help of iotop. If I run iostat for 1 minute at 1-second intervals, the first result it gives me shows a high kB_read/s, which makes me think it's mostly reading; however, if I watch the per-second updates, they actually only show values for kB_wrtn/s. This makes me think the initial value iostat gives me is misleading.

    Read the article

  • Few questions on giga tweaker

    - by user23950
    I'd better consult the people here first, before I do anything unnecessary with this app called giga tweaker. I don't really understand this "increase the performance of your CPU" thing; it is under Customization - Memory Management - RAM & disk cache in giga tweaker. What will happen if I change the cache size setting for the L2 cache to the highest possible value, which is 8 MB? What are the negative effects of doing that? There is also the file-system caching memory setting, still under Customization - Memory Management - RAM & disk cache: what effect will it have on my system, which has 2 GB of RAM and a 2.50 GHz dual-core CPU? Please enlighten me.

    Read the article

  • Recommendation for tuning 100s of SQL databases

    - by wayne
    I'm running several SQL Servers, each hosting a few hundred multi-gig databases for customers. They are all set up homogeneously as far as the schemas are concerned; however, customer usage of the data differs quite a lot from database to database. What would be the best way to auto-index/profile/tune this large number of databases? As there are at least 600 catalogs, I can't have someone manually profile and index each one according to its usage patterns. I'm currently running SQL 2005 but will be moving to 2008, so solutions that work with either are fine.
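    One low-effort starting point (a hedged sketch; the DMVs exist on both 2005 and 2008) is the missing-index DMVs, which accumulate candidates instance-wide, across all catalogs, since the last restart. Candidates still need human review - the optimizer happily suggests overlapping indexes - but a ranked list makes 600 catalogs tractable:

        -- Rank missing-index suggestions across every database on the instance.
        SELECT TOP 50
            DB_NAME(d.database_id) AS database_name,
            d.statement            AS table_name,
            d.equality_columns,
            d.inequality_columns,
            d.included_columns,
            gs.user_seeks,
            gs.avg_total_user_cost * gs.avg_user_impact * gs.user_seeks
                                   AS estimated_benefit
        FROM sys.dm_db_missing_index_details d
        JOIN sys.dm_db_missing_index_groups g
            ON g.index_handle = d.index_handle
        JOIN sys.dm_db_missing_index_group_stats gs
            ON gs.group_handle = g.index_group_handle
        ORDER BY estimated_benefit DESC;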

    Read the article

  • Cannot remove application from "List Applications" in Tomcat7

    - by Soylent Green
    This question was off-topic at SO, and it was suggested that I migrate it here. I have an application I want completely removed from my local Tomcat 7 installation; I am running Windows 7. All that is available to me is the "Start" option, which fails (as it should, because the application does NOT exist in webapps). The Stop, Reload, and Undeploy buttons are disabled. How do I remove this application from the application list? I have tried restarting the server.

    Read the article
