Search Results

Search found 40567 results on 1623 pages for 'database performance'.

Page 215 of 1623

  • Can anybody recommend a Windows system monitoring tool similar to iPulse for the Mac?

    - by John MacIntyre
    Occasionally my PC grinds to a halt, and by the time I get any monitoring tools open (don't forget, my PC is slow at this point), performance has picked up a bit. A friend recently told me he uses iPulse, which is great because it's always running and you can just glance at it when there's an issue to see what is happening. Unfortunately, it's only for the Mac. Does anybody know of a good Windows system monitoring tool similar to iPulse?

    Read the article

  • Can I enable discards on a LUKS-encrypted ssd drive in RHEL6 (and do I need to)?

    - by Dan Nestor
    I have a RHEL 6.4 workstation running on a LUKS-encrypted LV residing on an SSD. I found Red Hat documentation stating that dm_crypt does not currently support TRIM passthrough; however, I also found other sources stating the opposite (albeit for other distributions), and even that discards are not needed on recent SSD drives, which use some form of automatic garbage collection. So: 1) Can I enable TRIM/discards with my setup? 2) Do I need to, for optimal disk performance? Thanks for your thoughts.
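
    For reference, on distributions whose dm-crypt does support TRIM passthrough, discards are enabled per mapping, either at open time or in /etc/crypttab. A minimal sketch, assuming a new enough cryptsetup/kernel to honor the flag; the device and mapping names here are hypothetical:

        # One-off: open the LUKS container with discards allowed
        cryptsetup luksOpen --allow-discards /dev/sda2 luks-root

        # Persistent: add the 'discard' option in /etc/crypttab
        # luks-root  /dev/sda2  none  luks,discard

        # The filesystem must also pass discards down, e.g. in /etc/fstab:
        # /dev/mapper/luks-root  /  ext4  defaults,discard  0 1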

    Read the article

  • What is a good usage scenario for Rackspace Cloud Files CDN (powered by Akamai)? [closed]

    - by Andrew Smith
    I have just set up my website as a static page via Rackspace CDN / Akamai. The DNS chain resolves as follows:

        www.example.co.uk is an alias for d9771e6f24423091aebc-345678991111238fabcdef6114258d0e1.r61.cf3.rackcdn.com.
        d9771e6f24423091aebc-345678991111238fabcdef6114258d0e1.r61.cf3.rackcdn.com is an alias for a61.rackcdn.com.
        a61.rackcdn.com is an alias for a61.rackcdn.com.mdc.edgesuite.net.
        a61.rackcdn.com.mdc.edgesuite.net is an alias for a63.dscg10.akamai.net.
        a63.dscg10.akamai.net has address 63.166.98.41
        a63.dscg10.akamai.net has address 63.166.98.40
        a63.dscg10.akamai.net has IPv6 address 2001:428:4c02::cda8:ecb9
        a63.dscg10.akamai.net has IPv6 address 2001:428:4c02::cda8:ed09

    The HTTP headers:

        HTTP/1.0 200 OK
        Last-Modified: Fri, 19 Oct 2012 23:27:41 GMT
        ETag: fdf9e14b77def799e09e8ce815a521da
        X-Timestamp: 1350689261.23382
        Content-Type: text/html
        X-Trans-Id: tx457979be3bd746c2b4e5403a1189cdbc
        Cache-Control: public, max-age=900
        Expires: Sat, 27 Oct 2012 22:18:56 GMT
        Date: Sat, 27 Oct 2012 22:03:56 GMT
        Content-Length: 7124
        Connection: keep-alive

    I am wondering whether this is really the fastest solution to power the website. Investigating through http://www.just-ping.com/, the ping from many places is very high, and a quick look suggests they use GeoIP to resolve addresses based on WHOIS data, which is not accurate; because of that, the ping from many places is above 300ms (for example, an ISP in Bangalore can have requests routed to a node 300ms away, and stay there for a month at a time). By contrast, with just Amazon Web Services, Route 53's Anycast DNS servers and only 4 EC2 instances, India for example stays below 100ms, while with Akamai it goes above 300ms in some cases; Route 53 achieves this by using BGP.

    Quickly checking Akamai, they do not seem to take feedback from the traffic: the high ping stays constant even if I keep downloading large files and videos, which is the opposite of what they claim on their website. They state that they optimize performance using feedback from requests, while it seems they just use GeoIP with per-city resolution (mostly big cities). Because of this, AWS with Route 53 / Anycast DNS seems much more reliable, as does EdgeCast, which uses BGP, though I don't know how much it costs to deploy a static website there. I am also not sure EdgeCast's numbers hold up, because from isolated places there are many errors; their performance may come at the cost of delivery quality, with BGP switching routes during transfers of large files.

    So I am wondering what Akamai is actually good for, because right now they don't seem to have a clear strength in anything I understand, apart from the software-based WAF they offer on their website, and what I really care about is the core distribution. So the question is: is Akamai really good for videos? For static websites? So far I have found AWS the most usable, with the most consistent ping and stable transfers.

    Read the article

  • Drupal on an NFS share has terrible performance

    - by Marcus
    We have a Drupal 7 site with the following setup: a VMware ESXi 4.1 host server running a web VM and an NFS VM. The web VM uses Apache and mod_php. The site is still in development, so we have to turn off all forms of caching because of the frequently updated files.

    Each page request takes around 15-20 seconds to complete. Profiling the PHP code shows that the vast majority of the time (normally over 90%) is taken by all the is_dir() and is_file() calls that load up the modules. I've increased PHP's realpath cache size to several megs, and an strace shows that the lstat calls then drop from over 200 to around 6, and stat() decreases a bit (to around 600 calls). However, while this has shaved off quite a bit of time, I am simply unable to break past the 10-second-per-request barrier. Is there a way to get better performance out of this setup that doesn't involve caching?

    Configs and stats:

    VMs:

        web - CentOS 6 64-bit, 2.5GB RAM, normal CPU/HD prioritisation
        nfs - CentOS 6 64-bit, 2GB RAM, normal CPU priority, high HD priority

    PHP: 32M realpath cache size (it's this high for testing purposes)

    NFS:

        ~]# egrep -v '#|^$' /etc/nfsmount.conf
        [ NFSMount_Global_Options ]
        Defaultvers=4
        Ac=False
        Rsize=32k
        Wsize=32k
        Bsize=32k

    Reading speeds via NFS are not an issue; a dd of a 100M test file using 32k blocks returns:

        3200+0 records in
        3200+0 records out
        104857600 bytes (105 MB) copied, 1.84984 s, 56.7 MB/s

        real    0m1.857s
        user    0m0.007s
        sys     0m0.330s

    Strace on an Apache process with an empty realpath cache:

        % time     seconds  usecs/call     calls    errors syscall
        ------ ----------- ----------- --------- --------- ----------------
         50.78    1.157452         337      3434        28 stat
         32.58    0.742656         628      1182       425 open
          9.29    0.211788         762       278         1 lstat
          3.17    0.072322           0    237865           write
          2.45    0.055839         490       114        13 access
          0.45    0.010262          43       237           brk
          0.34    0.007725          10       811        74 read
          0.28    0.006340           9       679           fstat
          0.22    0.005069          18       281           poll
          0.20    0.004533           6       698           getdents
          0.09    0.001960          10       190           mmap
          0.05    0.001065          14        74           accept4
          0.04    0.001000         333         3           chdir
          0.03    0.000750           4       190           munmap
          0.01    0.000339           0       836           close
          0.01    0.000247           3        75           writev
          0.00    0.000068           0       611           fcntl
          0.00    0.000063           1        77           shutdown
          0.00    0.000000           0         1           lseek
          0.00    0.000000           0         5           rt_sigaction
          0.00    0.000000           0         1           rt_sigprocmask
          0.00    0.000000           0         3           setitimer
          0.00    0.000000           0         5           socket
          0.00    0.000000           0         5         5 connect
          0.00    0.000000           0        74           getsockname
          0.00    0.000000           0        15           setsockopt
          0.00    0.000000           0         5           getcwd
          0.00    0.000000           0         1           futex
        ------ ----------- ----------- --------- --------- ----------------

    Strace after realpaths are cached:

        % time     seconds  usecs/call     calls    errors syscall
        ------ ----------- ----------- --------- --------- ----------------
         60.14    1.371006         484      2831        28 stat
         31.79    0.724705         627      1155       425 open
          3.53    0.080354           0    237865           write
          2.65    0.060433         530       114        13 access
          0.43    0.009913          99       100           brk
          0.38    0.008730          11       804        74 read
          0.35    0.007910          12       675           fstat
          0.30    0.006775          10       654           getdents
          0.13    0.003065          11       281           poll
          0.09    0.002000         333         6         1 lstat
          0.07    0.001545           2       807           close
          0.05    0.001063          14        74           accept4
          0.04    0.001000           6       179           mmap
          0.02    0.000404           2       179           munmap
          0.01    0.000271           4        75           writev
          0.01    0.000212           0       611           fcntl
          0.01    0.000129           2        77           shutdown
          0.00    0.000022           0        74           getsockname
          0.00    0.000000           0         1           lseek
          0.00    0.000000           0         5           rt_sigaction
          0.00    0.000000           0         1           rt_sigprocmask
          0.00    0.000000           0         3           setitimer
          0.00    0.000000           0         3           socket
          0.00    0.000000           0         3         3 connect
          0.00    0.000000           0        15           setsockopt
          0.00    0.000000           0         5           getcwd
          0.00    0.000000           0         3           chdir
        ------ ----------- ----------- --------- --------- ----------------

    Mount:

        nfs.xxx.xxx.xxx:/path/to/website/files on /path/to/website/files type nfs
        (rw,hard,intr,noac,vers=4,addr=xx.xx.xx.xx,clientaddr=xx.xx.xx.xx)

    Any help is, naturally, appreciated.
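
    One detail worth flagging in the configs above: Ac=False / noac disables NFS attribute caching entirely, so every stat()/open() revalidates over the wire, which lines up with stat dominating both straces. A minimal sketch of remounting with a short attribute-cache timeout instead (paths are the placeholders from the question; whether slightly stale attributes are acceptable during development is a judgment call):

        # Remount without 'noac'; actimeo caps attribute-cache age in seconds
        umount /path/to/website/files
        mount -t nfs -o rw,hard,intr,vers=4,actimeo=3 \
            nfs.xxx.xxx.xxx:/path/to/website/files /path/to/website/files

        # Also drop 'Ac=False' from /etc/nfsmount.conf so the change persists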

    Read the article

  • Localized database for customers

    - by Jim
    The company I work for has just moved to AWS, and currently they have one very large central database with the instance located in America. However, one of their clients has requested that all of their data be held in the EU. Creating an AWS instance in Ireland isn't a problem; the problem is the database and how to manage it. We were considering having another database that runs in the EU for European customers, using a different primary key step so that the primary keys will never conflict in case the two locations need to be merged in the future. The problem is, if we have a customer that uses our system in both America and the EU, we would have to create two accounts for that user, and reporting across both regions would not be possible, as the connection time would be too high. Is there an alternative way to set this up?
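
    For the primary-key-step idea, the usual trick (sketched here assuming MySQL, which the question does not actually specify) is interleaved auto-increment sequences, so the two regions can never generate the same id:

        # /etc/my.cnf on the US instance
        [mysqld]
        auto_increment_increment = 2   # step size = number of regions
        auto_increment_offset    = 1   # US generates 1, 3, 5, ...

        # /etc/my.cnf on the EU instance
        [mysqld]
        auto_increment_increment = 2
        auto_increment_offset    = 2   # EU generates 2, 4, 6, ...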

    Read the article

  • Is CloudLinux considered to be a stable web server?

    - by Saif Bechan
    I am currently running several websites on a CentOS 5.4 system. I have the option to switch to CloudLinux, which is said to be better at handling several websites. Does anyone have any information to share on CloudLinux? This could be about security, stability, or just the overall performance of the system.

    Read the article

  • Faster zlib alternatives

    - by BarsMonster
    I wonder if there are any faster builds of zlib around, with more advanced optimizations. If it's possible to optimize it using SSE instructions or the Intel C++ compiler, or some trick that was patented earlier (I know patents were a serious limitation during gzip/zlib development), has anyone bothered to implement that? I am especially interested in compression speed, which has a direct impact on high-performance web services serving static and dynamic content.
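
    As a baseline for judging any alternative build, the speed/ratio trade-off of stock zlib is easy to measure through gzip's compression levels; a faster build would need to beat numbers like these on your own payload. A rough sketch (the test file name is a placeholder):

        # Time zlib compression levels on a representative payload
        f=static-bundle.tar              # hypothetical test file
        for level in 1 6 9; do
            /usr/bin/time -f "level $level: %e s" gzip -$level -c "$f" > "$f.$level.gz"
            ls -l "$f.$level.gz"         # compare output size against elapsed time
        done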

    Read the article

  • Installing Drupal: Database configuration problem.

    - by abelenky
    I am trying to install Drupal 6.16 on a clean website. I get through the "Verify Requirements" page easily. On the Database Configuration page, I supply all the proper info, but "Save and continue" returns me to the same page, with no error message. I am unable to proceed past this point. I've verified my info with the ISP, including a non-local database host (under Advanced Options), and that the database user has full DBA rights. The lack of an error message is particularly frustrating. Do you have any ideas what the problem is, or how to pursue and resolve it?
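
    One way to take Drupal out of the equation is to test the same credentials directly from the web host. A minimal sketch, assuming MySQL; the host, user, and database names are placeholders for the values the ISP provided:

        # Verify the database is reachable with the installer's credentials
        mysql -h db.example-isp.com -u drupal_user -p drupal_db -e 'SELECT 1;'

        # If that works, confirm the user can actually create tables
        mysql -h db.example-isp.com -u drupal_user -p drupal_db \
              -e 'CREATE TABLE _install_test (id INT); DROP TABLE _install_test;'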

    Read the article

  • Echo 404 directly from nginx to improve performance

    - by user64204
    I am in charge of production servers serving static content for a website. Those servers are constantly being crawled by bots looking for potential exploits (which isn't much of a problem security-wise, because no application can be reached behind the web server), but this generates thousands of 404s per day, sometimes per hour. I am looking into ways of blocking those requests, but it's tricky (you want to make sure you don't block legitimate traffic, and these bots are becoming more and more clever at looking legit), and it is going to take me a while to find an acceptable solution.

    In the meantime I would like to reduce the performance impact of serving those 404 pages. We're using nginx, which by default is configured to serve its 404 page from disk (this can be changed using the error_page directive, but in the end the 404 will either have to be served from disk or from another external source, e.g. an upstream application, which would be worse), which isn't ideal.

    I ran a test with ab on my local machine with a basic configuration: in one case I echo a message directly from nginx, so the disk isn't touched at all; in the other case I hit a missing page and nginx serves its 404 from disk.

        server {
            # [...] the default nginx stuff
            location / { }
            location /this_page_exists {
                echo "this page was found";
            }
        }

    Here are the test results (my laptop has an Intel(R) Core(TM) i7-2670QM + SSD, in case you're wondering why the numbers are so high):

        $ ab -n 500000 -c 1000 http://localhost/this_page_exists
        Requests per second: 25609.16 [#/sec] (mean)

        $ ab -n 500000 -c 1000 http://localhost/this_page_doesnt_exists
        Requests per second: 22905.72 [#/sec] (mean)

    As you can see, returning a value with echo is roughly 12% faster ((25609-22905)÷22905×100 ≈ 11.8) than serving the 404 page from disk. Accordingly, I would like to echo a simple "404 - Page not found" string from nginx. I have tried many things so far, but they all failed; essentially the idea was this:

        location / {
            try_files $uri @not_found;
        }
        location @not_found {
            echo "404 - Page not found";
        }

    The problem is that as soon as the echo directive is used, the HTTP response code is set to 200. I tried changing that by doing error_page 200 = 400, but that breaks the configuration. How can I serve a 404 page directly from nginx (without hacking the source, which may be my next step)?
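
    For what it's worth, stock nginx can send a short in-memory body with an explicit status code via the return directive from the rewrite module, with no disk access and no third-party echo module. A sketch, untested against this exact setup:

        location / {
            try_files $uri @not_found;
        }
        location @not_found {
            default_type text/plain;            # body below is plain text
            return 404 "404 - Page not found";  # status and body served from memory
        }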

    Read the article

  • Upgraded to Ubuntu 12.04 from 10.04 and have to transfer database from PostgreSQL 8.4 to 9.1

    - by Stpn
    I upgraded a server running a Rails application from Ubuntu 10.04 to 12.04 and now cannot connect to the PostgreSQL database. Here is the error message from the Rails app:

        could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

    Also, pg_ctl start is not recognized as a command.

    EDIT: It turns out my database is on PostgreSQL 8.4 and my server is now running 9.1, so all the database files and configs are still on 8.4. How can I transfer them? Just straight-copy the old pg_hba.conf?
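
    On Debian/Ubuntu, the postgresql-common tools can migrate a whole cluster between major versions, which is usually safer than copying data files or pg_hba.conf by hand. A sketch, assuming both the 8.4 and 9.1 packages are installed and the cluster has the default name 'main':

        # List the clusters the system knows about (versions and ports)
        pg_lsclusters

        # Drop the empty 9.1 cluster created on upgrade, then migrate 8.4 -> 9.1
        # (pg_upgradecluster dumps/reloads the data and carries config over)
        pg_dropcluster --stop 9.1 main
        pg_upgradecluster 8.4 main

        # Once the app is verified against 9.1, remove the old cluster
        pg_dropcluster 8.4 main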

    Read the article

  • Why does Kerberos need Ticket Granting Server?

    - by Narsil
    It's probably something fundamental, but I can't find a definitive statement. Why can't the KDC authenticate and then provide the service ticket directly? Is it about security, performance, or something else? Since users don't log in each time they request a service, and presumably stay logged in for a long time, the AS doesn't seem that busy. Why do the two have to be separated?

    Read the article

  • Super simple high-performance HTTP server

    - by masylum
    I'm building a URL-shortener web application and I would like to know the best architecture for it, in order to provide a fast and reliable service. I would like to have separate services on different machines. The first machine will run the application itself, with Apache, nginx, whatever. The second one will host the database. The third one will be responsible for handling the short-URL requests. For the third machine I just need to accept one kind of HTTP request (GET www.domain.com/shorturl), but it has to do it really fast, and it should be stable enough. Which server do you recommend? Thanks in advance, and sorry for my English.
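
    For the redirect-only machine, one commonly cited approach is a bare nginx that answers redirects straight from memory. A minimal sketch, under the assumption that the short-code-to-URL mapping can be pushed out periodically as a file (the map file and all names are hypothetical):

        # nginx http-context fragment: redirect-only vhost
        # /etc/nginx/shortcodes.map holds lines like:
        #   /abc123  http://example.com/some/long/page;
        map $uri $target {
            include /etc/nginx/shortcodes.map;
            default "";
        }
        server {
            listen 80;
            server_name www.domain.com;
            location / {
                if ($target = "") { return 404; }  # unknown short code
                return 301 $target;                # redirect from memory
            }
        }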

    Read the article

  • Computer "Server"

    - by user328379
    At home we had the idea that, instead of buying 3 different PCs, we would somehow create a "server" for the computers, where a cable would run to our screens, keyboards, and mice while the actual PC sat somewhere else in the house with all the others. Does such a thing exist? And is it possible to have such a thing for high-performance workloads (compiling, high-end games, just as if it were a separate PC)?

    Read the article

  • Up-to-date Comparison of High-Speed USB Flash Drives

    - by Zoredache
    I am looking for comparisons of the performance of USB flash drives. I have found several older comparisons, but I am trying to find more up-to-date ones that cover the larger storage sizes (32-128GB). I could try looking up the specs of various drives, but vendors have been known to exaggerate, or to use numbers that are only accurate in tests that do not reflect actual usage. I was hoping to find a third-party site that had performed its own testing.

    Read the article

  • AWS Free Usage Tier + Cloudflare... possible?

    - by crashintoty
    If I throw my MySQL/PHP app up on an Amazon EC2 instance (using their AWS Free Usage Tier program) and couple it with CloudFlare (the free plan, of course), roughly how many daily visitors can I comfortably handle before performance starts to suffer? Just looking for a rough estimate or educated guess; I understand this setup might be less than ideal, but I'm still very curious nonetheless. Thanks in advance.

    Read the article

  • Barnyard Service - MySQL Error

    - by SLYN
    I installed barnyard2 and set it up as a service, but when I run service barnyard2 start, it fails. Running tail -100 /var/log/messages, I find this error:

        ERROR database: 'mysql' support is not compiled into this build of snort
        Aug 22 11:52:06 barnyard2[25771]: FATAL ERROR: If this build of barnyard2 was obtained as a binary distribution (e.g., rpm,
        or Windows), then check for alternate builds that contains the necessary
        'mysql' support.

        If this build of barnyard2 was compiled by you, then re-run the
        the ./configure script using the '--with-mysql' switch.
        For non-standard installations of a database, the '--with-mysql=DIR'
        syntax may need to be used to specify the base directory of the DB install.

        See the database documentation for cursory details (doc/README.database).
        and the URL to the most recent database plugin documentation.
        Aug 22 11:52:06 barnyard2[25771]: Barnyard2 exiting

    What should I do to solve this problem? When I installed barnyard2, I used these commands:

        # ./configure --with-mysql --with-mysql-libraries=/usr/lib64/mysql
        # make ; make install

    (My system is CentOS 6.5 x86_64.)
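
    The error means the binary the service launches was built without MySQL output support, despite the configure flags. Two common causes are configure silently disabling MySQL because the client headers were missing, and the service script running a different binary than the one just built. A hedged checklist sketch:

        # Make sure the MySQL client headers are present at configure time
        yum install mysql-devel

        # Rebuild, watching the configure output / config.log for 'mysql'
        ./configure --with-mysql --with-mysql-libraries=/usr/lib64/mysql
        make && make install

        # Confirm the service runs the binary you just built, and that it
        # links against the MySQL client library
        which barnyard2
        ldd $(which barnyard2) | grep -i mysql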

    Read the article

  • How to measure TCP connection time in Linux

    - by Paul Draper
    I want to measure the overhead of creating a TCP connection. I know of many tools like hping and netperf, but they seem oriented toward measuring latency. I want to know how long the 3-way handshake takes, plus allocating any buffers, etc., and then closing the connection. So I want to open a real, legitimate TCP connection and then close it. Are there any tools that will do that and help me measure performance?
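
    One low-tech option: curl exposes per-phase timings, including the point at which the TCP connect completed. A sketch (the URL is a placeholder):

        # time_connect = seconds from start until the TCP connect finished;
        # subtract time_namelookup to isolate the 3-way handshake itself.
        curl -o /dev/null -s \
             -w 'dns: %{time_namelookup}s  connect: %{time_connect}s\n' \
             http://example.com/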

    Read the article

  • Clean install vs disk image

    - by Thanos
    Once a year I do a clean install of Windows in order to keep my system fast. After posting a question about making a bootable Windows USB with exe programs, where I was advised to make a disk image, a new question arose: what is the difference between restoring a disk image and performing a clean install of Windows? Which is better in terms of speed, general performance, value for time, and transferring between different computers?

    Read the article

  • Possible reasons for high CPU load of taskmgr.exe process on VM?

    - by mjn
    On a VMware virtual machine with severe performance problems, I can see a constant average of 20+ percent CPU load for the TASKMGR.EXE (Task Manager) process. The apps running on this server show lower load, around 4 to 10 percent on average. The VM is running Windows Server 2003 Standard with 3.75 GB of assigned RAM. I suspect that the Task Manager CPU load has something to do with other VM instances on the VMware server, but I could not see a similar value on internal ESXi systems (the problematic VM runs in the customer's IT environment).

    Read the article

  • How to get Ubuntu to perform better on an older computer?

    - by alex
    Ubuntu 9.1 runs quite sluggishly on my old laptop from 2004, slower than the Windows XP that was on there. It has 512 MB RAM and probably a 1.2 GHz CPU (can't remember). I have turned off Visual Effects under Appearance Preferences. Are there any other tricks to get better performance, or do I just need a better computer to try Ubuntu? Thanks

    Read the article

  • How can I sync Access databases and keep them up-to-date?

    - by user327472
    I have an Access database on my server. We split it up, and we use the front-end database to search data and add new records or reports on local computers. If we update or add a new record, it is written to the back-end database. I want to use this database in another building, with other servers, and those servers have no direct connection. How can I sync both back-end databases to keep the data up to date? These details may be useful: it's a large amount of data, about 25,750 client records; I guess there are more than 25 tables, at about 80 MB.

    Read the article
