Search Results

Search found 4580 results on 184 pages for 'faster'.

Page 91/184

  • Why do Apache access logs show a time to serve of zero - timer resolution issue?

    - by Rob
    When going through Apache 2.2 access logs, logging with the %D directive (the time taken to serve the request, in microseconds), I notice that it's very common for a 200 response to have a given number of bytes but a "time to serve" of zero. For example, a given URL might be requested 10 times in a single day, a 200 response is sent for all of them, and all return, say, 1000 bytes. However, 7 of them have a "time to serve" of zero, while the other 3 have a time to serve of 1 second. Is this simply because the request was served faster than the resolution of the timer Apache uses?
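
    For reference, a minimal sketch of a log format using %D (the format name "timed" is made up for this example):

      # httpd.conf - combined-style access log extended with %D (time to serve, in microseconds)
      LogFormat "%h %l %u %t \"%r\" %>s %b %D" timed
      CustomLog logs/access_log timed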

    Read the article

  • How to work with bookmarks in Word without naming them?

    - by deepc
    I am working in a large Word 2007 document and need bookmarks to remember editing positions. I know I can manage bookmarks with Shift+Ctrl+F5, but that's cumbersome, because I am used to doing this a lot faster in the Delphi editor. There I create a bookmark with Ctrl+Shift+0..9 and jump to it with Ctrl+0..9. That way I have 10 quick bookmarks; I do not have to name them, and I do not have to pick them from a dialog (there is not even a dialog prompting me for a selection). Is something similar possible in Word, or has anybody made a macro for that purpose? Thanks.

    Read the article

  • Reading a file from an alternate location

    - by Highstaker
    I have a certain file (data.abc) located in, say, my home folder. I make a copy of it in another location (for example, "/mnt/ramtemp/"). Whenever the file in my home folder is accessed by any process, I want it to be read not from the home folder but from "/mnt/ramtemp/". As you might have guessed from the path of the latter, that is where I mount the ramfs. So, basically, I want processes to access not the file on my HDD (which is slower) but its copy on the ramfs (which is way faster). At the same time, I want the file data.abc to remain in my home folder under that name; I don't want to rename or delete it. Is there any way I could get the system to redirect processes to read the file from the alternative location whenever they try to read it from the home folder?
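
    One possible approach (a sketch of a bind mount, not something from the post; the paths are the ones in the question) is to mount the ramfs copy on top of the original path, so any process opening the file in the home folder transparently reads the copy:

      cp ~/data.abc /mnt/ramtemp/data.abc                  # refresh the ramfs copy
      sudo mount --bind /mnt/ramtemp/data.abc ~/data.abc   # reads of ~/data.abc now hit the ramfs
      sudo umount ~/data.abc                               # undo the redirection

    Note that while the bind mount is active, writes also land on the ramfs copy, so it would need syncing back to disk if the file changes.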

    Read the article

  • ionice idle is ignored

    - by Ferran Basora
    I have been testing the ionice command for a while, and the idle (3) mode seems to be ignored in most cases. My test is to run both commands at the same time:

      du <big folder>
      ionice -c 3 du <another big folder>

    If I check both processes in iotop, I see no difference in the percentage of I/O utilization for each process. For background on the scheduler: I'm using CFQ on a 3.5.0 Linux kernel. I started doing this test because I'm experiencing system lag each time the daily cron job updatedb.mlocate is executed on my Ubuntu 12.10 machine. If you check the /etc/cron.daily/mlocate file, you can see that the command is executed like this:

      /usr/bin/ionice -c3 /usr/bin/updatedb.mlocate

    Also, the funny thing is that whenever my system for some reason starts using swap memory, the updatedb.mlocate I/O is scheduled ahead of the kswapd0 process, and then my system gets stuck. Any suggestions? References:
      http://ubuntuforums.org/showthread.php?t=1243951&page=2
      https://bugs.launchpad.net/ubuntu/+source/findutils/+bug/332790
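
    Worth noting: ionice -c3 only has an effect under the CFQ I/O scheduler, so a first diagnostic step (a sketch; the device name is assumed) is to confirm which scheduler the disk is actually using:

      cat /sys/block/sda/queue/scheduler   # prints e.g. "noop deadline [cfq]"; the bracketed entry is active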

    Read the article

  • Keeping video viewing statistics breakdown by video time in a database

    - by Septagram
    I need to keep a number of statistics about the videos being watched, and one of them is which parts of the video are watched most. The design I came up with is to split the video into 256 intervals and keep a floating-point number of views for each of them. I receive the data as a number of intervals the user watched continuously. The problem is how to store them. There are two solutions I see.

    Row per video segment. Let's have a database table like this:

      CREATE TABLE `video_heatmap` (
        `id` int(11) NOT NULL AUTO_INCREMENT,
        `video_id` int(11) NOT NULL,
        `position` tinyint(3) unsigned NOT NULL,
        `views` float NOT NULL,
        PRIMARY KEY (`id`),
        UNIQUE KEY `idx_lookup` (`video_id`,`position`)
      ) ENGINE=MyISAM

    Then, whenever we have to process a number of views, make sure the respective database rows exist and add the appropriate values to the views column. I found out it's a lot faster if the existence of the rows is taken care of first (SELECT COUNT(*) of rows for a given video and INSERT IGNORE if they are lacking), and then a number of update queries is used like this:

      UPDATE video_heatmap SET views = views + ? WHERE video_id = ? AND position >= ? AND position < ?

    This seems, however, a little bloated. The other solution I came up with is:

    Row per video, update in transactions. A table will look (sort of) like this:

      CREATE TABLE video (
        id INT NOT NULL AUTO_INCREMENT,
        heatmap BINARY (4 * 256) NOT NULL,
        ...
      ) ENGINE=InnoDB

    Then, every time a view needs to be stored, it is done in a transaction with a consistent snapshot, in a sequence like this: if the video doesn't exist in the database, it is created; the row is retrieved and heatmap, an array of floats stored in binary form, is converted into a form more friendly for processing (in PHP); the values in the array are increased appropriately and the array is converted back; the row is changed via an UPDATE query.

    So far the advantages can be summed up like this.

    First approach:
    - Stores data as floats, not as some magical binary array.
    - Doesn't require transaction support, so doesn't require InnoDB; we're using MyISAM for everything at the moment, so there won't be any need to mix storage engines (only applies in my specific situation).
    - Doesn't require a transaction WITH CONSISTENT SNAPSHOT, and I don't know what the performance penalties of those are.
    - I have already implemented it and it works (only applies in my specific situation).

    Second approach:
    - Uses a lot less storage space (the first approach stores the video ID 256 times, and stores the position for every segment of the video, not to mention the primary key).
    - Should scale better because of InnoDB's per-row locking, as opposed to MyISAM's table locking.
    - Might generally work faster because far fewer requests are being made.
    - Easier to implement in code (although the other one is already implemented).

    So, what should I do? If it wasn't for the rest of our system using MyISAM consistently, I'd go with the second approach, but currently I'm leaning towards the first one. But maybe there are some reasons to favour one approach or the other?

    Read the article

  • Why do I see different TCP behaviour between IIS and FTP server applications on Windows 2003?

    - by rupello
    I am comparing Wireshark traces of a 10 MB file download from the FileZilla FTP server and from IIS (over HTTP) on the same Windows 2003 server. The FTP download performs faster, and the trace shows the server behaving as expected, sending more data to the client with every ACK received (link to full-size image). The HTTP trace shows a more bursty pattern; the timing of the send bursts is sometimes unrelated to any ACKs received from the client, circled in red (link to full-size image). Does anyone have a suggestion as to why the IIS traffic is behaving like this? Update: we have tried modifying the http.sys registry settings (setting MaxBytesPerSend to 256 KB and MaxBufferedSendBytes to 64 KB, as recommended). Changing MaxBytesPerSend does seem to improve performance by increasing the amount of in-flight data, but we still see the same bursty pattern.
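
    For reference, a hedged sketch of the registry change mentioned in the update (the key path is the standard http.sys Parameters key; the values are the ones from the post, in bytes, and http.sys only rereads them after a service restart):

      reg add "HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters" /v MaxBytesPerSend /t REG_DWORD /d 262144
      reg add "HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters" /v MaxBufferedSendBytes /t REG_DWORD /d 65536
      net stop http /y && net start w3svc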

    Read the article

  • For a particular domain, how can I cache its JSON responses locally?

    - by Chris
    I'm coding the frontend of a web app that uses XHR to grab JSON data from a 3rd party. The 3rd-party service is slow, and because of its API design we need to make a LOT of API requests every time I refresh the page to test some new code. It's making the development loop painful. The requests are GETs, POSTs and PUTs, even though I'm pretty sure none of them actually change state. I want to go to localhost for the JSON rather than to this 3rd-party API, simply to make my development process faster.
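
    A low-tech sketch of one way to do this (not from the post; the URL and paths are hypothetical, and it only covers the GETs): snapshot each response once, then point the frontend at a local static server.

      curl -o cache/users.json https://api.example.com/v1/users   # capture a response once
      cd cache && python3 -m http.server 8080                     # serve the snapshots at http://localhost:8080/

    The POSTs and PUTs would still need a small local stub that replays saved responses; since the post notes that none of the requests actually change state, replaying them is safe.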

    Read the article

  • SSH - SFTP/SCP only + additional command running in background

    - by Chris
    There are many solutions described for forcing an SSH connection to run SFTP only, by modifying sshd_config: add a new group Match block and give that group a ForceCommand internal-sftp. That works great, but I would love to have a little more functionality. My servers automatically ban IPs which try to connect too often in a short time, so when you use an SFTP client that opens multiple connections to work faster, it can get banned instantly by the server for a long time. The servers have a script for whitelisting users, run by an administrator; I've modified this script to whitelist the user who runs it. All I need now is to get the server to execute that script when somebody logs in. Over a normal SSH session it's no problem - just put it in .bashrc or something similar - but ForceCommand doesn't run these scripts on login. Is there any way to run such a shell script before, or at the same time as, the ForceCommand is fired?
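
    For reference, a minimal sshd_config sketch of the pattern described above (the group name is an assumption):

      Match Group sftponly
          ForceCommand internal-sftp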

    Read the article

  • What is the easiest way to copy Chrome's login/passses into KeePass without creating duplicates?

    - by ldigas
    Okay, here's the thing. I have most of my login info in two places: one is a KeePass file and the other is Chrome. Being a lazy sort of person, and since Chrome/KeePass integration never really started to work the way it should, a couple of times a year I use the NirSoft tool to export the Chrome logins/passwords into a textual .csv file and then import it into KeePass, creating lots of duplicates in the process, which I then clean up, and so on. In the meantime, all the new logins I accumulate just stay in Chrome. As you might notice, this is not really the best way to do it. Is there a faster way - copying logins from Chrome to KeePass without creating duplicates in KeePass - or has anyone perhaps found a way to get KeePass to work with Chrome under Win XP SP3? KeePass 1.0 or 2.0, it doesn't make a difference as long as it works.

    Read the article

  • MPICH2 vs Kerrighed

    - by user40135
    Hi all, I am taking my first steps in clustering right now. I installed MPICH2 on my Ubuntu machine at home, and I have a silly question about it. From what I am reading, it seems that it provides the capability of sending processes to other PCs. I went for this library just because I could set it up very quickly and easily. Compared to MPICH2, do you know what the advantage is of a different clustering system like Kerrighed? It seems that it also provides this capability, but the kernel must be rebuilt, so I suppose that it is going to be faster. What other advantages are notable in a clustering system like this? Thanks

    Read the article

  • Why does moving large folders take a lot of time?

    - by acidzombie24
    What can I do to fix this? Drop permission properties? I have a large folder with 100k files. I moved it into my archive folder and it's taking forever to move. Why is that? I know on XP it takes less than a second, but not on Windows 7. I am sure it's a permission thing; is there a way I can disable it and make it faster? Edit: I am moving the folder into another one on the same drive/partition. In XP, AFAIK, it just moves the folder entry from one place to another. In Windows 7, it seems like it's touching something in every file when I move it.

    Read the article

  • How do I stop my IIS App Pool making a request to wpad.mydomain.com?

    - by Programming Hero
    As part of some performance troubleshooting, I've monitored the slow startup of a "cold" App Pool (one without an active worker process) in IIS. When using a built-in account, the App Pool starts in sub-second time. When using a custom local account the App Pool takes 30+ seconds to start processing requests. The service appears to be making requests to wpad.mydomain.com, an address it does not have access to, which causes it to wait 30 seconds for a response before eventually timing out. As a workaround, I've added the hostname to the server's hosts file, to direct the traffic to the local machine, which returns much faster (1-2 seconds). What do I need to do to stop IIS making this request when this identity is used for the App Pool?
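
    A sketch of the hosts-file workaround described above (the path is the standard Windows location):

      # %SystemRoot%\System32\drivers\etc\hosts
      127.0.0.1    wpad.mydomain.com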

    Read the article

  • Slow network file copy on Windows 7

    - by Jason V.
    I wrote a program that uses xcopy to transfer files (usually between 1 KB and 2 MB) over our intranet. Usually I am copying files from my host machine (Windows 7 x64) to a VMware virtual machine running Windows Server 2008 (the VM is running on my host machine, if that matters). On Windows XP, the file transfers usually took only a few seconds to complete, but on my Windows 7 machine the transfer of the first file (1.5 MB) takes around 1.5 minutes. This is true whether I use xcopy, robocopy, or File.Copy() programmatically. I noticed that if I use File.Copy, the first transfer is very slow and subsequent transfers are much faster. Any clue how I can speed up the process? Is there a setting in Windows 7 (or Server 2008) that I could try?

    Read the article

  • Why does the heat production increase as the clock rate of a CPU increases?

    - by Nils
    This is probably a bit off-topic, but the whole multi-core debate got me thinking. It's much easier to produce two cores (in one package) than to speed up one core by a factor of two. Why exactly is this? I googled a bit, but found mostly very imprecise answers on overclocking boards which do not explain the underlying physics. The voltage seems to have the most impact (quadratic), but do I need to run a CPU at a higher voltage if I want a faster clock rate? Also, I would like to know why exactly (and how much) heat a semiconductor circuit produces when it runs at a certain clock speed.
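
    For context, the standard first-order model for dynamic (switching) power in CMOS - textbook material, not from the post - is:

      P_dynamic ≈ α · C · V² · f

    where α is the activity factor, C the switched capacitance, V the supply voltage, and f the clock frequency. Because reaching a higher f generally requires raising V, power (and therefore heat) grows much faster than linearly with clock rate.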

    Read the article

  • Why are browsers so heavy?

    - by Kaivosukeltaja
    Back in 1998 I had a computer with a 233 MHz Pentium MMX CPU and a graphics card with no 3D acceleration. It was able to run games like Quake II at a decent FPS rate. My current computer has tons more performance and a mid-class GPU, yet struggles to reach 20 FPS when rendering a single model inside a skybox with WebGL. Even regular pages with lots of 2D CSS animations bring many modern computers to their metaphorical knees. As a web developer I understand there's a lot going on in a web page, but not what makes it that heavy. Modern browsers compile JavaScript to native machine code before running it, and rendering into a canvas element shouldn't trigger DOM rebuilds, so theoretically it should be a lot faster than it is. What am I missing here, and is it possible to avoid or minimize whatever is making the browsers slow, in order to build more efficient websites?

    Read the article

  • CPU Configuration Issue for 2 Servers (Server 2008 R2)

    - by Bill Moreland
    I have 2 servers running the exact same Classic ASP code with Access DBs (yes, not ideal, but it is what it is, for now). 1) Xeon 5520 @ 2.27 GHz (6 GB memory) 2) Xeon E5-2620 @ 2.00 GHz (2 processors, 32 GB memory) For most pages the newer E5-2620 processes the pages 10-15% faster. On pages requiring heavy and/or multiple complicated Access stored procedures (queries), the older 5520 does a much better job. I believe the servers are configured nearly identically. My question: is it possible that the newer, multi-processor server is not as good at handling Classic ASP as the older single-processor one? Is there a configuration difference that needs to be in place that I'm missing, since I'm shooting for identical implementations?

    Read the article

  • Starting Chrome without any window

    - by Manish
    Hi, I have a situation where I want to start the Google Chrome browser on Linux without any window appearing. This might sound weird, but my intention is to make it launch faster than it already does. I've noticed that if Chrome is already running, a new window opens in a snap! I thought I'd keep it running in the background and, when needed, open a new window by clicking its icon. I've checked almost all the command-line switches in the Chrome source but couldn't succeed. Any ideas? Thanks in advance.
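
    One switch that may be worth trying (an assumption - support varies across Chrome versions) is --no-startup-window, which starts the browser process without opening a window:

      google-chrome --no-startup-window &   # binary name varies by distro (e.g. chromium-browser)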

    Read the article

  • Slow self-hosted WordPress website

    - by Integrati Marketing
    Hi all, we have a great site which had been humming along nicely for about 5 months, and then in May it went from a page load time of 3-5 seconds to an agonising 15+ seconds. The host has been really helpful and has even shifted the site to a new server, which is faster. Since we do not have the insight or expertise ourselves, we thought we would ask the Server Fault community and see what this crowd of experts could recommend. Appreciate any insight, thank you. The site is here: integrati.com.au Cheers. :)

    Read the article

  • Apache logging: rotating logs on Win32?

    - by Jason S
    I noticed my disk space disappearing faster than expected, and finally narrowed it down to a rewrite.log file that was 4 GB in size! Is there a way to rotate the various Apache logs (rewrite, error, access, etc.) on a Win32 PC so that only the most recent entries are kept and I can limit the resulting data size? I found the bit about log rotation on Apache's website, but it's Unix-centric. Edit: I got rotatelogs.exe to work, and it's great except that it slows the server response down noticeably, so I rejected the idea of using it.
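
    For reference, a minimal sketch of the piped-log setup referred to in the edit (paths are relative to ServerRoot; 86400 seconds rotates daily):

      # httpd.conf on Win32 - pipe the access log through rotatelogs.exe
      CustomLog "|bin/rotatelogs.exe logs/access.%Y-%m-%d.log 86400" common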

    Read the article

  • Why does OS X use swap when there is lots of "inactive memory"?

    - by Balchev
    I have been using OS X for a few months (Lion and now Mountain Lion). I have 8 GB on my mini, and almost daily now memory use gets close to that. On a Windows 7 machine with 8 GB I never had that kind of problem. Anyway, I read on the net that inactive memory is a cache of recently closed programs, kept so they can be reopened faster, and that this inactive memory can be released to a new app if needed. It is not released; instead, OS X starts swapping. So my question is: why does OS X use swap when there is lots of "inactive memory"? Here is a screenshot that shows what I mean. I really hope there is a way to make OS X use those 2.69 GB before it starts swapping. I really do.
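
    One thing to experiment with (a sketch, not a confirmed fix - and on Mountain Lion the command may require the Xcode command line tools): the purge command flushes the disk cache and inactive memory, which makes it easy to test whether freeing that memory avoids the swapping:

      purge   # forces inactive/cached memory to be released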

    Read the article

  • Hybrid HDD/SSD Volume Windows

    - by ccrama
    Is it possible to create a hybrid-drive-like experience using an SSD and an HDD? In theory, frequently used files would be stored on the faster SSD, and larger or less-used files would be moved to the HDD. I know you can create shared volumes in Windows, but I would assume the speed differences would create some issues, and there is no preference for which files get stored on which drive. Any programs/info/advice would be great! Thanks! (PS: I saw this question, but it was asked three years ago and some solutions have probably changed.)

    Read the article

  • How Do I Migrate 100 DBs From One MS-SQL 2008 Server To Another? (looking for automation)

    - by jc4rp3nt3r
    Let me start by saying that I am not a DBA, but I am in a position where I am responsible for moving just under 100 MS-SQL 2008 DBs from our current development server to a new/better/faster development server. As this is just a local dev server, temporary downtime is acceptable, but I am looking for a way to move all of the databases (preferably in bulk). I know that I could take a .bak of each and restore it on the new server, but given the volume of DBs, I am looking for a more efficient way. I am not opposed to learning a new piece of software, writing code, or any other requirement, so long as it speeds up the process.
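
    A hedged sketch of one bulk approach (server name and path are hypothetical): back up every user database from the command line with sqlcmd, copy the .bak files across, and restore them on the new server.

      :: database_id > 4 skips master, tempdb, model and msdb
      sqlcmd -S OLDSERVER -E -Q "EXEC sp_MSforeachdb 'IF DB_ID(''?'') > 4 BACKUP DATABASE [?] TO DISK = ''D:\bak\?.bak'''"

    Detaching the .mdf/.ldf files and attaching them on the new server is the other common bulk route; it avoids the backup step at the cost of taking each database offline.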

    Read the article

  • Best virtualization solution for running Linux under Mac OS X?

    - by grumbles
    I'd like to run a virtualized Ubuntu instance under Mac OS X (10.6). I've used VirtualBox in the past, but am looking for something that will be faster, and don't mind paying for either Parallels Desktop or VMware Fusion. Does anyone have experience running Linux guests under either or both programs? I'm primarily interested in doing software development on the Linux guest installation, but I'm also very concerned with the performance and responsiveness of the guest OS. I have a mid-2010 15" MacBook Pro (2.66 GHz i5, 8 GB of RAM, NVIDIA GeForce GT 330M). Thanks!

    Read the article

  • Best way to administer a website remotely

    - by Mark Szymanski
    I have a Windows computer running an intranet website with IIS, and I was wondering what the best way is to administer it from another computer, in this case a Mac. What I want to do: be able to edit pages from my Mac, and VNC into the server because it is 'headless' (I already have this set up). My current file-syncing setup: I have Dropbox set up to sync files between the computers, and then use PureSync to sync the files in the Dropbox folder into the wwwroot folder. Is there a better (faster/easier) way I could do all this? Thanks in advance!

    Read the article
