Search Results

Search found 60072 results on 2403 pages for 'application performance'.


  • GNOME/KDE Linux entirely in RAM?

    - by František Žiacik
    Hi. I'd like to have a very responsive Linux, but I also like modern, elegant and functional desktops like GNOME or KDE, not lightweight ones like Xfce or LXDE. Once I tried PuppyLinux and was impressed by how responsive it was when I clicked an application. In my Ubuntu, it bothers me a lot when I click Chromium and must wait 5 seconds of disk activity until the main window appears. Or Evolution or anything else. Is it possible to make GNOME or KDE run entirely in RAM like PuppyLinux (of course, I mean frequently used applications and services, not everything) if you have enough of it? I don't care if boot time is longer. I tried using "preload" but it didn't help much.
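
    One way to approximate Puppy-style snappiness on a stock GNOME/KDE install is to pre-read (or lock) the binaries and libraries of frequently used applications into the page cache. Below is a minimal sketch using the vmtouch utility, assuming it is installed; the Chromium paths are illustrative:

        # warm the page cache with the browser binary and its libraries
        vmtouch -t /usr/bin/chromium-browser /usr/lib/chromium-browser/
        # or lock them into RAM so the kernel will not evict them (runs as a daemon)
        sudo vmtouch -l -d /usr/lib/chromium-browser/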

    Read the article

  • IIS 7 throws 401 responses on application whose physical directory has been shared

    - by tonyellard
    I have an IIS 7, Windows Server 2008 R2 box with a relatively fresh install. I've deployed a .NET 2.0 application that uses Windows authentication to the server and added it as an application under the default website. I updated the IIS authentication settings to enable Windows Authentication. When I went to share out the physical directory for the application so that a developer could deploy updates, users began to receive 401 errors. I can reliably recreate the issue by sharing out the directory of any newly created application. The IIS user has the necessary read/write access to the directory. What do I need to do to keep web users from receiving 401s while at the same time allowing this developer to have access to the physical directory for deployments? Thanks in advance!

    Read the article

  • Cannot delete a SharePoint web application

    - by Vijay
    What I have: a normal web application with 3 site collections, named "PDirectory". Other than this I have only the Central Administration web application in the farm. What I want: to delete that web application, "PDirectory". What problem am I facing: I am not able to delete the web application. I get the error below when I try to delete it, but the site collections do get deleted!

        Error: An object in the SharePoint administrative framework, "SPWebApplication Name=XXX Parent=SPWebService", could not be deleted because other objects depend on it. Update all of these dependants to point to null or different objects and retry this operation. The dependant objects are as follows:
        SPFarm Name=SharePoint_Config
        SPFarm Name=SharePoint_Config
          at Microsoft.SharePoint.Administration.SPConfigurationDatabase.DeleteObject(Guid id)
          at Microsoft.SharePoint.Administration.SPConfigurationDatabase.DeleteObject(SPPersistedObject obj)
          at Microsoft.SharePoint.Administration.SPPersistedObject.Delete()
          at Microsoft.SharePoint.Administration.SPWebApplication.Delete()
          at Microsoft.SharePoint.ApplicationPages.DeleteWebApplicationPage.BtnSubmit_Click(Object sender, EventArgs e)
          at System.Web.UI.WebControls.Button.OnClick(EventArgs e)
          at System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument)
          at System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument)
          at System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument)
          at System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData)
          at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)

    Can somebody tell me how I can delete this web application? Thanks in advance!

    Read the article

  • How should I use my new SSD drive?

    - by jasondavis
    I just built a new PC the other day. Specs:
    Processor: Intel i7-930 quad-core CPU
    CPU cooler: Cooler Master Hyper 212
    Motherboard: ASRock X58 Extreme3
    RAM/memory: 6 GB G.Skill triple-channel DDR3 (3 sticks of 2 GB; planning to get another kit to make it 12 GB total soon)
    Operating system hard drive: Intel X25-M 80 GB Mainstream SATA2 solid state drive
    Video cards: 2 XFX ATI Radeon HD 4650 cards to run 3-4 monitors
    Case: Lian Li PC-B10 midtower
    Power supply: Antec TruePower New TP-750 Blue 750W
    Operating system: Windows 7 Pro 64-bit
    Not sure if the specs are helpful at all, but I posted them just in case. So I got everything put together and running great so far, but I need some advice/ideas/help/tips. I got the SSD in hopes of using it strictly for my Windows 7 install along with all the other programs I install. I am then going to get another drive or two just for data (video, music, photos, etc). My plan is to install the new data drives and then, in Windows 7, change my "My Documents", "My Music", "My Videos" and "My Photos" libraries to be located on the data drives instead of the OS SSD. I would ultimately like to install all my programs with my Windows install on the SSD, then create an IMAGE of the drive, so that 6 months down the road if things are sluggish I can just wipe the drive and restore the IMAGE with all my programs and settings intact. So here are some questions now.
    1) How can I verify that TRIM is working on my new SSD?
    2) Is there anything above that I missed that I should be doing? I think I once read that there is a page file or some similar file that Windows changes a lot and that should be moved off an SSD and onto the data drives. Does anyone know what I might have heard? If you do, can you explain the pros and cons of doing such a thing, as well as how to do it?
    3) Any tips or advice to get the best performance from all this? I built a pretty nice system and I just want to keep it that way as long as I can.

    Read the article

  • Routing application traffic through specific interface

    - by UnicornsAndRainbows
    Hello all! First question here, so please go easy: I have a Debian Linux 5.0 server with two public interfaces. I would like to route outbound traffic from one instance of an application via one interface and from the second instance through the second interface. There are some challenges:
    - both instances of the application use the same protocol
    - both instances of the application can access the entire internet (so I can't route based on destination network)
    - I can't change the application's code
    I don't think a typical approach to load balancing all traffic is going to work well, because relatively few destination servers are accessed in the outbound traffic, and all traffic would really need to be distributed pretty evenly across these relatively few servers. I could probably run two virtualized servers on the box and bind each of them to a different external IP, but I'm looking for a simpler solution, maybe using iproute or iptables? Any ideas for me? Thanks in advance - and I'm happy to answer any questions.
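
    For illustration, if the two instances can run as different Unix users, one common sketch combines iptables' owner match with policy routing. User name, interface, table number, and addresses below are hypothetical placeholders:

        # mark packets generated by the second instance's user
        iptables -t mangle -A OUTPUT -m owner --uid-owner app2 -j MARK --set-mark 2
        # rewrite their source address to the second public IP
        iptables -t nat -A POSTROUTING -m mark --mark 2 -j SNAT --to-source 203.0.113.2
        # route marked traffic out via the second interface
        ip rule add fwmark 2 table 102
        ip route add default via 203.0.113.1 dev eth1 table 102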

    Read the article

  • How can I simulate a slow machine in a VM?

    - by Nathan Long
    I'm testing an AJAX-heavy web application. I develop on a new Mac, but I use VMware Fusion (currently 3.1.2) to test in Windows XP, using IETester to simulate older versions of IE. This lets me see how older IE versions would render the site, but I'd also like to see how the site would perform on an older machine. I see in the VM's settings that I can decrease the RAM; is there a way to also dial down the processor speed? How else might I simulate a slow machine? (I am also going to check out how to simulate a slow internet connection.)
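
    For the slow-connection part, OS X of that era shipped ipfw with dummynet, which can shape the VM's traffic from the host. A rough sketch, with arbitrary bandwidth/latency values:

        # cap bandwidth and add latency on a dummynet pipe
        sudo ipfw pipe 1 config bw 56Kbit/s delay 300ms
        # send HTTP traffic in both directions through the pipe
        sudo ipfw add 100 pipe 1 tcp from any to any 80
        sudo ipfw add 101 pipe 1 tcp from any 80 to any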

    Read the article

  • Logitech Performance MX Mouse Jumps on OS X Lion (10.7.4)

    - by Adam Thompson
    I have a Logitech MX Revolution wireless mouse that I am trying to use with OS X Lion. Everything is working except for one problem... there is a small, but quite noticeable, jump when the mouse cursor is moved. The problem is mostly prevalent when dragging and dropping files or trying to highlight items. It makes performing any task with the mouse accurately next to impossible. I did quite a bit of looking and found that all kinds of people have had mouse issues with OS X. I've tried all of the following with absolutely no success:
    - Using the official drivers from Logitech (these performed worse than the default mouse drivers in OS X)
    - Using SteerMouse as a third-party mouse driver. This worked ever so slightly better than the default driver, but still suffered quite frequently from the skipping problem
    - Cleaning the sensor on the mouse and ensuring it's not the result of the surface it's being used on
    - Testing the mouse on a Windows machine. The mouse worked absolutely flawlessly on the other machine
    - Changing the channel my wireless router operates on, on the off chance my problems were the result of interference. This also had no effect
    I can't think of anything else that could possibly interfere with the mouse. I am out of ideas on what to try, so I would really appreciate any suggestions. I should also mention that an old wired mouse I had lying around worked just fine when I plugged it in. This really isn't the best solution, however, as I really prefer the MX Revolution.

    Read the article

  • What is the computer "doing" when it is running slow and task manager is not showing any CPU activity?

    - by Joakim Tall
    A typical example is when shutting down a memory-intensive application. It can take quite a while before the computer gets back up to speed. Is there some inherent cost in releasing memory? Or is it throttled by some kind of hard-drive activity, and if so, is there any good way to track that? I usually bring up Task Manager when a computer is running slow, and usually sorting by CPU activity shows which process is causing the problem, but sometimes there is no activity showing. And yes, I "show processes from all users"; I have been wondering this since the days of Win2k :)

    Read the article

  • Large, high performance object or key/value store for HTTP serving on Linux

    - by Tommy
    I have a service that serves images to end users at a very high rate using plain HTTP. The images vary between 4 and 64 kbytes, and there are 1,300,000,000 of them in total. The dataset is about 30 TiB in size, and changes (new objects, updates, deletes) make up less than 1% of the requests. The number of requests per second varies from 240 to 9000 and is dispersed pretty evenly across the dataset, with few objects being especially "hot". As of now, these images are files on an ext3 filesystem distributed read-only across a large number of mid-range servers. This poses several problems:
    - Using a filesystem is very inefficient, since the metadata size is large, the inode/dentry cache is volatile on Linux, and some daemons tend to stat()/readdir() their way through the directory structure, which in my case becomes very expensive.
    - Updating the dataset is very time consuming and requires remounting between set A and B.
    - The only reasonable handling is operating on the block device for backup, copying, etc.
    What I would like is a daemon that:
    - speaks HTTP (get, put, delete and perhaps update)
    - stores data in an efficient structure; the index should remain in memory, and considering the number of objects, the overhead must be small
    - can handle massive numbers of connections, with little (if any) time needed to ramp up
    - reads the index into memory at startup
    Statistics would be nice, but not mandatory. I have experimented a bit with Riak, Redis, MongoDB, Kyoto and Varnish with persistent storage, but I haven't had the chance to dig in really deep yet.

    Read the article

  • Log and debug/decrypt a Windows application's HTTPS traffic

    - by cweiske
    I've got a proprietary Windows-only application that uses HTTPS to speak with an (also proprietary, undocumented) web service. To ultimately be able to use the web service's functionality on my Linux machines, I want to reverse-engineer the web service API by analyzing the requests sent by the application. Now the question: how can I decrypt and log the HTTPS traffic? I know of several solutions which don't apply in my case:
    - Fiddler is a man-in-the-middle HTTPS proxy, which I cannot use since the application doesn't support proxies. Also, I do not (yet) know if it works with self-signed server certificates, which I doubt.
    - Wireshark is able to decrypt SSL streams if you have the server's private certificate, which I don't have.
    - Any browser extension is out, since the application is not a browser.
    If I remember correctly, there have been some trojans that capture online banking information by hooking into/replacing the Windows crypto API. Since the machine is mine, low-level changes are possible. Maybe there is a non-trojan (white-hat) network logging application out there which does the same? There is a blackhat presentation with some details available to read. They refer to Microsoft Research Detours for easy API hooking.
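
    Since the application ignores proxy settings, one workaround sometimes suggested is to route the Windows machine through a Linux gateway and transparently redirect TLS traffic to an intercepting proxy whose CA certificate has been installed on the Windows box. A sketch of the redirect side only; the proxy itself (e.g. mitmproxy or sslsplit in transparent mode) must be listening on the chosen port, here 8080:

        # on the Linux gateway: forward packets and divert HTTPS to the local proxy
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-ports 8080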

    Read the article

  • Puzzling TCP performance over 3G / UMTS

    - by lemonsqueeze
    I'm using 3G as my primary internet connection, and TCP over this thing is getting more puzzling every day. For example: downloading from kernel.org is crazy fast: $ wget http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.6.8.tar.bz2 increases to ~500 kB/s after a few secs! Some servers are incredibly slow, for instance www.graphic-pc.com: same thing, downloading a big file with wget, it starts at ~30 kB/s for a split second, then collapses to 5-10k or even worse. Web browsing is decent but somewhat unreliable. Randomly, a page will take really long to load or even fail to load, but a reload can succeed almost immediately. Now, by chance I started playing with OpenVPN over UDP on top of the 3G connection, and OMG, suddenly everything's extremely fast! The same www.graphic-pc.com now shoots along at 100-200 kB/s! What's going on here? How come it is so much better with the VPN than without? And why does graphic-pc.com crawl when kernel.org flies? Something to do with my TCP stack (or the server), or some buggy router in between? Notes: the setup is a laptop running Ubuntu Lucid and a Huawei 3G dongle (so a direct pppd connection). I can reproduce this pretty much any time during the day and I'm not moving, so it's clearly not the cell environment or internet congestion (although kernel.org without VPN sometimes does worse in the evening, 60 kB or so - but still 500 kB with VPN!). For the slow-server case, Wireshark shows retransmitted packets, dup ACKs, even out-of-order packets sometimes. I've tried playing with different /proc/sys/net/ipv4 parameters (tcp_rmem, window_scaling, tcp_congestion...); it doesn't seem to make a difference. Update: tried under Windows 7 (no VPN) with some interesting results:

        TCP settings:      default     tcp_optimizer
        kernel.org         10 kB/s     20 kB/s
        graphic-pc.com      8 kB/s     70 kB/s (!)

    tcp_optimizer turned on CTCP among other things. Have to check what OS graphic-pc.com is running; my bet is Linux's tcp_westwood and MS CTCP don't mix well here...
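
    On the Linux side, the congestion-control algorithm is one of the /proc/sys/net/ipv4 knobs worth testing explicitly; a small sketch (Westwood+ is just one example often suggested for lossy wireless links, and its module must be available):

        # see what is in use and what is available
        sysctl net.ipv4.tcp_congestion_control
        sysctl net.ipv4.tcp_available_congestion_control
        # try TCP Westwood+
        sudo modprobe tcp_westwood
        sudo sysctl -w net.ipv4.tcp_congestion_control=westwood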

    Read the article

  • Migrate TFS 2010 Application Tier to another server on the same domain

    - by Liam
    I'm in the process of looking into the possibility of moving our TFS 2010 Application Tier to another server from the one it is on at the moment so that we can repurpose the hardware. I've been looking through the Microsoft Documentation over at http://msdn.microsoft.com/en-us/library/ms404869.aspx, but this assumes that everything is stored on one box (Application and Data tiers). In my setup however, our Data tier is separate to the Application tier and will be staying where it is. I think I should be able to do this but for my own peace of mind, would there be any issues or implications if I merely installed the Application Tier on the new hardware and then connected it to the existing data tier?

    Read the article

  • How to prevent slow printer performance when AD is not available

    - by AKoran
    When I take a domain-based computer (Windows XP) and plug it into a network that doesn't have access to AD, the first time I select a local printer (printing directly to the printer) on the current network, it takes a good 20-30 seconds before I can select the printer. Doing a little investigating with Wireshark, I can see the computer is trying to reach AD for some reason and it just keeps timing out. I also tried the same experiment with a plain workgroup computer, and it was able to bring the printer up immediately. Does anyone know how to prevent the machine from trying to contact AD?

    Read the article

  • Change the Mac notification sound on a per-application basis

    - by Mark Szymanski
    By default on Mac OS X there is a system-wide notification sound that you can choose. This sound is applied to every application and played whenever the application outputs a beep (for instance, when typing a keyboard shortcut that doesn't work, or during a terminal beep). Is there any way to change what sound this is on a per-application basis? Specifically, I'm looking to change the sound Terminal.app uses, while every other app uses another sound.

    Read the article

  • Search for files which will open with a certain application in Mac OS X

    - by Jacob Palme
    In Mac OS X, when you double-click a file name, that file will open with the application which created it. So information on which application created the file must be stored somewhere in the file's metadata. Note that this is not the file extension; the file can have any extension, or no extension at all. Two questions regarding this information: (1) How can I search for all files which will open with a specific application? (2) How can I see, and change, the application which a certain file will open with?

    Read the article

  • Polling performance on shared host

    - by Azincourt
    I am planning on writing a small browser game. The webserver is a shared server, with no root access / installs possible. I want to use AJAX for client/server communication. There will be 12 players. Each player would be polling the server for the current game status every X milliseconds (let's say 200 ms). That would be 12 players x 5 requests per second each (one every 200 ms) = 60 requests per second. Can Apache handle those requests? What might be the bottlenecks when using this approach?
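
    As a rough capacity check, ApacheBench can approximate this load from another machine; a sketch assuming a hypothetical /status endpoint:

        # 12 concurrent keep-alive clients, 6000 requests total (~100 s at 60 req/s)
        ab -k -c 12 -n 6000 http://example.com/status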

    Read the article

  • 100% uptime for a web application

    - by Chris Lively
    We received an interesting "requirement" from a client today. They want 100% uptime with off-site failover on a web application. From our web application's viewpoint, this isn't an issue. It was designed to be able to scale out across multiple database servers, etc. However, from a networking standpoint I just can't seem to figure out how to make it work. In a nutshell, the application will live on servers within the client's network. It is accessed by both internal and external people. They want us to maintain an off-site copy of the system that, in the event of a serious failure at their premises, would immediately pick up and take over. Now we know there is absolutely no way to resolve it for internal people (carrier pigeon?), but they want the external users to not even notice. Quite frankly, I haven't the foggiest idea how this might be possible. It seems that if they lose internet connectivity, then we would have to make a DNS change to forward traffic to the external machines... which, of course, takes time. Ideas? UPDATE: I had a discussion with the client today and they clarified the issue. They stuck by the 100% number, saying the application should stay active even in the event of a flood. However, that requirement only kicks in if we host it for them. They said they would handle the uptime requirement if the application lives entirely on their servers. You can guess my response.
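
    On the DNS angle, the failover delay is bounded below by the record's TTL (plus any resolvers that ignore it); the current TTL is easy to inspect with dig (hostname hypothetical):

        # the first number after the name is the remaining TTL in seconds
        dig +noall +answer app.example.com A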

    Read the article

  • System Lags/Freezes when under high usage

    - by tom
    I am not sure if it's my GPU, memory, or hard drive that's failing. For example, if I'm running more than one instance of Chrome plus an application that takes up a lot of resources, my system will start to lag and freeze. When I launch Photoshop, the GPU feature disables itself automatically; Photoshop also lags when I click on menus and when working on documents. I really don't know where to start: should I buy a new graphics card, test the memory, or could it be my OS drive? System: Windows 7 64-bit, ATI Radeon 5850, Corsair 2x4GB http://i.stack.imgur.com/qqkLZ.jpg
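
    To rule the disk in or out, SMART data is a quick first check; from a Linux live USB with smartmontools installed (there is also a Windows build), a sketch with a hypothetical device name:

        # overall health verdict plus error and reallocated-sector counters
        sudo smartctl -H -a /dev/sda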

    Read the article

  • Measuring performance indicators on a cluster

    - by Aditya Singh
    My architecture is based on Amazon. An ELB load balancer balances POST requests among m1.large instances. Every instance has an nginx server on port 80 which distributes the requests to 4 python-tornado servers on the backend which handle the request. These Tornado servers take about 5-10 ms to respond to one request, but this is the internal compute time of every request. I want to put this under test: I want to measure the response time from the ELB to the upstream and back, how it varies as the QPS throughput is increased, and plot a graph of time vs. QPS vs. latency and other factors like CPU and memory. Is there software to do that, or should I log everything somewhere with latency checks and then analyze the whole log to extract the data? I would also need to write a self-monitor which keeps checking the whole response time. Is it possible to do this with a script from within the server? If so, will it be accurate?
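
    Since nginx already sits in front of the Tornado backends, one low-effort option is to log per-request timings there and analyze the log offline; $request_time and $upstream_response_time are standard nginx log variables. A sketch (file path hypothetical, and the log_format must end up inside the http block, which conf.d usually is):

        cat >>/etc/nginx/conf.d/timing.conf <<'EOF'
        log_format timing '$remote_addr "$request" $status '
                          'req=$request_time upstream=$upstream_response_time';
        EOF
        # then, in the server block: access_log /var/log/nginx/timing.log timing;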

    Read the article

  • Oracle redo log performance degradation when inserting

    - by Aldarund
    I have an Oracle 11g database. I'm testing it for inserts. The database is running in NOARCHIVELOG mode. I have 3 redo logs configured, each 2 GB. I'm trying to insert data into a test table. At the beginning it goes fine at 15k inserts/second. I commit after every 200 inserts. But after about 1.3M inserted records it becomes really slow, about 1-2k inserts/second. As I noticed in the resource monitor, at this point all the redo logs have been filled, and from this point on the inserts are slow. So my question is: why does it become so slow once the redo logs fill, even though I commit every 200 records? And how can this situation be fixed (other than turning off logging completely for inserts)?
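
    To confirm that log switches are the bottleneck, the classic checks are the v$log status and the "log file switch" wait events; a sketch, assuming SYSDBA (or equivalent) access:

        sqlplus -s / as sysdba <<'EOF'
        -- current redo groups: ACTIVE groups waiting on checkpoint are the red flag
        SELECT group#, bytes/1024/1024 AS mb, status FROM v$log;
        -- cumulative waits on log switches since instance start
        SELECT event, total_waits, time_waited
          FROM v$system_event
         WHERE event LIKE 'log file switch%';
        EOF

    If waits like "log file switch (checkpoint incomplete)" dominate, larger or additional redo logs, or faster checkpointing, is the usual remedy.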

    Read the article

  • Performance Bottleneck with Photoshop CS3 on XPSP3

    - by Doozer1979
    I have an Intel Core 2 4400 with 4 GB of RAM running XP 32-bit SP3. Photoshop CS3 becomes sluggish and unresponsive even after loading small files, and this is with only Bridge open as well, plus McAfee AV. My photos are loaded from a USB 2 external drive, and my C: drive is used only for programs and Windows itself. Even with 4 GB of RAM, I am seeing the pagefile increase to 1.6 GB, while there appears to be 1.5 GB of RAM free to use. I've defragged the drive with Defraggler, and after that the only file reported as fragmented was the pagefile itself. Anyone have any ideas what I can do to improve/solve this?

    Read the article

  • Compiz: Switching focus by application instead of by window

    - by Ivan Vucica
    I got used to the OS X way of doing things (separate shortcuts for switching between applications and switching between the current application's windows). Is there a way to get Compiz to have a shortcut (such as Super+Tab) to switch between applications ("window groups") instead of between windows? I already got the "Scale" plugin (an Exposé clone) to display only windows from the current window group, proving there is a way to group by application, but I cannot find a way to get the "Application Switcher" to switch between these groups instead of between the windows themselves.

    Read the article

  • How to configure a hostname for a sub-application?

    - by BrunoLM
    I have a structure like this:

        Website
        |- Controllers
        |- Models
        |- Views
        |- Content
        |- Static (APP)

    Website is an application using an ASP.NET 4.0 pool. Static is a sub-application using an unmanaged application pool. In the Website bindings I've set local.domain.com with access through port 80. I want to access the Static app using static.domain.com, but I can't find the option to configure the binding. How can I configure it like that?

    Read the article

  • Prevent Java application from accessing/monitoring/altering clipboard contents

    - by mcstrother
    I'm a student using a service that provides practice questions for standardized tests. The service requires that I access the questions by downloading and running a Java application. If I try to copy anything from any window on my computer (including applications unrelated to the question bank) while the application is running, the copied item is replaced with an obnoxious message asking me not to pirate their copyrighted material. I find this intrusive, and I also really don't like the idea that any application can slurp up any and all potentially sensitive information that I happen to copy while it's running. Is there a way to limit the privileges of this application to stop it from doing this? Thank you!
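
    Depending on how the application is launched, one avenue is running it under a Java security manager with a policy that grants what it legitimately needs but omits java.awt.AWTPermission "accessClipboard", so clipboard calls fail with a SecurityException. A rough sketch; the jar name is hypothetical, and the app may need further permissions, which will show up as AccessControlExceptions:

        cat > restricted.policy <<'EOF'
        grant {
            // broad grants for illustration -- deliberately NOT granting
            // java.awt.AWTPermission "accessClipboard"
            permission java.io.FilePermission "<<ALL FILES>>", "read,write,delete";
            permission java.net.SocketPermission "*", "connect,resolve,listen,accept";
            permission java.util.PropertyPermission "*", "read,write";
            permission java.lang.RuntimePermission "*";
        };
        EOF
        java -Djava.security.manager -Djava.security.policy=restricted.policy -jar questionbank.jar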

    Read the article

  • Linux/Apache performance very slow even on local network

    - by klausch
    I have an Ubuntu server machine running Apache and MySQL. System and version info is as follows: Linux kernel 3.0.0-12, Apache/2.2.20, MySQL Ver 14.14 Distrib 5.1.58. I am running a few websites on this server, some HTML only, some PHP/MySQL. The problem is that response time is very slow, on the static as well as the dynamic sites. Sometimes it takes more than 10 seconds before a response is given, which makes the sites very slow and almost unusable. The problem occurs even when requesting from the local network. I have added the involved subdomains to my /etc/hosts file, and above all, the problem is not solved by using IP numbers instead of URLs. So there is no DNS lookup issue. I have modified the log format to show the response times, and sometimes a file takes 12 seconds to be served; see the jquery~.js file in the example screenshot. I have no explanation for this extremely long response time, but it is not even the only issue here; some other files take a long time to be served too, but do not show a long response time in the log file. So probably different issues are involved here. I cannot find a solution so far, any suggestions??? Thanks in advance, Klaas (link to screenshot picture from access logfile) Some extra configuration info, apache2.conf (comments removed):

        LockFile ${APACHE_LOCK_DIR}/accept.lock
        PidFile ${APACHE_PID_FILE}
        Timeout 300
        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 5
        <IfModule mpm_prefork_module>
            StartServers 5
            MinSpareServers 5
            MaxSpareServers 10
            MaxClients 150
            MaxRequestsPerChild 0
        </IfModule>
        <IfModule mpm_worker_module>
            StartServers 2
            MinSpareThreads 25
            MaxSpareThreads 75
            ThreadLimit 64
            ThreadsPerChild 25
            MaxClients 150
            MaxRequestsPerChild 0
        </IfModule>
        <IfModule mpm_event_module>
            StartServers 2
            MinSpareThreads 25
            MaxSpareThreads 75
            ThreadLimit 64
            ThreadsPerChild 25
            MaxClients 150
            MaxRequestsPerChild 0
        </IfModule>
        User ${APACHE_RUN_USER}
        Group ${APACHE_RUN_GROUP}
        AccessFileName .htaccess
        <Files ~ "^\.ht">
            Order allow,deny
            Deny from all
            Satisfy all
        </Files>
        DefaultType text/plain
        HostnameLookups Off
        ErrorLog ${APACHE_LOG_DIR}/error.log
        LogLevel warn
        Include mods-enabled/*.load
        Include mods-enabled/*.conf
        Include httpd.conf
        Include ports.conf
        LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
        LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" %T/%D" combined
        LogFormat "%h %l %u %t \"%r\" %>s %O" common
        LogFormat "%{Referer}i -> %U" referer
        LogFormat "%{User-agent}i" agent
        Include conf.d/
        Include sites-enabled/

    And the virtual host file for one of the slow sites; in fact it is pretty straightforward:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerSignature EMail
            ServerName toenjoy.drsklaus.nl
            DocumentRoot /var/www/toenjoy.drsklaus.nl
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/toenjoy.drsklaus.nl/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride AuthConfig
                AuthType Basic
                AuthName "To Enjoy"
                AuthUserFile /etc/.htpasswd
                Require user petraaa
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    And the output of free -m:

        klaas@ubuntu-server:/etc/apache2$ free -m
                     total       used       free     shared    buffers     cached
        Mem:          1997       1401        595          0        144       1017
        -/+ buffers/cache:        238       1758
        Swap:         2035          0       2035

    I have no indication that swapping occurs at the moments the site is slow. I have run top and it does not appear to be a CPU issue. I have the impression that the spawning of an Apache thread could be the bottleneck, but that is just a guess. Maybe this gives some extra information! EDIT: The problem seemed to be gone for some time but has occurred again! And not only with Apache: connecting over SSH also takes a tremendous time, sometimes up to 15 seconds before the passphrase prompt appears. scp also works very slowly. The behaviour is really unpredictable and makes the server very hard to use. Any ideas...?
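
    Given that both Apache responses and SSH logins stall, timing the phases of a request separately can narrow things down; curl's standard write-out timers break a fetch down like this:

        # fast connect but long ttfb points at processing inside Apache (or its auth)
        curl -o /dev/null -s -w 'dns=%{time_namelookup}s connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' http://toenjoy.drsklaus.nl/

    If even the SSH handshake stalls before the passphrase prompt, machine-wide suspects such as reverse DNS lookups on the server side or entropy starvation are worth checking alongside Apache itself.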

    Read the article
