Search Results

Search found 12531 results on 502 pages for 'resume download'.


  • Save a single web page (with background images) with Wget

    - by mikael
    I want to use Wget to save single web pages (not recursively, not whole sites) for reference, much like Firefox's "Web Page, complete". My first problem: I can't get Wget to save background images specified in the CSS. Even if it did save the background image files, I don't think --convert-links would convert the background-image URLs in the CSS file to point to the locally saved copies. Firefox has the same problem. My second problem: if there are images on the page I want to save that are hosted on another server (like ads), these won't be included. --span-hosts doesn't seem to solve that problem with the line below. I'm using:

      wget --no-parent --timestamping --convert-links --page-requisites \
           --no-directories --no-host-directories -erobots=off \
           http://domain.tld/webpage.html
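
    For reference, a hedged variant: --span-hosts is actually absent from the command line above, and for a single-page grab it only has something to act on when --page-requisites is also given, so adding it may be all that's needed for the cross-host images. Newer Wget (1.12 and later, if I recall correctly) also parses CSS for url() references, which addresses the background-image problem. A sketch:

      wget --span-hosts --no-parent --timestamping --convert-links \
           --page-requisites --no-directories --no-host-directories \
           -e robots=off http://domain.tld/webpage.html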

    Read the article

  • Server not responding to SSH and HTTP but ping works

    - by yes123
    Hello guys, I requested a hard reboot because neither SSH nor HTTP responded; ping worked normally. Which logs should I check to understand what the problem was? Thanks! (Debian 6 running a LAMP stack.) Edit: my memory and swap:

      Mem:  4040068k total, 1114920k used, 2925148k free, 109212k buffers
      Swap: 1051384k total,       0k used, 1051384k free, 283820k cached

    4 GB of RAM (and more than 1 TB of HDD). The cause dates from 2 days ago: look how the swap usage climbs more than 60% in less than 10 hours. My control panel reports these as the top 5 processes by memory usage: if every apache2 process is 190 MB, that's bad, because top shows 262 sleeping processes, most of them apache2! My Apache mpm_prefork settings are:

      <IfModule mpm_prefork_module>
          StartServers            5
          MinSpareServers         5
          MaxSpareServers        10
          ServerLimit          1500
          MaxClients           1500
          MaxRequestsPerChild  2000
      </IfModule>
      KeepAlive On
      MaxKeepAliveRequests 100
      KeepAliveTimeout 4
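
    For scale, a back-of-envelope check using the numbers in the question: 1500 prefork children at ~190 MB each would need roughly 285 GB of RAM, so a 4 GB box will swap itself to death long before the limit is reached. A hedged sketch of a saner mpm_prefork stanza for 4 GB (the exact values depend on the real per-child footprint, so treat them as placeholders):

      <IfModule mpm_prefork_module>
          StartServers          5
          MinSpareServers       5
          MaxSpareServers      10
          ServerLimit          20    # ~20 children * ~190 MB = ~3.8 GB worst case
          MaxClients           20
          MaxRequestsPerChild 500    # recycle children sooner if they leak
      </IfModule>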

    Read the article

  • Download discussions from usenet

    - by user22559
    Hello. Does anyone know a good (and maybe free) usenet client that would allow me to save all the discussions from one (or more) groups to text files? Preferably each post to its own text file. I need this to run some data mining on those discussions. Thanks :)

    Read the article

  • Why do my downloads top out at ~1500 KByte/s when the ADSL connection syncs at 13611 Kbit/s?

    - by leladax
    No uploading is going on other than the overhead of downloading, which appears to be well within the connection's abilities: only about 30-40 KByte/s, while the router locks at 1012 Kb/s upstream and other direct uploads can reach more than 100 KByte/s, so I don't think upload congestion is doing it. Is there something I'm missing? I assume 13611 Kbit/s should be ~1701 KByte/s. Is it overhead at the ADSL level that I don't understand? Could it be the ISP doing it? If it's active throttling, it can't be per-connection, since 2 high-speed connections still top out at ~1500 KByte/s. This isn't about torrents or other complex situations. The tests were over Ethernet, but I doubt the results would differ on wireless. I wonder if the connection settings at my end could be doing it, e.g. MTU, though I haven't touched the defaults of a common Realtek NIC.
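
    A back-of-envelope calculation that roughly fits the observed numbers (a sketch, assuming the 13611 Kbit/s figure is the ATM-layer sync rate, which is what ADSL modems usually report): each 53-byte ATM cell carries only 48 bytes of payload, and PPPoE/IP/TCP headers typically eat around another 3%.

      # 13611 Kbit/s of ATM cells, 48/53 of which is payload, in KByte/s:
      echo $(( 13611 * 48 / 53 / 8 ))    # => 1540 KByte/s
      # minus ~3% for PPPoE/IP/TCP headers: ~1495 KByte/s -- almost exactly
      # the ~1500 KByte/s ceiling in the question, with no throttling needed.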

    Read the article

  • Why is Dropbox syncing so freakishly slowly in my Linux virtual machine?

    - by Bec
    I am setting up a Linux virtual machine (Windows 7 64-bit host, Ubuntu 64-bit guest, using VirtualBox) and I just installed Dropbox and set it to sync. I've only got about 2 GB in there, so I figured it would take just an afternoon, but it's going at about 0.5 kB/s and says it will take about 60 days. I usually get about 200 kB/s in the host OS, and downloading straight from the Dropbox website through Firefox in the Ubuntu VM I get about the same, but sync is really slow. Any tips?
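
    Not from the thread, just a hedged experiment that is cheap to try with VirtualBox guests showing pathological network speeds: change the emulated NIC type (virtio is often faster than the default emulations) or switch from NAT to bridged networking and compare. The VM and adapter names below are placeholders:

      # Run on the host while the VM is powered off; "ubuntu-vm" is hypothetical.
      VBoxManage modifyvm "ubuntu-vm" --nictype1 virtio
      # The other usual experiment: bridged networking instead of NAT
      # (the adapter name must match one on the Windows host).
      VBoxManage modifyvm "ubuntu-vm" --nic1 bridged --bridgeadapter1 "Local Area Connection"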

    Read the article

  • Setting up a download limit for a computer

    - by sprsr
    Hello all, a friend asked me the following: "I have a modem and a housemate. He is using my modem and slowing down my internet. What I want to do is limit his bandwidth without using any program like NetLimiter or so. How can I do that?" What are the ways to do this? Thanks.
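
    Since the question rules out client-side programs, the usual answer is shaping at the gateway. A minimal sketch with Linux tc, assuming a Linux box (or OpenWrt-style router) forwards the housemate's traffic; the interface name and IP address are hypothetical, and many home routers expose the same idea as a "QoS" settings page:

      # Cap everything destined for 192.168.1.50 at 2 Mbit/s on the LAN side.
      tc qdisc add dev eth0 root handle 1: htb default 10
      tc class add dev eth0 parent 1: classid 1:10 htb rate 100mbit
      tc class add dev eth0 parent 1: classid 1:20 htb rate 2mbit ceil 2mbit
      tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
          match ip dst 192.168.1.50/32 flowid 1:20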

    Read the article

  • How to correctly download Tomcat 6 on CentOS 5.5

    - by user582862
    Hi guys, I am a bit confused about how to install Tomcat 6 on CentOS 5.5 Final. This is what I am trying to do:

      # cd /etc/yum.repos.d/
      # wget http://jpackage.org/jpackage50.repo
      # yum install tomcat6 tomcat6-webapps tomcat6-admin-webapps

    But when I run the wget command, this is what I get:

      Resolving www.jpackage.org... failed: Temporary failure in name resolution.
      wget: unable to resolve host address `www.jpackage.org'

    Could anyone kindly show me the right way, please? I'm really in trouble with this at the moment. Thanks in advance.
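
    The error is DNS resolution failing, not anything Tomcat- or JPackage-specific. A hedged first-aid sketch (8.8.8.8 is just a public resolver used as a stand-in):

      ping -c 1 jpackage.org              # does any name resolve at all?
      cat /etc/resolv.conf                # is a nameserver configured?
      # temporary workaround if resolv.conf is empty or stale (as root):
      echo "nameserver 8.8.8.8" >> /etc/resolv.conf
      wget http://jpackage.org/jpackage50.repo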

    Read the article

  • What causes PHP pages to consistently download instead of running normally

    - by Jonathan
    Hi, I'm running an Ubuntu Server in a VM to test out different web forum solutions. I have set up ~/public_html/ to be accessible through the Apache2 web server, and that works fine. However, when I open a .php file in a browser (using my VM's IP address: /~username/phpfile.php), it does not display as it should; instead the browser offers to save the file / asks what program to open it with. Interestingly, that dialog box does recognise that it is a PHP file. I have the following version of PHP installed:

      PHP 5.3.2-1ubuntu4.5 with Suhosin-Patch (cli) (built: Sep 17 2010 13:49:46)
      Copyright (c) 1997-2009 The PHP Group
      Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies

    And the following server:

      Server version: Apache/2.2.14 (Ubuntu)
      Server built:   Nov 18 2010 21:19:09

    If anyone knows what might be causing this or potential solutions, it would make me very happy :) EDIT: Turns out this behaviour only applies to files in the ~/public_html/ directory; all PHP files in /var/www/ work fine. Prizes go to whoever can explain why? :D (And by prizes I just mean a "well done", no actual prizes I'm afraid.)
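
    A hedged explanation for the edit, from memory of Ubuntu's packaging of that era: /etc/apache2/mods-available/php5.conf ships with a block that switches the PHP engine off for per-user public_html directories, which would produce exactly this /var/www-works, ~/public_html-doesn't split. The stock block looks roughly like this; commenting out the inner Directory section and reloading Apache re-enables PHP in userdirs:

      <IfModule mod_userdir.c>
          <Directory /home/*/public_html>
              php_admin_value engine Off
          </Directory>
      </IfModule>
      # after editing: sudo service apache2 reload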

    Read the article

  • Why is Chrome receiving data?

    - by Aero
    Chrome seems to be continually receiving data even though I'm not downloading anything, and it's making a noticeable impact on my browsing speed. The first screenshot shows Chrome receiving data even though I'm not downloading anything (nor buffering a YouTube video, etc.). Even after I completely close Google Chrome, "chrome.exe" remains in the Resource Monitor list and its "Received bytes" column keeps increasing, as in the second screenshot; however, "chrome.exe" does not show up in the Processes tab of Task Manager. This only happens sometimes, and I don't know why. I have tried running malware/virus scans to make sure there is nothing malicious behind this, but they have shown nothing. Any ideas on what's causing this?

    Read the article

  • Download or view a server's WINS database

    - by Segfault
    I am trying to troubleshoot a WINS browsing problem in a Server 2008 AD forest. I am in one domain and the problem is with a sibling domain. What command can I use to dump or view the WINS database on a particular AD server, by name, in a different domain than mine? I thought one of the subcommands of net would have an option for this, but I can't find it. I also tried browstat.exe getblist, but it gives me the error message "The list of servers for this workgroup is not currently available". I am not a domain admin and have no rights to either domain beyond a normal user's. Does anyone know how this can be done?
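
    For what it's worth, a sketch from memory (so verify the exact syntax), and note that it normally requires WINS administration rights, which the question says are not available; the server name and address below are placeholders:

      netsh wins server \\winssrv01.sibling.example show database servers={10.0.0.5}
      # GUI equivalent: the WINS MMC snap-in's "Display Records" view.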

    Read the article

  • Partially downloaded a torrent file and renamed it

    - by user2613789
    I had partially downloaded a torrent file in Ubuntu and unfortunately renamed it. When I resumed the torrent some time later, it downloaded under its default name, so the file's data is now split across two different files. Please help me out: is there any way to combine the pieces into one complete file? I can't open either file, so it seems both are corrupted or incomplete. The file is in MP4 format. Please help!

    Read the article

  • BitTorrent Myth

    - by Moon .
    In BitTorrent statistics there is a field "Total Ratio", which is the ratio between total downloads and uploads. I have heard that this ratio affects BitTorrent's performance: if the ratio is good, the BitTorrent network serves you with priority, and if the ratio is low (fewer uploads), you are served at average or below-average priority. Is there anything like that?

    Read the article

  • OS X software updates download but don't install

    - by ridogi
    I've got three 10.6 computers that won't install OS X updates. Checking for new software shows about a dozen updates (security updates, Safari, iPhoto, printers, etc.), and if I choose Install, it downloads them. After downloading and clicking Restart, the computer sits at the purplish-sky desktop with no progress bar, and after about 3 minutes it goes back to the login window (without ever installing or restarting). If I then check for updates again, the same updates are all presented and I can repeat the process. Manually downloading and installing an update such as the 10.6.8 combo updater works as it should, and afterwards Check for Updates no longer offers that particular update. This seems to be the result of some setting or third-party application, since 3 out of 7 computers are experiencing this exact same problem. What could cause this, and how can I fix it?
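
    Before the next attempt, the installer logs are worth a look; on 10.6 they usually record why an install was abandoned between the download and the restart. A sketch:

      tail -n 100 /var/log/install.log    # install-phase errors land here
      syslog -k Sender softwareupdate     # messages from the update machinery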

    Read the article

  • Fresh install of nginx causes browser to download index.html instead of opening it

    - by 010110110101
    When I view http://localhost:90 in Chrome, the file is downloaded instead of displayed. This question has been asked many times on SO, but about index.php files; my problem is a plain-jane HTML file, not a PHP file, and that hasn't been asked yet. I was hoping the solution would be similar, but I haven't been able to figure it out. Here's my example.com.conf:

      server {
          server_name localhost;
          listen 90;
          root /var/www/example.com/html;
          index index.html;
          location / {
              try_files $uri $uri/ =404;
          }
      }

    My index.html file contains only two words, no markup: Hello World. I think it's the MIME types, but the mime.types file has the entry for html in it. This is a fresh nginx install, and nginx -t reports "test is successful".
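
    A hedged guess that matches the symptom: if the http block in the loaded nginx.conf never actually includes mime.types, every response goes out as the default_type (commonly application/octet-stream), which browsers download rather than render. Worth comparing against the stock layout:

      # /etc/nginx/nginx.conf -- the relevant stanza in a default install
      http {
          include       mime.types;                 # maps .html -> text/html
          default_type  application/octet-stream;   # served when nothing matches
          # server blocks / conf.d includes follow here
      }
      # quick check that the include really is in effect somewhere:
      grep -rn "mime.types" /etc/nginx/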

    Read the article

  • screen scraper templates for various websites

    - by intuited
    I'm looking specifically for a convenient way to locally archive posts from this and other similar sites. I'd like to separate the question itself from the answers, or maybe crop the question and store it, keeping the page title. Obviously I don't need to store the menu or the various other bits of site interface chrome. The best way to do this would seem to be to associate an XSLT template with a match on the URL and use that template to pull out the relevant information and format it. My two-part question:
    1. Is there a tool specifically built for this task? I.e. something that takes a URL and checks it against a map of path-matching expressions to templates, and outputs the result of applying the template to that resource? (A sketch follows below.) xmlto seems to be most of the way there, and could probably just be called from a script that does the pattern-matching, but something already integrated would be more convenient.
    2. Is such a URL_pattern-to-XSLT_template map publicly available somewhere?
    2.5. Is it legal to do this with sites like this one that have public licenses on their content?
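
    On question 1, the wrapper part is small enough to sketch in shell; xsltproc and tidy are real tools, while the stylesheet names and URL patterns here are hypothetical:

      #!/bin/sh
      # scrape.sh URL -- choose a stylesheet by URL pattern, coerce the page
      # to XHTML with tidy, then apply the template with xsltproc.
      url=$1
      case "$url" in
        *serverfault.com/questions/*) xslt=question.xsl ;;   # hypothetical map entry
        *) echo "no template for $url" >&2; exit 1 ;;
      esac
      curl -s "$url" | tidy -q -asxhtml --numeric-entities yes 2>/dev/null \
        | xsltproc "$xslt" -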

    Read the article
