Search Results

Search found 57810 results on 2313 pages for 'http delete'.


  • Installing CURL on Ubuntu Karmic

    - by Racertim
    Trying to get this up and running: https://github.com/cloudnull/massupload. I have everything except cURL installed, and when I attempt to install it, it fails with the following:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following NEW packages will be installed:
          curl
        0 upgraded, 1 newly installed, 0 to remove and 3 not upgraded.
        Need to get 196kB of archives.
        After this operation, 311kB of additional disk space will be used.
        WARNING: The following packages cannot be authenticated!
          curl
        Install these packages without verification [y/N]? y
        Err http://us.archive.ubuntu.com karmic/main curl 7.19.5-1ubuntu2
          404 Not Found [IP: 91.189.91.13 80]
        Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/c/curl/curl_7.19.5-1ubuntu2_i386.deb  404 Not Found [IP: 91.189.91.13 80]

    Thank you!
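
    Karmic (Ubuntu 9.10) is past end of life, and packages for EOL releases are moved off the primary mirrors to old-releases.ubuntu.com, which is why the fetch 404s. A minimal sketch of repointing apt, assuming a stock sources.list:

        # Switch the mirrors to the old-releases archive, then retry the install.
        sudo sed -i -e 's/us.archive.ubuntu.com/old-releases.ubuntu.com/g' \
                    -e 's/security.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install curl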

    Read the article

  • Apache Rewrite Rules

    - by Philip
    I have moved my website from a wiki to WordPress and, in the process, realised that I have broken links to some popular pages on my website. Is it possible to fix this with a rewrite rule? I need the rule to redirect anything matching "^/wiki/(.+)$" to "/$1", while also replacing the "_" character used in MediaWiki slugs with the "-" used in WordPress slugs. For example:

        http://example.com/wiki/An_Example_Page

    should be pointed to:

        http://example.com/an-example-page

    Is it possible to write such a rewrite rule? Edit: It appears that WordPress doesn't even care if the "/wiki/" part is removed - provided the slug matches, and that seems to be case-insensitive too. So all I need to do is change the "_" characters to "-" in the slugs.
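
    mod_rewrite has no single-pass global substitution, but the [N] flag re-runs the ruleset, converting one underscore per pass. A hedged sketch for an .htaccess at the document root (untested against this exact setup):

        RewriteEngine On
        # Turn one "_" into "-" per pass; [N] restarts the ruleset until none remain.
        RewriteRule ^wiki/([^_]+)_(.*)$ wiki/$1-$2 [N]
        # Once the slug is underscore-free, drop the /wiki/ prefix for good.
        RewriteRule ^wiki/(.+)$ /$1 [R=301,L]

    Per the edit, WordPress matches the slug case-insensitively, so no lowercasing step is needed.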

    Read the article

  • Is there a way to replicate very large file shares in real-time?

    - by fsckin
    I have an hourly cron job that copies about 40GB of data from a source folder into a new folder with the hour appended on the end. When it's done, the job prunes anything older than 24 hours. This data changes very often during work hours and is on a Samba file share. Here's how the folder structure looks:

        \\server\Version.1
        \\server\Version.2
        \\server\Version.3
        ...
        \\server\Version.24

    The contents of each new folder usually don't change very much compared to the last one, since this is an hourly job. Now you might be thinking that I'm an idiot for dreaming this up. Truth is, I just found out about it. It's actually been used for years, and it's so incredibly simple that anyone could delete the ENTIRE 40GB share (imagine that dialog spooling up... deleting thousands and thousands of files) and it would be faster to restore by moving the latest copy back to the source than it took to delete. Brilliant!

    Now, to top this off, I need to efficiently replicate this 960GB of "mostly similar" data to a remote server over a WAN link, with the replication happening as close to real-time as possible -- think hot spare, disaster recovery, etc. My first thought was rsync. Total failure. Rsync sees a deletion of the folder that is 24 hours old and the addition of a new folder with 30GB of data to sync! I also looked at rdiff-backup and unison; they both appear to use similar algorithms and do not keep enough metadata to do this intelligently. The best thing I can find "out of the box" to do this is Windows Server "Distributed Filesystem Replication", which uses "Remote Differential Compression" -- after reading the background information on how it works, it actually looks like exactly what I need. Problem: both servers are running Linux. D'oh!

    One approach I'm looking at is this; say it's 5AM and the cron job finishes:

    1. The new Version.5 folder arrives on the local server.
    2. SSH to the remote server and copy Version.4 to Version.5.
    3. Run rsync on the local server, pushing changes to the remote server. Rsync finally knows to do a differential copy between Version.4 and Version.5.

    Is there a smarter way to replicate Samba shares as close to real-time as possible? Anything out there that does "Remote Differential Compression" on Linux?
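
    A hedged sketch of the three steps above, with hostnames and paths as placeholders. Making the remote copy with hard links (cp -al) makes step 2 nearly free; rsync then re-creates only the files that actually changed, which also breaks their hard links, as desired:

        #!/bin/bash
        # Runs after the hourly job; PREV/NEW would be derived from the hour.
        REMOTE=dr-server.example.com
        SHARE=/srv/share
        PREV=Version.4
        NEW=Version.5

        # Seed the new version remotely as a hard-link copy of the previous one.
        ssh "$REMOTE" "cp -al '$SHARE/$PREV' '$SHARE/$NEW'"

        # Push only the differences between the local NEW and the remote copy.
        rsync -az --delete "$SHARE/$NEW/" "$REMOTE:$SHARE/$NEW/"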

    Read the article

  • Solr performance (tomcat) - High load

    - by Ward Loockx
    I'm relatively new to Solr. I have a production site running on a VPS, but now I'm having serious load issues. I don't know where to start in order to get the load down... VPS specs (linode.com 512): 512 MB RAM, 4 CPU (1x priority). It looks like my Solr server (Tomcat) is using a lot of CPU power. You can find my solrconfig.xml at http://pastebin.com/qdfi8Med and my schema.xml at http://pastebin.com/rRusDP8b. I've tried to increase the cache sizes, but this didn't do anything to the load. You can see the stats page below. EDIT - Because the first screenshot was unclear, I took smaller screenshots of what (I think) is important: the Dismax query handler stats and the cache stats. Thanks for the help!
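
    On a 512 MB VPS, a frequent culprit is an oversized JVM heap pushing the machine into swap, which shows up as load rather than as a Solr problem. A hedged first check, assuming Tomcat is started via catalina.sh (the values are guesses to be tuned against actual usage):

        # e.g. in a setenv.sh next to catalina.sh, or /etc/default/tomcat6
        export CATALINA_OPTS="-Xms128m -Xmx256m"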

    Read the article

  • Non-blocking service to receive messages on port via UDP

    - by stUrb
    I want to build a service on my Linux VPS which listens on a certain UDP port and does something with the (text) message which is captured. This processing consists of appending the message to a locally stored txt file and sending it on via HTTP, as a POST variable, to another server. I've looked into Nginx, but as far as I can see that server can only be bound to receive HTTP packets, although it is asynchronous. What is the best way to build this listening service on Linux, with the capability to do the processing mentioned above?
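
    A hedged sketch of one way to do this with socat and curl (both assumed installed); the port, file path, and target URL are placeholders:

        #!/bin/bash
        # Listen on UDP port 5000; append each text datagram to a local file,
        # then forward it as an HTTP POST variable to another server.
        socat -u UDP-RECV:5000 STDOUT | while IFS= read -r msg; do
            printf '%s\n' "$msg" >> /var/log/udp-messages.txt
            curl -s --data-urlencode "message=$msg" http://other.example.com/receive >/dev/null
        done

    This handles one message at a time; for real concurrency a small async program would be the next step up.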

    Read the article

  • Apache with mod_perl eating memory when idle

    - by syneticon-dj
    An Apache webserver running a mod_perl application is exhibiting abnormal memory usage - after the "day load" ceases, the system's memory is exhausted by the Apache processes and oom_killer is invoked. As the load returns the following morning, the memory usage normalizes - probably because Apache workers get recycled periodically if a sufficient number of hits is generated (the graph of Apache hits per second correlates with this). The remaining 2 hits per second throughout the night are induced by HAProxy checks - it runs "HEAD http://mydomain.example.com/running HTTP/1.0" requests against the server every half a second, with "running" being a static file (i.e. not invoking any Perl code). It also seems that disabling these checks remedies the memory usage problem, but that obviously cannot be a solution. All 3 similarly configured servers (behind HAProxy) exhibit this behavior. The OS is Ubuntu 10.10, the Apache version 2.2.16. This seems to be a memory leak, but I have no idea how to start debugging it - any hints?
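
    If the leak itself can't be found quickly, a common mod_perl stopgap is to recycle workers on a request budget rather than relying on load; even the overnight HAProxy checks would then trigger recycling. A hedged sketch for the prefork MPM (the number is a guess to be tuned):

        <IfModule mpm_prefork_module>
            # Retire each child after this many requests so slow leaks
            # cannot accumulate during long idle periods.
            MaxRequestsPerChild 1000
        </IfModule>

    Apache2::SizeLimit can do the same thing based on per-process memory size instead of a request count.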

    Read the article

  • Rsync with a list of variables

    - by EMKA
    I am trying to write a bash script that will rsync only a specific subset of folders. I am trying to figure out something slicker than just adding variables such as FOLDER1='name of folder in home directory' and then running:

        rsync -arvz --delete /home/emka/$FOLDER1/ /home/emka/Desktop/Mount/$FOLDER1

    Currently I have FOLDER1 through FOLDER13, but I do not want to have the above line thirteen times. Could someone give me a push on how to do this?
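
    A sketch using a bash array and a loop in place of thirteen numbered variables (the folder names here are placeholders):

        #!/bin/bash
        FOLDERS=(Documents Pictures Music)   # ...list all thirteen folders here
        for f in "${FOLDERS[@]}"; do
            rsync -arvz --delete "/home/emka/$f/" "/home/emka/Desktop/Mount/$f"
        done

    Quoting "$f" keeps folder names with spaces intact.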

    Read the article

  • How to manually insert signature in Thunderbird? (impossible?)

    - by Rabarberski
    How can I manually insert a signature in Thunderbird when I am busy composing an email? I can't find the option/action in any of the menus (specifically not under Insert, as I would expect). (Note: I know how to configure Thunderbird to automatically insert a signature when creating a new mail in a certain account. But if you delete the signature (accidentally or not), how can you reinsert it? Or how can you insert it for an account which isn't configured to automatically insert a signature?)

    Read the article

  • Remove duplicated images with the shell [duplicate]

    - by nkint
    This question is an exact duplicate of: Find all duplicate files by md5 hash (1 answer). I have a folder with some images. Each of them has a different name, but some of them are duplicates. What is the best way to delete the duplicates? I have to do it systematically, so I need some shell command/script to invoke. There's no limitation on the software used, just nothing exotic. I'd like to do it both on a Mac and on Ubuntu systems.
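
    A hedged sketch of the md5 approach from the linked duplicate, using GNU tools (on a Mac this needs coreutils from Homebrew or MacPorts, since the stock md5 and uniq differ):

        # Hash every file, then print groups of files sharing a checksum,
        # separated by blank lines; review the groups before deleting anything.
        find . -type f -exec md5sum {} + | sort | uniq -w32 --all-repeated=separate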

    Read the article

  • tomcat processParameters complains about "invalid chunk ignored"

    - by cgicgi
    I am hosting a software system running under Tomcat for quite a number of customers. Some of them send invalid URLs as requests. These URLs may contain "&=" or "&&", which is not within the HTTP specs. Now my Tomcat complains with the following:

        08.09.2010 12:36:04 org.apache.tomcat.util.http.Parameters processParameters
        WARNING: Parameters: Invalid chunk '' ignored.

    It is not a problem as such, as it doesn't affect operation in any way. The only problem is that tomcat/logs/catalina.out grows with every single request. On the net you can find suggestions like:

    - Fix your URLs (which I can't, as it is the customers who send them)
    - Raise Tomcat's log level to ERROR (which I don't want to do, as it would suppress INFO messages like "INFO: Reloading context [/ContextName]" and other stuff you want to know)
    - Redirect the log to the application log (which won't solve the problem, as the messages will flood just another log)

    Does anyone know how to solve the problem at its ROOT, which means: tell Tomcat not to complain about invalid request parameters any longer?
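
    One targeted option is to silence only the logger that emits this warning, leaving the global level alone. A hedged sketch for conf/logging.properties (the logger name is taken from the warning itself):

        org.apache.tomcat.util.http.Parameters.level = SEVERE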

    Read the article

  • How do you set the default user in Linux for file creation?

    - by Not a Name
    I want to create a directory, for example /public/all, but I want it so that if you create a file in "all", the owner is root, yet anyone with access to the /public/all folder can delete/edit/etc. the file - just not change its permissions. (I will use a self-created "setx" application to change the execute bit if needed.) The reason for this: I don't want users to be able to deny other users write/read access to files in /public/all. I heard setuid on directories doesn't work for that.
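
    As far as I know, Linux has no way to force new files to be owned by root; ownership always follows the creating user. The access side can be approximated with a setgid directory plus default ACLs. A hedged sketch (the group name is a placeholder):

        groupadd publicusers
        mkdir -p /public/all
        chgrp publicusers /public/all
        chmod 2770 /public/all                       # setgid: new files inherit the group
        setfacl -d -m g:publicusers:rw /public/all   # default ACL: group members can edit

    Note that only a file's owner (or root) can chmod it, so the "no permission changes by others" part comes for free, and deletion rights are governed by the directory's permissions, not the file's.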

    Read the article

  • My images look desaturated and brighter on the iPhone than on my Mac (in PS, Pixelmator & Finder)

    - by david
    I save my image in Photoshop (no color management). I put it on the iPhone and it looks brighter and desaturated. Also, my blue-green looks more like blue. I have tried some color profiles from the net: http://luminous-landscape.com/forum/index.php?showtopic=38121 and http://www.colorwiki.com/wiki/Color_on_iPhone, but the images still look different. I'm going insane with this because it's stealing my time away and I can't fix it. Every bit of help would be appreciated! Do I have to create my own profile? Is there an easy way?
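
    This is usually a color-space mismatch: the iPhone assumes sRGB, so an image authored in a wider space (or saved with color management off) shifts when displayed. A hedged sketch of a quick test on the Mac, converting a copy to sRGB with the built-in sips tool (filenames are placeholders):

        sips --matchTo "/System/Library/ColorSync/Profiles/sRGB Profile.icc" photo.png --out photo-srgb.png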

    Read the article

  • Access logs show someone "GET"ing a random ip, why does this return 200?

    - by Wilduck
    I have a small Linux box set up with Apache as a way to teach myself Apache. I've set up port forwarding on my router so it's accessible from the outside world, and I've gotten a few strange requests for pages that don't exist from an IP address in China. Looking at my access_log shows that most of these return 404 errors, which I'm guessing is a good thing. However, there is one request that looks like this:

        58.218.204.110 - - [25/Dec/2010:19:05:25 -600] "GET http://173.201.161.57/ HTTP/1.1" 200 3895

    I'm curious what this request means... That IP address is unconnected to my server as far as I know, and visiting it simply tells me information about my uid. So, my questions are: how is it that this request is showing up in my access_log, why is it returning 200, and is this a bad thing (do I need to set up more security)?
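
    A request line containing a full URL is how scanners probe for open proxies: if the server relays the request, they have found a proxy to abuse. An Apache that is not configured as a proxy typically just serves its own default page for such a request, which would explain the 200. If mod_proxy is loaded, a hedged check worth making in the config:

        # Never allow forward-proxy requests; reverse-proxy directives still work.
        ProxyRequests Off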

    Read the article

  • How to check there are no html files in current directory?

    - by kev
    I have a script which downloads HTML files into the current directory, then generates a report based on these HTML files, and finally deletes them all. So, when I run this script, I want to make sure there are no HTML files in the current dir. This is what I've got:

        if ls *.html >/dev/null 2>&1; then
            echo 'clear HTML files first'
            exit
        fi

    Is there any easy way to check?
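
    The ls test works, but it spawns a process and can trip on odd filenames. A bash-specific sketch that checks the glob directly:

        shopt -s nullglob        # make *.html expand to nothing when there are no matches
        files=(*.html)
        if (( ${#files[@]} > 0 )); then
            echo 'clear HTML files first'
            exit 1
        fi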

    Read the article

  • ModRewrite Domain

    - by Mike Knoop
    I've done a little research into mod_rewrite rules and conditions but have not been able to find a satisfactory set of rules/conditions which achieves the effect I'm looking for. Essentially, I have a directory on domain A (http://www.domaina.com/dir/) which I would like to redirect to a different directory on domain B (http://www.domainb.com/diff_dir/). Note that I only want to apply the rewrite rule if the user is attempting to access /dir/ on domaina. If they are accessing a different directory, or the root, I do not want to rewrite the URL. Thank you!
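
    A hedged sketch, assuming the rules live in domain A's server config or an .htaccess at its document root (adjust the path and redirect status to taste):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?domaina\.com$ [NC]
        RewriteRule ^/?dir/(.*)$ http://www.domainb.com/diff_dir/$1 [R=301,L]

    Requests for the root or any other directory fail the pattern and pass through untouched.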

    Read the article

  • Is it possible to redirect/bounce TCP traffic to an external destination, based on rules?

    - by xfx
    I'm not even sure if this is possible... Also, please forgive my ignorance on the subject. What I'm looking for is "something" that would allow me to redirect all TCP traffic arriving at host A to host B, based on some rules. Say host A (the intermediary) receives a request (say a simple HTTP request) from a host with domain X. In that case, it lets it pass through, and it's handled by host A itself. Now, let's suppose that host A receives another HTTP request from a host with domain Y, but this time, due to some customizable rules, host A redirects all the traffic to host B, and host B is able to handle it as if it came directly from domain Y. And, at this point, both host B and the host with domain Y are able to freely communicate (of course, through host A). NOTE: All these hosts are on the Internet, not inside a LAN. Please let me know if the explanation is not clear enough.
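
    For HTTP this is exactly what a rule-based proxy such as HAProxy does: it can serve some clients locally and relay others' entire conversation to host B. A hedged fragment (the defaults/timeouts section is omitted; the address range standing in for "hosts from domain Y" is a placeholder, since for raw TCP the usable rules are limited to what is visible up front, such as source IP or TLS SNI):

        frontend inbound
            bind *:80
            acl from_y src 203.0.113.0/24      # placeholder for domain Y's hosts
            use_backend host_b if from_y
            default_backend local_site

        backend host_b
            server b hostb.example.com:80

        backend local_site
            server a 127.0.0.1:8080

    One caveat: host B will see host A as the client unless transparent proxying is also configured.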

    Read the article

  • French accents on a PC with US keyboard??

    - by frenchie
    My laptop has a US keyboard, and I need to write some French, with accents. I know there's a painful way to do it with Alt-key combinations and ASCII alt-codes, but I was wondering if there was an easier way to do it. PS: Since the question is closed (but the answers are not great) I thought I'd add this addendum. Basically, you need to set the keyboard to US International, and then you can do accents using 'e or 'a; see this link: http://support.microsoft.com/kb/97738 PS: A much, much better solution: http://keyxpat.com.

    Read the article

  • Is it a good idea to change a recovery partition from primary to logical? [HP laptop]

    - by DiegoDD
    I have a new HP laptop, model dv6-6c85la, with a 1TB hard drive, and it has 4 primary partitions, like this:

        |<- system [199 MB] -|<- c: [899.8 GB] -|<- d: (recovery) [27.5 GB] -|<- e: (hp_tools) [4 GB] -|

    I wanted to make another partition by splitting "C", the main partition, into TWO partitions and leaving the rest as it is, but it doesn't let me, because there are already 4 primary partitions (the ones in the diagram). I read somewhere that I could in fact split C into 2 partitions, but only if the adjacent partition (in this case d: (recovery)) is converted into a "logical" partition. That way, the new unallocated part taken from C and the recovery partition would each be logical, "inside" an extended partition (right???). As I understand it, the resulting partitions would be:

        primary (system, no letter), primary (c:), extended [ logical (x:) | logical (d: recovery) ], primary (e: hp_tools)

    with "x" being the new one. Am I correct? My question is: if I do convert the recovery partition to logical (so that it sits inside an extended partition adjacent to the new "x:" one), would I have any problems when, in case of a disaster, I want to restore the system using the now logical instead of primary RECOVERY partition? Or is it completely safe to change it to logical? My main concern is that it may need to be primary for the recovery to proceed at boot time - or am I completely wrong? How does the recovery process actually happen? I also understand that I can simply create recovery media on DVDs and then even delete that recovery partition completely, but as of now, I don't want to do that. I may create the disks, but I don't want to delete the partition, simply because it would be a lot faster and easier to recover from a hard drive than from disks. Wrapping up: if I change a recovery partition from primary to logical, will the system still be capable of using it to recover, or does it NEED to be primary to work? The whole point is that I want to split C:, but as things are, I can't do it directly; I'd need to change the recovery partition to logical. Or is there another way? Thanks.

    Read the article

  • Oracle (XE) 10 vs 11. Have I lost the SQL tuning pages? Am I going out of my mind?

    - by Richard Green
    OK... so perhaps the title needs calming down a bit, but basically I am after the XE 11g equivalent of the pages that you can see here: http://docs.oracle.com/cd/B25329_01/doc/admin.102/b25107/getstart.htm#BABHJAGE from which you can then navigate to things like "top 50 queries" and "longest running queries", etc. For the life of me, I can't find them in the most recent XE edition. Please can someone direct me to where I might find these very useful admin pages! Or was I imagining it all along :-/ Edit: These are the pages I am after: http://docs.oracle.com/cd/B25329_01/doc/admin.102/b25107/monitoring.htm

    Read the article
