Search Results

Search found 54055 results on 2163 pages for 'multiple files'.


  • Incremental deploy from a shell script

    - by WishCow
    I have a project where I'm forced to use FTP as the means of deploying files to the live server. I'm developing on Linux, so I hacked together a bash script that makes a backup of the FTP server's contents, deletes all the files on the FTP server, and uploads all the fresh files from the Mercurial repository (taking care of user-uploaded files and folders, making post-deploy changes, etc.). It's working well, but the project is getting big enough to make the deployment process too long. I'd like to modify the script to look up which files have changed and only deploy the modified files (the backup is fine as it is for now). I'm using Mercurial as the VCS, so my idea is to somehow request the changed files between two revisions from it, iterate over them, upload each modified file, and delete each removed file. I can use hg log -vr rev1:rev2 and carve the changed files out of the output with grep/sed/etc. Two problems: I have heard the horror stories that parsing the output of ls leads to insanity, so my guess is the same applies here; if I try to parse the output of hg log, the filenames will undergo word-splitting and all kinds of transformations. Also, hg log doesn't tell me whether a file was modified, added, or deleted, and differentiating between modified and deleted files is the least I need. So, what would be the correct way to do this? I'm using yafc as an FTP client, in case that matters, but I'm willing to switch.
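
    One way to sidestep parsing hg log is hg status with two --rev arguments, which prints a one-letter state (M/A/R) per changed file and has a NUL-terminated output mode that survives word-splitting. A minimal sketch, with the revision arguments and the actual yafc/FTP upload and delete commands left as placeholders:

      #!/usr/bin/env bash
      set -euo pipefail
      REV1="$1"
      REV2="$2"
      # -0 ends each "STATUS path" record with a NUL byte, so filenames with
      # spaces or newlines come through intact.
      hg status --rev "$REV1" --rev "$REV2" -0 |
      while IFS= read -r -d '' entry; do
          state=${entry:0:1}    # M = modified, A = added, R = removed
          file=${entry:2}
          case "$state" in
              M|A) echo "upload: $file" ;;   # replace with the real upload command
              R)   echo "delete: $file" ;;   # replace with the real delete command
          esac
      done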

    Read the article

  • How can I send GET data to multiple URLs at the same time using cURL?

    - by Rob
    My apologies, I've actually asked this question multiple times, but never quite understood the answers. Here is my current code: while($resultSet = mysql_fetch_array($SQL)){ $ch = curl_init($resultSet['url'] . $fullcurl); //load the urls and send GET data curl_setopt($ch, CURLOPT_TIMEOUT, 2); //Only load it for two seconds (Long enough to send the data) curl_exec($ch); //Execute the cURL curl_close($ch); //Close it off } //end while loop What I'm doing here is taking URLs from a MySQL database ($resultSet['url']), appending some extra GET variables ($fullcurl), and simply requesting the pages. This starts the script running on those pages, and that's all this script needs to do: start those scripts. It doesn't need to return any output, just load each page long enough for the script to start. However, it currently loads each URL (11 at the moment) one at a time; I need to load all of them simultaneously. I understand I need to use curl_multi_*, but I haven't the slightest idea how the cURL functions work, so I don't know how to change my code to use curl_multi_* in a while loop. So my questions are: How can I change this code to load all of the URLs simultaneously? Please explain it rather than just giving me code; I want to know what each individual function does exactly. Will curl_multi_exec even work in a while loop, since the while loop is just sending each row one at a time? And of course, any references, guides, or tutorials about the cURL functions would be nice as well, preferably not from php.net, as while it does a good job of giving me the syntax, it's just a little dry and not so good with the explanations.
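
    A minimal sketch of the curl_multi_* pattern, reusing the $resultSet['url'] and $fullcurl pieces from the question: every handle is added to one multi handle first, and curl_multi_exec then drives them all concurrently instead of one curl_exec per row.

      <?php
      $mh = curl_multi_init();
      $handles = array();

      while ($resultSet = mysql_fetch_array($SQL)) {
          $ch = curl_init($resultSet['url'] . $fullcurl); // same URL + GET data as before
          curl_setopt($ch, CURLOPT_TIMEOUT, 2);
          curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // keep the response from being echoed
          curl_multi_add_handle($mh, $ch);                // queue the request; nothing runs yet
          $handles[] = $ch;
      }

      // curl_multi_exec does a slice of work on every queued handle per call,
      // so it is looped until no handle reports itself as still running.
      $running = 0;
      do {
          curl_multi_exec($mh, $running);
          curl_multi_select($mh);  // wait for network activity instead of spinning
      } while ($running > 0);

      foreach ($handles as $ch) {
          curl_multi_remove_handle($mh, $ch);
          curl_close($ch);
      }
      curl_multi_close($mh);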

    Read the article

  • Why won't my files push to my SFTP server?

    - by Matthew
    I'm having trouble pushing my branch to an SFTP server. I'm following the instructions here. When I push the branch, everything seems to complete successfully. I get the message "Created new branch.", and if I do "bzr push" again, it says "No new revisions to push." But when I ssh to the SFTP server to look at the directory I put my branch in, only the .bzr directory is there. None of my files are there. Does anyone have any idea why this might be?

    Read the article

  • Is it ok to share private key file between multiple computers/services?

    - by Behrang
    So we all know how to use public key/private keys using SSH, etc. But what's the best way to use/reuse them? Should I keep them in a safe place forever? I mean, I needed a pair of keys for accessing GitHub. I created a pair from scratch and used that for some time to access GitHub. Then I formatted my HDD and lost that pair. Big deal, I created a new pair and configured GitHub to use my new pair. Or is it something that I don't want to lose? I also needed a pair of public key/private keys to access our company systems. Our admin asked me for my public key and I generated a new pair and gave it to him. Is it generally better to create a new pair for access to different systems or is it better to have one pair and reuse it to access different systems? Similarly, is it better to create two different pairs and use one to access our companies systems from home and the other one to access the systems from work, or is it better to just have one pair and use it from both places?

    Read the article

  • Setting up Virtual Hosts with Apache on Windows 2008 server for multiple sites. Complicated setup, including subversion

    - by Roeland
    I am setting up Apache on my Windows 2008 server at home. It will serve two functions: Subversion hosting, to allow me and some others to manage company documents with version control, and local website hosting for web development (it will need to run several websites, since I generally work on more than one site at a time). Here's what I have done so far. I set up Subversion and Apache 2.2 using some walkthroughs, and changed the default port to 1337 (I'm a nerd). Using dyndns.com I created a domain that forwards to my home IP, which is dynamic (company.gotdns.org). I then went into the DNS for my company.com and added a record to point repo.company.com to company.gotdns.org. At this point people who need access to my file repository can get it by going to repo.company.com/repo, which is good so far. My question comes at the next step: setting up virtual hosts with Apache. Ideally I would like my local websites to be viewable by some others in the company from their homes. So, say I am working on site1; I would like them to be able to view it by going to site1.roeland.bythepixel.com. At the same time, I would like site10.wouter.bythepixel.com to go to his local setup for site10. What I have done for this: I went into the DNS for company.com and added a record to point roeland.company.com to company.gotdns.org (which translates to my IP), added code to my httpd-vhosts.conf (listed at the bottom), and added code to my hosts file (listed at the bottom). Of course this doesn't work as expected: going to site1.roeland.bythepixel.com doesn't bring up my test1 site. Could anyone point out where I may be going wrong? Thanks! hosts: 127.0.0.1 localhost 127.0.0.1 sensenich.roeland.bythepixel.com ::1 localhost httpd-vhosts.conf: <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot "F:/Current Projects/sensenich.com" ServerName sensenich.roeland.bythepixel.com ErrorLog "logs/sensenich.roeland.bythepixel.com-error.log" CustomLog "logs/sensenich.roeland.bythepixel.com-access.log" common </VirtualHost>
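
    For what it's worth, two things usually bite in this kind of setup: the hosts file only affects the machine it lives on (co-workers resolve site1.roeland.bythepixel.com through public DNS, so each site name, or a wildcard *.roeland.bythepixel.com, needs a record pointing at company.gotdns.org), and name-based hosting on Apache 2.2 needs a NameVirtualHost directive plus one <VirtualHost> whose ServerName matches each hostname. A sketch with example hostnames and paths, not a drop-in config:

      NameVirtualHost *:80

      <VirtualHost *:80>
          ServerName sensenich.roeland.bythepixel.com
          DocumentRoot "F:/Current Projects/sensenich.com"
      </VirtualHost>

      <VirtualHost *:80>
          ServerName site1.roeland.bythepixel.com
          DocumentRoot "F:/Current Projects/site1"
      </VirtualHost>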

    Read the article

  • How do I prevent capistrano from overwriting files uploaded by users in their own folders?

    - by Hrishi Mittal
    I'm using Capistrano and git to deploy a RoR app. I have a folder under which each user has their own folder. When a user uploads or saves a file, it is saved in their own folder. When I deploy new versions of the code to the server, the user files and folders are overwritten with what's on my dev machine. Is there a way to ignore some folders in capistrano, like we do in git? This post - http://www.ruby-forum.com/topic/97539 - suggests using symlinks and storing the user files in a shared folder. But it's an old post, so I'm wondering if there is a better way to do it now. Also, does anyone know of any good screencasts/tutorials to recommend for using RoR+git+capistrano? Thanks.
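
    The symlink approach from that post is still the usual pattern: keep user uploads under Capistrano's shared directory and re-link it on every deploy, so new releases never overwrite it. A rough Capistrano 2 sketch for config/deploy.rb, with "uploads" standing in for the parent folder of the per-user folders:

      # runs after the new release's code is in place, before it goes live
      after "deploy:update_code", "deploy:link_uploads"

      namespace :deploy do
        task :link_uploads do
          run "mkdir -p #{shared_path}/uploads"
          run "ln -nfs #{shared_path}/uploads #{release_path}/public/uploads"
        end
      end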

    Read the article

  • How to retrieve files/documents that are not found on the web server machine.

    - by jhorton
    I am trying to create an export feature so a user can download documents as a zip file. I have the feature working when the files are located on my local machine and I can use an absolute local path. But after talking to the infrastructure team, I found out that the documents are not stored on the same machine as the web server; they are on a server farm located off site. I can query the database, which gives me a file path, but it is more of a relative path. So can anyone help me understand how to use FileInfo to get files from another machine? I believe the infrastructure team said there is a virtual drive set up to the outside server. Am I able to use a virtual path somehow? Thanks.
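
    If that virtual drive is exposed as a UNC share (or a mapped drive the web server's identity can see), FileInfo and the rest of System.IO accept such paths directly; the usual trick is to keep the share root in configuration and combine it with the relative path from the database. A small sketch with a made-up share name and relative path:

      using System;
      using System.IO;

      class ExportCheck
      {
          static void Main()
          {
              // Hypothetical UNC root for the off-site store and a relative
              // path of the kind the database query returns.
              string shareRoot = @"\\fileserver\documents";
              string relativePath = @"2010\04\contract.pdf";

              string fullPath = Path.Combine(shareRoot, relativePath);
              var info = new FileInfo(fullPath);

              Console.WriteLine(info.Exists
                  ? "Found " + info.Name + ", " + info.Length + " bytes"
                  : "Not found: " + fullPath);
          }
      }

    Note that the web application's identity (e.g. the application pool account) also needs read permission on that share.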

    Read the article

  • How to include header files in Visual Studio 2008?

    - by Sergio
    I am currently trying to compile a simple program that includes two header files. I see them in the Solution Explorer, where I included them through "include existing files". However, when I run my program I get the following error: fatal error C1083: Cannot open include file: 'FileWrite.h': No such file or directory. The problem is that I can see the file in the Header Files folder, and in the code I have written #include "FileWrite.h" followed by the rest of the program. Is there something else I need to do so that the compiler can find the header file and use it with the .cpp file I'm trying to compile?

    Read the article

  • How do I run multiple commands on one line in Powershell?

    - by David
    In cmd prompt, you can run two commands on one line like so: ipconfig /release & ipconfig /renew When I run this command in PowerShell, I get: Ampersand not allowed. The & operator is reserved for future use Does PowerShell have an operator that allows me to quickly produce the equivalent of & in cmd prompt? Any method of running two commands in one line will do. I know that I can make a script, but I'm looking for something a little more off the cuff.
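
    For reference, PowerShell's statement separator is the semicolon, so the cmd one-liner translates directly:

      # run both commands on one line, left to right
      ipconfig /release; ipconfig /renew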

    Read the article

  • How do I let a non-root user run chown on any user's or group's files?

    - by user1877716
    I would like to make a user very powerful, with almost all root rights, but unable to touch the root user itself (e.g. change root's password). My goal is for user "B" to manage my web server. The problem is that user B needs to be able to run the chown and chmod commands on some files belonging to other users. I tried putting B in the root group and using visudo, but it's not enough. I'm working on a CentOS 6 system. If somebody has ideas!
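
    One way to scope this, sketched for /etc/sudoers (edit it with visudo): grant B exactly those two commands and nothing else, then have B run them through sudo. The paths match where chown and chmod live on CentOS 6.

      # allow user B to run only chown and chmod, as root, on this host
      B ALL=(root) NOPASSWD: /bin/chown, /bin/chmod

    B would then run e.g. sudo chown someuser:somegroup /var/www/html/file.php. Bear in mind that chmod on system files still effectively lets B take over the box, so this limits accidents more than malice.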

    Read the article

  • Using C#, how to convert ISO-8859-1 encoded text files that contain Latin-1 accented characters to UTF-8

    - by Tim
    I am being sent text files saved in ISO-8859-1 format that contain accented characters from the Latin-1 range (as well as normal ASCII a-z, etc.). How do I convert these files to UTF-8 using C# so that the single-byte accented characters in ISO-8859-1 become valid UTF-8 characters? I have tried using a StreamReader with ASCIIEncoding and then converting the ASCII string to UTF-8 by instantiating an ASCII encoding and a UTF-8 encoding and then using Encoding.Convert(ascii, utf8, ascii.GetBytes(asciiString)), but the accented characters are being rendered as question marks. What step am I missing? Thanks
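
    The question marks come from ASCIIEncoding itself: it is 7-bit, so every byte above 0x7F is already turned into '?' at read time, and no later conversion can bring the accents back. Reading with the Latin-1 encoding and writing with UTF-8 avoids the lossy step; a minimal sketch (file paths are placeholders):

      using System.IO;
      using System.Text;

      class Latin1ToUtf8
      {
          static void Main()
          {
              // Decode the bytes as ISO-8859-1 so 0xE9 becomes 'é', etc.
              Encoding latin1 = Encoding.GetEncoding("iso-8859-1");
              string text = File.ReadAllText(@"C:\in\input.txt", latin1);

              // Re-encode the same characters as UTF-8 on the way out.
              File.WriteAllText(@"C:\out\output.txt", text, Encoding.UTF8);
          }
      }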

    Read the article

  • Zip files way larger on a Mac using Finder than the 'zip' command.. 2x larger.

    - by user33947
    I have a directory of JPEGs. Each one is roughly 90k, as reported by Photoshop when saving and also by the command-line tool 'ls'. When I get the properties for a file with Finder, it's double that, over 220k. Zipping with Finder packages this extra bulk as well, while the "zip -v test.zip ./dir" command makes a MUCH smaller zip file. Zipping the files on Windows also results in a much smaller file, roughly the same size as the Unix zip command produces, and file sizes are reported correctly on Windows too. I can't find any mention of this anywhere, so I'm asking here.
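
    The usual suspects here are resource forks and other extended attributes, which Finder counts toward the size and its Compress command preserves (as ._ AppleDouble entries and a __MACOSX folder inside the archive), while the command-line zip skips them. Two quick ways to check whether the JPEGs carry them, assuming a stock macOS install (the filename is an example):

      # the @ suffix flags files that have extended attributes and lists them
      ls -l@ photo.jpg

      # prints the attribute names (e.g. com.apple.ResourceFork) for one file
      xattr photo.jpg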

    Read the article

  • How do you synchronise huge sparse files (VM disk images) between machines?

    - by chrisdew
    Is there a command, such as rsync, which can synchronise huge, sparse files from one Linux server to another? It is very important that the destination file remains sparse; it may be longer (but not bigger) than the drive which contains it. Only changed blocks should be sent across the wire. I have tried rsync, but got no joy: groups.google.com/group/mailing.unix.rsync/browse_thread/thread/94f39271980513d3 If I write a programme to do this, am I just reinventing the wheel? http://www.finalcog.com/synchronise-block-devices Thanks, Chris.
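
    For reference, the two rsync options that usually come up for this are --sparse (write runs of zeros as holes at the destination) and --inplace (update the existing destination file so only differing blocks are transferred). A sketch with example paths; note that older rsync releases refuse to combine the two flags, so it tends to be one or the other per run:

      # first copy: keep the image sparse at the destination
      rsync -av --sparse /var/lib/vm/disk.img user@backup:/var/lib/vm/

      # subsequent runs: send only the changed blocks into the existing file
      rsync -av --inplace /var/lib/vm/disk.img user@backup:/var/lib/vm/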

    Read the article

  • Use a media player in Linux just to play files from an iPod device (no sync, no manage, just play)?

    - by Somebody still uses you MS-DOS
    I have an iPod Classic 160 GB that I sync with my machine at home. I use Linux at work and want to just plug in my iPod and listen to the tracks, with all the playlists and such. I don't want to sync anything; I just want to listen to the tracks as if I were using the iPod itself. Why? Because this way I can use the USB port. So, I don't want to manage my iPod in Linux, I just want to listen to the tracks on it in Linux, as if it were a local library that happens to live on my iPod. (I've tried gtkpod; it shows my files, but I can't play, shuffle, etc. It would be nice to have a complete audio player that handles everything as if it were a local library.)

    Read the article

  • How can I tell what files are currently open by a process (i.e. my app)?

    - by chaiguy
    I am using a Lucene.Net index and want to give the user an option to move the index, but am having trouble closing it down so the directory/contents can be moved (I keep getting access denied exceptions). I need to be able to have some more information so I can debug this problem, such as being able to tell what files are currently open, and as much information about each use as possible. Alternatively, is there any way to simply force close a bunch of files so they can be moved? This would make things a lot easier to solve.
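
    For the "what is holding this open" side, the Sysinternals tools are the usual answer on Windows: Process Explorer's Find Handle search, or the command-line handle.exe filtered to the process (the process name below is a placeholder):

      :: lists file handles held by the named process (or a PID)
      handle.exe -p MyApp.exe

    On the Lucene.Net side, access-denied errors on the index directory typically mean a reader, writer, or searcher over that index has not been closed before the move.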

    Read the article

  • Unable to send multiple AJAX requests in a loop?

    - by Harish Kurup
    I am sending multiple AJAX requests through a loop, but only some of the requests are sent successfully, not all. My code goes here: for(var i=0; i<dataArray.length; i++) { var request=getHttpRequest(); request.open('post','update.php',false); request.setRequestHeader("Content-Type","application/x-www-form-urlencoded"); request.send("data="+dataArray[i]); if(request.readyState == 4) { alert("updated the data="+dataArray[i]); } } function getHttpRequest() { var request=false; if(window.XMLHttpRequest) { request=new XMLHttpRequest(); } else if(window.ActiveXObject) { try { request=new ActiveXObject("Msxml2.XMLHTTP"); } catch(e) { try { request=new ActiveXObject("Microsoft.XMLHTTP"); } catch(e) { request=false; } } } return request; } In the above code, some data gets posted but some doesn't; those requests never reach readyState == 4. For example, if I have dataArray = ['1','2','3','4'], it updates only 1, 2, and 4 and skips 3, or some other value in between. Is there any solution? Please help.
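
    A sketch of the usual fix: make the requests asynchronous and report success from a per-request onreadystatechange callback, instead of checking readyState immediately after send(). It reuses the getHttpRequest() helper from the question; the closure keeps each callback pointing at its own value of dataArray[i].

      for (var i = 0; i < dataArray.length; i++) {
          (function (value) {
              var request = getHttpRequest();
              request.open('post', 'update.php', true);   // true = asynchronous
              request.setRequestHeader("Content-Type",
                  "application/x-www-form-urlencoded");
              request.onreadystatechange = function () {
                  if (request.readyState == 4 && request.status == 200) {
                      alert("updated the data=" + value);
                  }
              };
              request.send("data=" + encodeURIComponent(value));
          })(dataArray[i]);
      }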

    Read the article

  • How to override browser default download behavior for files?

    - by moha297
    Lots of times we have to download files from the net. In IE we see the ugly download progress bar; in Firefox we see a pop-up window opening, etc. I had never seen this behaviour overridden in any manner until recently, on the site *thesixtyone DOT com*. If we get to download a song for free and click the OK link to start the download, we get a pop-up to select the location in the default Windows style, and then we see the progress bar as shown in the image below. Any ideas on this? I am trying to see how these guys did it. You can see the image at http://highwaves.files.wordpress.com/2010/04/61-download-bar.jpg

    Read the article

  • What router hardware or software should be used when multiple public IPs are routed into the same LAN?

    - by lcbrevard
    I am looking for recommendations to replace a set of consumer-grade (Linksys, Netgear, Belkin) routers with something that can handle more traffic while routing more than one static public IP into the same LAN address space. We have a block of static public IPs with Comcast Business, 5 of them usable. Currently four are in use: one for general office access, one for a web server, one for mail and DNS servers, and one for a download and backup web server for a separate business. All systems (a mixture of physical and virtual) are in the same LAN address space (10.x.y.0/24) to enable easy access between them inside the office. There are 30 or more systems in use, depending on which virtual machines are currently active, in a mixture of Windows, Linux, FreeBSD, and Solaris. Currently a separate consumer-grade router is used for each of the four static addresses, with its WAN address set to the specific static address and a different gateway address for each: router 1 uses 10.x.y.1, with various ports forwarded to various LAN IPs on systems whose gateway is 10.x.y.1; router 2 uses 10.x.y.254, with port 80 forwarded to a server with gateway 10.x.y.254; router 3 uses 10.x.y.253, with the mail and DNS ports forwarded to a server with gateway 10.x.y.253; and router 4 uses 10.x.y.252, with ports forwarded as needed to a server with gateway 10.x.y.252. Only router 1 is allowed to serve DHCP, and address reservation based on MAC is used for most of the internal "server" IP addresses so they stay at fixed values (some are set static due to limitations in router 1's address reservation capabilities). And, yes, this really does work! But I am looking for better DHCP with more capable address reservation, and higher capacity so I don't have to periodically power cycle the routers. One obvious improvement would be to have a real DHCP server and not use a consumer-grade router for that purpose. I am torn between buying a "professional" router such as a Cisco, Juniper, or SonicWall versus learning to configure some spare hardware to perform this function. The price goes up extremely rapidly with capabilities for commercial routers! Worse, some routers require licensing based on the number of clients, which would be a disaster in our environment with so many virtual machines. Sorry for such a long posting, but I am getting tired of having to power cycle routers and deal with shifting IP addresses afterwards!

    Read the article

  • What is a "good" tool to password-protect .pdf files?

    - by Marius Hofert
    What is a "good" tool to encrypt (password protect) .pdf files? (without being required to buy additional software; the protection can be created under linux but the password query should work on Windows, too) I know that zip can do it: zip zipfile_name_without_ending -e files_to_encrypt.foo What I don't like about this is that for a single file, you have to use Winzip to open the zip and then click the file again. I rather would like to be prompted for a password when opening the .pdf (single file case). I know that pdftk can do this: pdftk foo.pdf output foo_protected.pdf user_pw mypassword. The problem here is that the password is displayed in the terminal -- even if you use ... user_pw PROMPT. But in the end you get a password-protected .pdf and you are prompted for the password when opening the file.

    Read the article

  • Can I make an identity field span multiple tables in SQL Server?

    - by johnnycakes
    Can I have an "identity" (unique, non-repeating) column span multiple tables? For example, let's say I have two tables: Books and Authors. Authors AuthorID AuthorName Books BookID BookTitle The BookID column and the AuthorID column are identity columns. I want the identity part to span both columns. So, if there is an AuthorID with a value of 123, then there cannot be a BookID with a value of 123. And vice versa. I hope that makes sense. Is this possible? Thanks. Why do I want to do this? I am writing an APS.NET MVC app. I am creating a comment section. Authors can have comments. Books can have comments. I want to be able to pass an entity ID (a book ID or an author ID) to an action and have the action pull up all the corresponding comments. The action won't care if it's a book or an author or whatever. Sound reasonable?

    Read the article

  • How do I capture and playback http web requests against multiple web servers?

    - by KevM
    My overall goal is to not interrupt a production system while capturing HTTP Posts to a web application so that I can reverse engineer the telemetry coming from a closed application. I have control over the transmitter of the HTTP Posts but not the receiving web application. It seems like I need a request "forking" proxy. Sort of a reverse proxy that pushes the request to 2 endpoints, a master and slave, only relaying the response from the master endpoint back to the requester. I am not a server geek so something like this may exist but I don't know the term of art for what I am looking for. Another possibility could be a simple logging proxy. Capture a log of the web requests. Rewrite the log to target my "slave" web application. Playback the log with curl or something. Thank you for your assistance.
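
    As a stopgap on the replay side, if the transmitter's POST bodies can be logged (one URL-encoded body per line, a format assumed here just for illustration), curl can re-send them against the second web application without touching the production receiver:

      #!/usr/bin/env bash
      # replay each captured POST body against the "slave" endpoint
      TARGET="http://slave.example.com/telemetry"   # placeholder URL

      while IFS= read -r body; do
          curl -s -o /dev/null \
               -H "Content-Type: application/x-www-form-urlencoded" \
               --data "$body" \
               "$TARGET"
      done < captured_posts.log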

    Read the article
