Search Results

Search found 2950 results on 118 pages for 'andrew martin'.


  • CPU sometimes hangs at 50% maximum on Windows 8

    - by Martin.
    Recently I installed Windows 8 on my HP ProBook 6450b, and it appears I've got problems when using more than one program at once. Sometimes, when I run multiple programs at once, the CPU hangs at 49% and never goes up. I suspect it has something to do with CPU throttling, because sometimes it does go over 49%. I've got my notebook connected to the charger, and my power plan's cooling policy is set to "Active cooling". The weird thing is that this happens only occasionally. I found nothing in my BIOS that could change this kind of thing.

    Read the article

  • batch file to deploy files

    - by Martin Michalak
    Hi, I have created a batch file which pulls info from a *.txt file and deploys code from the source to the destination:

        SET Source=%1
        if exist %Source% (
            ECHO Source for WEB exists
        ) else (
            ECHO Wrong build %Source% doesn't exist
            GOTO Menu
        )
        SET Server=%2
        SET AppPool=%3
        SET Destination=%4
        SET Folder=%5
        SET ENV=%6
        SET AppName=%7
        SET Envlog=%8

        ECHO Deployment of WEB > %Envlog% %Date% %Time%
        echo.
        @ECHO Stopping App Pools
        @ECHO Stopping App Pools >> %Envlog% %Date% %Time%
        D:\ICTTools\PSEXEC.EXE -d \\%Server% cmd.exe /c c:\windows\system32\inetsrv\appcmd STOP apppool /apppool.name:%AppPool%
        echo.
        @ECHO App Pools will be stopped in the background
        @ECHO App Pools will be stopped in the background >> %Envlog% %Date% %Time%
        Pause
        echo.
        IF EXIST "%Destination%" (
            ECHO Deleting %AppName% %Folder%
            RMDIR %Destination% /s /q
            ECHO Destination Folder %Folder% Deleted
            ECHO Destination Folder %Folder% Deleted >> %Envlog% %Date% %Time%
        ) else (
            ECHO Destination Folder %Destination% does not exist, please check
            ECHO Destination Folder %Destination% does not exist, please check >> %Envlog% %Date% %Time%
            Pause
        )
        echo.
        @ECHO Starting Robocopy for %AppName%
        @ECHO Starting Robocopy for %AppName% >> %Envlog% %Date% %Time%
        echo.
        START /WAIT /MIN ROBOCOPY.EXE %Source% %Destination% *.* /S /NP /R:3 /W:5 /LOG:"Logs\Robo%AppName%%ENV%.log"
        D:\Tools\Windiff\windiff.exe %Source% %Destination%
        echo.
        @ECHO Finished with Robocopy
        @ECHO Finished with Robocopy >> %Envlog% %Date% %Time%
        echo.
        @ECHO Checking if App pools stopped:
        @ECHO Checking if App pools stopped: >> %Envlog% %Date% %Time%
        D:\ICTTools\PSEXEC.EXE \\%Server% c:\windows\system32\inetsrv\appcmd LIST apppool /apppool.name:%AppPool%
        @echo off
        set /p ask=All app pools stopped? (y/n)
        if %ask%==y (echo Great, please continue with deployment) else echo Before continuing please check why app pools did not stop
        @echo App pools stopped?: %ask% >> %Envlog% %Date% %Time%
        DEL %Source%\web.config
        echo.
        @ECHO Production Config check
        if exist "%Destination%\%ENV%-Web.config" (
            echo.
            ECHO The Application production configuration file does exist.
            ECHO The Application production configuration file does exist. >> %Envlog% %Date% %Time%
            COPY %Destination%\%ENV%-Web.config web.config
            echo.
            ECHO Production %ENV%-Web.config has been renamed to web.config
            ECHO Production %ENV%-Web.config has been renamed to web.config >> %Envlog% %Date% %Time%
        ) else (
            ECHO The Application production configuration file is missing in Production %AppName%
            ECHO The Application production configuration file is missing in Production %AppName% >> %Envlog% %Date% %Time%
            explorer %Destination%
            Pause
        )
        echo.
        @ECHO Confirm that configs were renamed correctly, if yes please hit any key to START APP Pools
        @ECHO Confirm that configs were renamed correctly, if yes please hit any key to START APP Pools >> %Envlog% %Date% %Time%
        Pause
        echo.
        @ECHO Start %AppName% Application Pool >> %Envlog% %Date% %Time%
        D:\ICTTools\PSEXEC.EXE \\%Server% c:\windows\system32\inetsrv\appcmd START apppool /apppool.name:%AppPool%
        @echo off
        set /p ask=All app pools started? (y/n)
        if %ask%==y (echo Great, please continue with deployment) else echo Before continuing please check why app pools did not start
        @echo App pools started?: %ask% >> %Envlog% %Date% %Time%
        Pause
        echo.
        @ECHO Build Version for %AppName%
        @ECHO Build Version for %AppName% >> %Envlog% %Date% %Time%
        type %Destination%\buildinfo.xml
        echo.
        ECHO ...............................................
        @ECHO ...........Deployment Completed................
        @ECHO ...........Deployment Completed................ >> %Envlog% %Date% %Time%
        ECHO ...............................................

    Here are my issues. Let's say I am running the script for three servers; then, for each instance:

    1. For all three servers I am deleting the destination folder, even though the destination folder is always the same; the script should only delete it in the first instance (when code is deployed to the first server). Even better, I would prefer the script to check whether the code in the source and the destination is the same, and delete the folder only when it differs.

    2. Based on 1: a) deleting web.config and renaming should only happen if the code in the destination is new; b) Robocopy should not overwrite files if they are the same (I think there is an /XO option to do that).

    Any idea how to achieve that? :)
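
    A sketch of both ideas, assuming buildinfo.xml (already used at the end of the script above) uniquely identifies a build; the comparison step is otherwise a placeholder to adapt:

        REM Skip the delete when the destination already holds this build.
        FC /B "%Source%\buildinfo.xml" "%Destination%\buildinfo.xml" >NUL 2>&1
        IF %ERRORLEVEL%==0 (
            ECHO Destination already holds this build - skipping delete
        ) ELSE (
            IF EXIST "%Destination%" RMDIR "%Destination%" /s /q
        )
        REM /XO (eXclude Older) makes Robocopy leave a file alone when the
        REM destination copy is not older than the source copy.
        ROBOCOPY.EXE %Source% %Destination% *.* /S /XO /NP /R:3 /W:5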

    Read the article

  • How to extract jpegs from a video file using ffmpeg

    - by Andrew Simpson
    I am using C# and ffmpeg. In this scenario I have 279 individual JPEGs, and I have used ffmpeg to create an AVI file from these images on my client.

        CMD line: -f image2 -r 10 -i "C:\000EC902F17F\img%05d.jpg" -s 352x288 -y "C:\1\test.avi"

    I then upload to my server and extract JPEGs from the AVI file.

        CMD line: -i c:\1\1.avi c:\1\img-%05d.jpg

    I get 265 JPEGs back, so most probably ffmpeg is dropping frames when the AVI is first created. Is there a way to 'force' it to encode using ALL the images I have? Thanks. PS: I did not specify any command-line options other than the size of the video output; as far as I am aware, if none are specified then ffmpeg automatically chooses the best ones?
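
    A sketch of one way to keep the frame count stable, assuming the drops come from rate conversion rather than corrupt frames: declare the same frame rate on both sides, so ffmpeg neither duplicates nor discards frames.

        REM Encode: -r placed before -i sets the rate of the image sequence itself.
        ffmpeg -f image2 -r 10 -i "C:\000EC902F17F\img%05d.jpg" -s 352x288 -y "C:\1\test.avi"
        REM Extract: request the same rate on output, one JPEG per source frame.
        ffmpeg -i "c:\1\test.avi" -r 10 "c:\1\img-%05d.jpg"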

    Read the article

  • Best technique for reusing a Windows system image across configurations

    - by Martin Wiboe
    We are a small company that provides solutions for ventilation systems. Part of the solution is a "controller", which communicates with the ventilation equipment. These controllers are simply Dell computers that come with our Windows 7 system image on them, and sometimes some special hardware. We typically do a batch of 10 controllers at a time. We have been using Norton Ghost to apply the system image, but this process breaks because Dell changes the system configuration often, and our Windows image then does not contain the correct drivers. This is especially a problem when they change the RAID controller. To improve this, I see 2 options:

    1. use some kind of virtualization and install a hypervisor on each PC. This would solve the driver problem, but probably cause trouble with our special hardware.
    2. use some method of adding the proper drivers to our Windows image in offline mode.

    I haven't got much experience with either of these approaches. How would you solve our problem?
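
    For option 2, a sketch of offline driver injection with DISM (shipped with the Windows 7 AIK). This assumes the image is kept as a WIM (e.g. captured with ImageX) rather than a Ghost image; the image and driver paths are placeholders:

        rem Mount the image, inject every driver found under D:\drivers, commit.
        dism /Mount-Wim /WimFile:C:\images\controller.wim /Index:1 /MountDir:C:\mount
        dism /Image:C:\mount /Add-Driver /Driver:D:\drivers /Recurse
        dism /Unmount-Wim /MountDir:C:\mount /Commit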

    Read the article

  • Avoiding spam filters on my CentOS 5.5 64bit server?

    - by Andrew Fashion
    I run a social network on my web server, with about 15,000 members right now. My administration section lets me mass-email all my users. Currently it uses the built-in PHP mail function. What is the best way to configure my server so my mail doesn't get caught by spam filters? Can I install anything on the server? Or should I just make the social network use SMTP? The admin panel lets me choose SMTP or the built-in mail function. I'm not too familiar with mailing from servers, as I usually use Aweber for my mailing, but I cannot use Aweber for this as they will not let me just import 15,000 emails. Let me know, thanks.
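
    Whichever transport you pick, deliverability mostly hinges on DNS-level authentication (SPF, DKIM) and matching reverse DNS. A sketch of an SPF TXT record authorising the server to send for the domain; the domain and IP are placeholders:

        ; zone file entry for example.com
        example.com.  IN  TXT  "v=spf1 ip4:203.0.113.10 a mx ~all"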

    Read the article

  • How can I store logs and meet compliance requirements for free?

    - by Martin
    I am trying to keep long-term logs of an app in such a way that it could plausibly be demonstrated to third parties/court that the application has processed certain data at a given time. The data can be represented in XML or text format. A simple gzipped log is not plausible, as I may have added or modified data afterwards, whereas an external logging service would be overkill. Cost is an issue; we are not dealing with financial data or the like, but rather some simple user-generated content, where some malicious users tried to blame the operator in the past when things escalated and went to court. My question: Is there some kind of signing software for Linux that signs each element of a log in such a way that it can easily be shown that no element was added or modified afterwards? Plug-ins for some free Splunk alternatives would be fine too. Ideally the software I am looking for should be under the GPL or a similar license. I could probably achieve something like this by using PGP/GPG signing functions and including the previous elements' signatures within the following element, but I would prefer to use a program where you do not have to argue about the validity of your own code. Note to mods: I am not asking this question on Stack Overflow, because I am not looking to write my own code, for the reasons described above. I think this question fits Server Fault better than Super User, as server-side logging software is discussed here rather than on Super User.
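
    For clarity only (the question explicitly asks for existing software rather than home-grown code), a minimal sketch of the chaining scheme described above: each line embeds a digest of the previous line, so any later edit or deletion breaks every digest that follows. Paths are placeholders.

        #!/bin/sh
        # append_entry.sh - chain each log entry to its predecessor
        LOG=/var/log/app/chained.log
        ENTRY="$1"
        # digest of the previous line (empty for the first entry)
        PREV=$(tail -n 1 "$LOG" 2>/dev/null | sha256sum | cut -d' ' -f1)
        printf '%s %s\n' "$PREV" "$ENTRY" >> "$LOG"
        # periodically GPG-sign the file so the chain's head is anchored too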

    Read the article

  • Ethernet 802.1x client -> WiFi AP on a Raspberry Pi?

    - by Martin Janiczek
    I have an Ethernet connection that requires 802.1x authentication (TTLS, MSCHAPv2, name+password). My goal is to connect that to something that would then act as a WiFi AP, so I can use the connection on more devices (iPhone, notebook, etc.). Would it be possible/a good idea to use a Raspberry Pi for this purpose? Or are there better-suited devices to do this? EDIT: found some alternatives, but because of low rep I can't post more than two links...

    - OpenWRT + wpa_supplicant guide
    - Carambola - works with OpenWRT (but probably not standalone?)
    - Hornet-UB - works with OpenWRT
    - Asus RT-N10+ + OpenWRT how-to

    EDIT 2: probably going to try the TP-LINK TL-WR740N. It's a classic router, but it can be flashed with OpenWRT, and the price beats everything else I've seen.
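
    For the wired 802.1x leg on a Linux box such as the Pi, a sketch of a wpa_supplicant configuration for TTLS/MSCHAPv2 (identity and password are placeholders), started with the wired driver:

        # /etc/wpa_supplicant/wired.conf
        ctrl_interface=/var/run/wpa_supplicant
        ap_scan=0
        network={
            key_mgmt=IEEE8021X
            eap=TTLS
            identity="username"
            password="password"
            phase2="auth=MSCHAPV2"
            eapol_flags=0
        }

    Run with something like: wpa_supplicant -D wired -i eth0 -c /etc/wpa_supplicant/wired.conf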

    Read the article

  • Chrome 33 shows ugly, blocky, pixelated fonts in Linux

    - by Andrew Mao
    After updating to the latest version of Chrome (33) on my Gentoo Linux box, certain sites such as GitHub have started rendering with ugly, pixelated, non-antialiased fonts. Small text is now basically impossible to read. Before this, GitHub had looked the same to me on Windows, Linux, and Mac computers. So what has happened here and how can it be fixed? EDIT: Appears to be fixed on the stable release of Chrome 34.

    Read the article

  • Shell Script, iterating over a folder

    - by Martin
    I have very basic shell-scripting knowledge. I have photos in an original folder inside many different folders, like this:

        folder
        + folder1
          + original
        + folder2
          + original
        + folder3
          + original
        + folder4
          + original

    Using mogrify, I'm trying to create thumbs under a thumb folder, following a structure like this:

        folder
        + folder1
          + original
          + thumb
        + folder2
          + original
          + thumb
        + folder3
          + original
          + thumb
        + folder4
          + original
          + thumb

    I'm a little lost on how to write the shell script that would iterate through it. I'm OK giving mogrify its settings, but I don't completely understand how to tell the script to iterate over each folder to run the mogrify command.
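
    A sketch of such a loop, assuming the layout above and bash; the -thumbnail geometry is a placeholder for your own mogrify settings:

        #!/bin/bash
        # For every */original directory, create a sibling thumb/ directory
        # and write resized copies into it, leaving the originals untouched.
        for orig in folder/*/original; do
            thumb="$(dirname "$orig")/thumb"
            mkdir -p "$thumb"
            mogrify -path "$thumb" -thumbnail 200x200 "$orig"/*.jpg
        done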

    Read the article

  • LAMP Setup, PHP's session_start permission denied

    - by Andrew
    I'm trying to set up a development environment for a legacy system that runs CentOS 4.8, PHP 4.3.9, and MySQL 4.1.22. I'm matching OS and software versions to keep the development server as close to the production server as possible. When I fire up PHPMyAdmin's setup script (version 2.11.10.1, of course), the installation errors out and I see these errors in my error log:

        [client 172.18.141.74] PHP Warning: session_start(): open(/var/lib/php/session/sess_b5b90f86bd3dcfad315ff24cb7483a79, O_RDWR) failed: Permission denied (13) in /home/www/intranet/phpmyadmin/libraries/session.inc.php on line 87
        [client 172.18.141.74] PHP Warning: Unknown(): open(/var/lib/php/session/sess_b5b90f86bd3dcfad315ff24cb7483a79, O_RDWR) failed: Permission denied (13) in Unknown on line 0
        [client 172.18.141.74] PHP Warning: Unknown(): Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/lib/php/session) in Unknown on line 0

    I've done some searching on ServerFault and on teh Googles, and I see that a common reason for this error is that the session.save_path isn't writable by the www user. I also found where this path is set in /etc/php.ini. My session.save_path is set to:

        session.save_path = /var/lib/php/session

    I've since changed the owner and the group of /var/lib/php/session and still have the same error. Here's the result of ls -la for /var/lib/php:

        [root@localhost php]# ls -la
        total 24
        drwxrwxr-x   3 www  www  4096 Oct 23 20:21 .
        drwxr-xr-x  17 root root 4096 Oct 23 20:31 ..
        drwxrwx---   2 www  www  4096 Jun  1  2009 session

    ...But I'm still getting the same error. Is there another possibility for why I'm getting this error?
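
    A few checks worth running, covering the usual suspects (Apache running as an unexpected user, stale session files left by another owner, or SELinux on CentOS):

        # Which user is Apache actually running as?
        ps aux | grep httpd | head -3
        # Are there old session files owned by someone else?
        ls -la /var/lib/php/session
        # Is SELinux enforcing, and does the directory carry the right context?
        getenforce
        restorecon -Rv /var/lib/php/session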

    Read the article

  • List/remove files with filenames containing a date string that's "more than a month ago"?

    - by Martin Tóth
    I store some data in files which follow this naming convention:

        /interesting/data/filename-YYYY-MM-DD-HH-MM

    How do I find the ones whose date in the file name is < now - 1 month, and delete them? Files may have changed since they were created, so searching by last-modification date is no good. What I'm doing now is filtering them in Python:

        prefix = '/interesting/data/filename-'

        import commands
        names = commands.getoutput('ls {0}*'.format(prefix)).splitlines()

        from datetime import datetime, timedelta
        all_files = map(lambda name: {
            'name': name,
            'date': datetime.strptime(name, '{0}%Y-%m-%d-%H-%M'.format(prefix))
        }, names)

        month = datetime.now() - timedelta(days = 30)
        to_delete = filter(lambda item: item['date'] < month, all_files)

        import os
        map(lambda item: os.remove(item['name']), to_delete)

    Is there a (one-liner) bash solution for this?
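
    A sketch of a bash answer: because YYYY-MM-DD-HH-MM sorts lexicographically, a plain string comparison against a cutoff built with GNU date is enough:

        cutoff=$(date -d '30 days ago' +%Y-%m-%d-%H-%M)
        for f in /interesting/data/filename-*; do
            # compare the timestamp part of the name as a string
            [[ "${f#/interesting/data/filename-}" < "$cutoff" ]] && rm -- "$f"
        done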

    Read the article

  • Proxy Error 502 "Reason: Error reading from remote server" with Apache 2.2.3 (Debian) mod_proxy and Jetty 6.1.18

    - by Martin
    Apache is receiving requests at port :80 and proxying them to Jetty at port :8080. The error page says:

        The proxy server received an invalid response from an upstream server
        The proxy server could not handle the request GET /.

    My dilemma: everything works fine normally (fast requests, and requests taking a few seconds or a few tens of seconds, are processed OK). Problems occur when request processing takes long (a few minutes?). If I issue the request directly to Jetty at port :8080, it is processed OK. So the problem likely sits somewhere between Apache and Jetty, where I am using mod_proxy. How to solve this? I have already tried some "tricks" related to KeepAlive settings, without luck. Here is my current configuration, any suggestions?

        #keepalive Off                      ## I have tried this, does not help
        #SetEnv force-proxy-request-1.0 1   ## I have tried this, does not help
        #SetEnv proxy-nokeepalive 1         ## I have tried this, does not help
        #SetEnv proxy-initial-not-pooled 1  ## I have tried this, does not help
        KeepAlive 20                        ## I have tried this, does not help
        KeepAliveTimeout 600                ## I have tried this, does not help
        ProxyTimeout 600                    ## I have tried this, does not help

        NameVirtualHost *:80
        <VirtualHost _default_:80>
            ServerAdmin [email protected]
            ServerName www.mydomain.fi
            ServerAlias mydomain.fi mydomain.com mydomain www.mydomain.com
            ProxyRequests On
            ProxyVia On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyRequests Off
            ProxyPass / http://www.mydomain.fi:8080/ retry=1 acquire=3000 timeout=600
            ProxyPassReverse / http://www.mydomain.fi:8080/
            RewriteEngine On
            RewriteCond %{SERVER_NAME} !^www\.mydomain\.fi
            RewriteRule /(.*) http://www.mydomain.fi/$1 [redirect=301L]
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            ServerSignature On
        </VirtualHost>

    Here is also the debug log from a failing request:

        74.125.43.99 - - [29/Sep/2010:20:15:40 +0300] "GET /?wicket:bookmarkablePage=newWindow:com.mydomain.view.application.reports.SaveReportPage HTTP/1.1" 502 355 "https://www.mydomain.fi/?wicket:interface=:0:2:::" "Mozilla/5.0 (Windows; U; Windows NT 6.1; fi; rv:1.9.2.10) Gecko/20100914 Firefox/3.6.10"
        [Wed Sep 29 20:20:40 2010] [error] [client 74.125.43.99] proxy: error reading status line from remote server www.mydomain.fi, referer: https://www.mydomain.fi/?wicket:interface=:0:2:::
        [Wed Sep 29 20:20:40 2010] [error] [client 74.125.43.99] proxy: Error reading from remote server returned by /, referer: https://www.mydomain.fi/?wicket:interface=:0:2:::
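
    One thing worth ruling out on the Jetty side: the connector's maxIdleTime (in milliseconds) must outlast the slowest request, or Jetty closes the connection while Apache is still waiting, which can surface as exactly this "error reading status line" failure. A sketch for jetty.xml, using Jetty 6's connector class:

        <Call name="addConnector">
          <Arg>
            <New class="org.mortbay.jetty.nio.SelectChannelConnector">
              <Set name="port">8080</Set>
              <!-- allow up to 10 minutes of idle time, matching ProxyTimeout -->
              <Set name="maxIdleTime">600000</Set>
            </New>
          </Arg>
        </Call>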

    Read the article

  • nginx redirect proxy

    - by andrew
    I have a web app running on an nginx server on local IP 192.168.0.30:80. I have this in my /etc/hosts:

        127.0.0.1 w.myapp.in

    If someone accesses my app using the "w" subdomain, it shows a WebDAV interface; otherwise it runs normally (for example, http://myapp.in goes into the app, and http://w.myapp.in goes into the WebDAV interface; this is done within the app, nginx has nothing to do with it). Because I don't have DNS or anything like that, users must access the app by IP. A problem appears if someone wants to access the WebDAV interface, because you cannot access the app by a subdomain (unless you write a line in your local hosts file, which is not a solution).

    A possible solution: set up the nginx server so that if someone calls http://192.168.0.30 (on port 80), it goes normally into the app, but if a user tries to access, say, http://192.168.0.30:81 (another defined port), it redirects internally to w.myapp.in, and the app sees the subdomain. Given the app, can this be done? If yes, what should I put in the nginx config file? And if you guys think of a better solution, I'm open to any.
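
    A sketch of that second-port idea, assuming the app switches on the Host header alone (the address and names mirror the question):

        server {
            listen 192.168.0.30:81;
            location / {
                # hand the request to the app as if it came for w.myapp.in
                proxy_pass http://127.0.0.1:80;
                proxy_set_header Host w.myapp.in;
            }
        }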

    Read the article

  • Windows 7 System File Checker (sfc) not working

    - by Andrew
    I have a strange problem with Windows 7 RTM. I'm running Ultimate 64-bit edition. Whenever I run sfc /scannow I get this error: Windows Resource Protection could not start the repair service. My research so far has told me that I need to set the Windows Module Installer and Windows Installer services' startup types to Manual. They already were, so I manually started those services and tried again. No luck. I've even booted into my Windows 7 repair disc and tried running sfc /scannow from that. All I get is this: A repair operation is already pending. Restart and try again. History: I'm trying to run sfc because I am unable to open any images in Windows Photo Viewer. Whenever I try, I get the error "Class not registered." I believe the problem started after I installed Gladinet, but I can't be sure. I've uninstalled Gladinet, but the problem remains. System Restore was disabled (yes, I know I'm stupid - you don't have to remind me). Please help. Thanks.

    Read the article

  • Insert Hyperlink via VBA

    - by Martin
    I have a Word VBA macro that loops through a directory and writes the file paths of files selected by some criteria into a new Word document. Works well as plain text (as part of a loop):

        wdDocResults.Content.InsertAfter objFile.Path & Chr(13)

    However, I'd like them to be hyperlinks. The following works as a single macro, but when called from within another script, it does nothing at all (no matter whether the path is provided as a variable or a string, or as H:... or \\MyServernameAsNetDrive...):

        ActiveDocument.Hyperlinks.Add Anchor:=Selection.Range, Address:=objFile.Path, _
            SubAddress:="", ScreenTip:="", TextToDisplay:=objFile.Path

    If I try to select the current line in order to make sure something is selected at the right place, I get "error: out of memory":

        wdDocResults.Content.InsertAfter objFile.Path
        Selection.Expand wdLine
        ActiveDocument.Hyperlinks.Add Anchor:=Selection.Range, Address:=objFile.Path, _
            SubAddress:="", ScreenTip:="", TextToDisplay:=objFile.Path

    I also tried inserting a string resembling the hyperlink field code ({ HYPERLINK "..." }), which is of course not recognized... Any help is appreciated. Thanks in advance!
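
    A sketch that avoids Selection entirely by anchoring the hyperlink on a collapsed Range at the end of the results document (wdDocResults and objFile as in the snippets above):

        Dim rng As Word.Range
        Set rng = wdDocResults.Content
        rng.Collapse Direction:=wdCollapseEnd
        ' Add the link on the results document rather than ActiveDocument,
        ' so it works no matter which document has focus.
        wdDocResults.Hyperlinks.Add Anchor:=rng, Address:=objFile.Path, _
            SubAddress:="", ScreenTip:="", TextToDisplay:=objFile.Path
        wdDocResults.Content.InsertAfter Chr(13)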

    Read the article

  • Best practice for scaling a single application source to multiple nodes

    - by Andrew Waters
    I have an application which needs to scale horizontally to cover web and service nodes (at the moment they're all on one) but interact with the same set of databases and source files (both application code and custom assets). The database is no problem; it's handled already with replication in MongoDB. Also, the configuration of the servers is the same (100% Linux). This question is literally about sharing a filesystem between machines so that its content is always correct, regardless of the node accessing it. My two thoughts so far have been NFS and SAN: SAN is prohibitively expensive, and NFS shows some performance issues on the second node with regard to glob()ing in PHP. Does anyone have recommended strategies or other techniques that don't involve sharding data across nodes, or any potential gotchas in NFS that may cause slow disk-seek times? To give you an idea of the scale, the main node initialises its application modules in ~0.01 seconds; the secondary takes ~2.2 seconds. They're VMs inside a local virtual network in ESXi, and ping time between them is ~0.3 ms.
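
    If NFS stays in the picture, the attribute cache is the usual lever for metadata-heavy calls like glob(); a sketch of an /etc/fstab line with relaxed attribute caching (server, paths, and values are placeholders to tune):

        # cache file attributes for up to 60s; skip atime updates on reads
        nfsserver:/srv/app  /srv/app  nfs  rw,noatime,actimeo=60,vers=3  0  0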

    Read the article

  • PRNG test suite: bitstream and stream length

    - by Martin Trigaux
    On the NIST website there is a tool called sts (Statistical Test Suite) that allows us to test the validity of a pseudo-random number generator based on a stream of bits given as input. When running the program, there are two variables I am not sure I understand: the stream length and the number of bitstreams. Is the stream length the size of the file? The number of bits inside? The size of one bitstream? Are the bitstreams subsets of the whole file? Chosen how? Let's say I have a text file containing 1,000,000 bits in ASCII. What should my arguments be? You can find the user manual here if needed (I didn't find an explanation of these variables in it). Thank you
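
    For what it's worth, a worked reading of the two parameters, assuming the manual's definitions hold: the bitstreams are consecutive, non-overlapping segments of the input, so stream length × number of bitstreams must not exceed the bits available. With a 1,000,000-bit file you could, for example, run 10 bitstreams of 100,000 bits each (10 × 100,000 = 1,000,000), giving every statistical test 10 independent samples for its pass proportion.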

    Read the article

  • How to install/configure ffmpeg to compress mp4 videos for flash player delivery?

    - by Andrew Fulton
    We have a Flash web app that creates interactive video, and we are using ffmpeg to do some compression/resizing when a user "publishes" their project. The user can upload FLV files and MP4 files, both of which play fine in the Flash UI before publishing. After publishing, the FLV files work fine, but the MP4 files will not play in the Flash player: audio will play but video won't. The MP4 files play fine if I download them and open them in the QuickTime player, but if I open them in the Adobe Media Player it reports "The media file does not contain a supported video track". If I open the movie inspector in QuickTime, it tells me that the original file is an "h264" video and the ffmpeg-processed ones are "mpeg-4". I have tried forcing it to H.264 by adding flags like -f h264 and -vcodec h264, but I get a screenful of errors (no frame, illegal POC type, sps_id out of range), ending with:

        Could not find codec parameters (Video: h264)

    h264 shows up if I run ffmpeg -formats and ffmpeg -codecs, and as I said the files play fine in QuickTime. Is there anything else I need to do to convince the Flash player to play them? Is there anything else I need to tell you about the server that will help?
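
    Very likely the missing piece: in ffmpeg, "h264" names the decoder only, while the H.264 encoder is libx264 (the build must have been configured with --enable-libx264). A sketch, assuming your build has it:

        # re-encode the video track as H.264; copy the audio stream unchanged
        ffmpeg -i input.mp4 -vcodec libx264 -acodec copy -y output.mp4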

    Read the article

  • How can I transfer a logged-in user's login data from one server to another?

    - by Martin
    I have one server, "A", where users can log in. Login is verified by an LDAP server, "L". I have a different server, "B", where users can log in too. Login is verified by the same LDAP server as before. Both servers are standard web servers with PHP. My goal is: if a user is logged in to server "A", and if he clicks a link to log in to server "B", the user should automatically be logged in without re-entering username and password. What is a good and secure way to achieve this? I can't submit the username and crypted password to server "B". I can't use the PHP session of server "A", because it does not exist on "B". Cookies won't work either. I think that there is a way, but I just can't see it. Any help is very much appreciated.
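
    One common pattern, sketched here with a hypothetical shared secret known to both servers (and hash_hmac, so PHP 5.1+): A puts a short-lived signed token in the link; B verifies the signature and timestamp, then starts its own session.

        <?php
        // On server A: build the handoff link. SECRET is shared out of band.
        define('SECRET', 'change-me');
        $user = $_SESSION['username'];
        $ts   = time();
        $sig  = hash_hmac('sha256', $user . '|' . $ts, SECRET);
        $link = 'https://server-b.example/sso.php?u=' . urlencode($user)
              . '&t=' . $ts . '&s=' . $sig;

        // On server B (sso.php): verify before trusting anything.
        $ok = hash_hmac('sha256', $_GET['u'] . '|' . $_GET['t'], SECRET) === $_GET['s']
           && time() - (int)$_GET['t'] < 60;   // token lives 60 seconds
        if ($ok) { session_start(); $_SESSION['username'] = $_GET['u']; }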

    Read the article

  • Routing / binding 128 IPs to one server

    - by Andrew
    I have an Ubuntu server with 128 IPs (static external IPs, 86.xx.xx.16), and I want to crawl pages through different IPs. The gateway is xx.xxx.xxx.1, the main IP is xx.xxx.xxx.16, and the other 128 IPs are xx.xxx.xxx.129/255. I tried this configuration in /etc/network/interfaces, but it doesn't work. It works if I remove the gateway for the aliases eth0:0 and eth0:1, so I think this is a routing problem.

        auto lo
        iface lo inet loopback

        auto eth0
        auto eth0:0
        auto eth0:1

        iface eth0 inet static
            address xx.xxx.xxx.16
            netmask 255.255.255.128
            gateway xx.xxx.xxx.1

        iface eth0:0 inet static
            address xx.xxx.xxx.129
            netmask 255.255.255.128
            gateway xx.xxx.xxx.1

        iface eth0:1 inet static
            address xx.xxx.xxx.130
            netmask 255.255.255.128
            gateway xx.xxx.xxx.1

    Also, please tell me how to "reset" every change that I made to networking and routing.

    Update: I removed the gateway and now it works; I can reach the website through all 128 IPs. But when I try to bind a socket connection in PHP to a specific IP, I get no answer:

        socket_bind($sock, "xx.xxx.xx.xxx");
        socket_connect($sock, 'google.com', 80);

    I tried to use a sniffer to see the packets, and I see the packet sent from the bound IP to google.com, but the connection can't be established. I don't know anything about the route command, but I have a feeling that it is the solution.
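
    For the record, the likely reason removing the gateways helped: only one default route can exist, so repeating the same gateway line under every alias makes ifup fail. Keep the gateway on eth0 alone; a sketch of an alias stanza:

        iface eth0:0 inet static
            address xx.xxx.xxx.129
            netmask 255.255.255.128
            # no gateway line here - the default route from eth0 serves all aliases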

    Read the article

  • How to test if DNS information has propagated?

    - by Andrew
    I set up a new DNS entry for one of my subdomains (I haven't set up any Apache virtual hosts or anything like that yet). How can I check that the DNS information has propagated? I figured I could simply ping my.subdomain.com and that, if it resolved, it would show the IP address I specified in the A record. However, I don't know if that assumption is correct. What is the best way to check this information?
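
    A sketch of the usual check with dig, querying the zone's own nameserver and a public resolver side by side (names are placeholders):

        # the authoritative server should show the new A record immediately
        dig +short my.subdomain.com A @ns1.yourdnshost.example
        # a public resolver shows what the rest of the world currently sees
        dig +short my.subdomain.com A @8.8.8.8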

    Read the article

  • How to reliably receive a message from AWS that my instance was rebooted / terminated / stopped?

    - by Andrew Smith
    I have Nagios, and I want it to stop monitoring instances when they are stopped from the console. The requirements are:

    - The message passed from AWS is 100% reliable; e.g. when Nagios is down and the message cannot be delivered, it will be re-delivered promptly when Nagios is up.
    - The message passes quickly.
    - There is no need to scan the status of all instances via the EC2 API all the time, only once in a while.

    Many thanks!

    Read the article

  • Backup files from Linux client to Windows Server

    - by Andrew
    I'm trying to back up my files from my Linux box to my Windows Server 2008 as a push, so that when I delete them from my Linux box, they remain on my Windows Server. I've found lots of sources that are similar, but most results are from Windows to Linux. I managed to find slightly more similar cases, like "Using rsync and cygwin to Sync Files from a Linux Server to a Windows Notebook PC" and "rsync from Windows PC to remote Linux server", with the most similar being a backup from Linux to Windows Server, but through a pull from the Windows Server. Initially I used Unison, because I thought having the two-way capability would come in handy and I would just have to set some configuration to make it one-way. Unfortunately, I couldn't find the right configuration and only managed to synchronize using the command:

        unison "profile" -ui text -auto -silent

    When I deleted the files on my Linux box, the files on the server got deleted too, which of course isn't what I want. When I looked for options in Unison, I only discovered the -force option, which didn't help, since what I wanted was an incremental update to the server. I found out I could achieve this using rsync and the -a (archive) option, which would keep adding files even if I deleted them from my Linux box. I installed Cygwin on my Windows Server and configured an SSH daemon, but I can't seem to get it working. I've also already configured Windows Firewall to open port 22 (both inbound and outbound). I used the following command from my Linux box:

        rsync -avrzn /folder/to/be/backed/up/ [email protected]:/cygdrive/c/place/to/store/backed/up/files

    (a - archive, v - verbose, r - recurse into subdirectories, z - compress, n - dry run), but it just won't work. Can anyone help me out? I don't mind using either Unison or rsync, as long as it achieves what I want.
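
    One detail worth a second look in the command above: -n is rsync's dry-run flag, so as written the command transfers nothing. A sketch of the real run (host and paths are placeholders), with the transport made explicit:

        # -a already implies -r; -n removed so files are actually copied.
        # With no --delete flag, files removed locally remain on the server.
        rsync -avz -e ssh /folder/to/be/backed/up/ \
            user@server:/cygdrive/c/place/to/store/backed/up/files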

    Read the article

  • Shorewall SHOW DYNAMIC command doesn't work

    - by Andrew Burns
    Setting up shorewall dynamic zones, http://shorewall.net/Dynamic.html shows the command shorewall show dynamic zone, where zone is one of your zones. I can get the add and delete commands to work, but not the show dynamic command. Here is a shell session, with output from ipset list that proves that the items are indeed there:

        $ ipset list CPREM_br0
        Name: CPREM_br0
        Type: hash:ip
        Header: family inet hashsize 1024 maxelem 65536
        Size in memory: 16520
        References: 66
        Members:
        192.168.85.153

        $ shorewall add br0:192.168.85.200 CPREM
        Host br0:192.168.85.200 added to zone CPREM

        $ shorewall show dynamic CPREM

        $ ipset list CPREM_br0
        Name: CPREM_br0
        Type: hash:ip
        Header: family inet hashsize 1024 maxelem 65536
        Size in memory: 16536
        References: 66
        Members:
        192.168.85.153
        192.168.85.200

        $ shorewall delete br0:192.168.85.200 CPREM
        Host br0:192.168.85.200 deleted from zone CPREM

        $ ipset list CPREM_br0
        Name: CPREM_br0
        Type: hash:ip
        Header: family inet hashsize 1024 maxelem 65536
        Size in memory: 16536
        References: 66
        Members:
        192.168.85.153

    I am using the packaged version from Ubuntu 12.04 (4.4.26.1-1).

    Read the article
