Search Results

Search found 5793 results on 232 pages for 'ftp sync'.

Page 156/232

  • Setting up podcasting for a non-tech user

    - by Force Flow
    I have a user who wants to start making podcasts, but they only have basic skills when it comes to technology, so I was trying to put together a process that would be easy for them to follow. To upload the files (the MP3s and RSS feed files), I have an Explorer shortcut to their FTP space. To record the podcast, I was going to use either Audacity or PodProducer. For the RSS feed, I was looking for a podcast RSS generator of some sort. In my search for this, I've come across a lot of dead links and a lot of paid tools, so I haven't come up with anything too useful. Is there a free, reliable web service or Windows-based tool available that folks like to use?
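
    For reference, a podcast feed is just an RSS 2.0 file with one <enclosure> element per episode, so even a hand-edited template could work for a non-tech user; a minimal sketch (all titles and URLs are hypothetical):

        <?xml version="1.0" encoding="UTF-8"?>
        <rss version="2.0">
          <channel>
            <title>My Podcast</title>
            <link>http://example.com/podcast/</link>
            <description>Short description of the show</description>
            <item>
              <title>Episode 1</title>
              <!-- length is the MP3's size in bytes -->
              <enclosure url="http://example.com/podcast/episode1.mp3"
                         length="12345678" type="audio/mpeg"/>
              <pubDate>Mon, 05 Mar 2012 10:00:00 GMT</pubDate>
              <guid>http://example.com/podcast/episode1.mp3</guid>
            </item>
          </channel>
        </rss>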

    Read the article

  • Linux servers seeing bad download performance behind Sonicwall firewall

    - by Joshua Penix
    I'm working with a pair of co-located CentOS Linux servers sitting behind a Sonicwall PRO 2040 Enhanced firewall running in transparent bridge mode. These servers are having a strange problem downloading files more than a few megabytes in size. For example, if I try to wget or FTP a copy of the Linux kernel from kernel.org, the first ~1-2MB will download at 600+K/s, and then throughput will drop off a cliff to 1K/s. I've reviewed all the firewall configuration settings for anything suspicious, but found nothing. More interestingly, I performed the same download with a Windows server sitting behind the same firewall, and it sailed right through at 600+K/s the whole way. Has anyone seen this? Where should I start looking to troubleshoot this problem?
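
    One classic culprit for this exact symptom (a fast start, then a collapse to ~1K/s, affecting Linux but not older Windows stacks) is the firewall mishandling TCP window scaling, which Linux uses aggressively. A hedged diagnostic to try on one of the CentOS boxes (both settings revert at reboot):

        # Temporarily disable TCP window scaling, then retry the download.
        # If throughput no longer collapses, the firewall is mangling scaled windows.
        sysctl -w net.ipv4.tcp_window_scaling=0

        # Less drastic: cap the receive buffer so the window stays small.
        sysctl -w net.ipv4.tcp_rmem="4096 65536 65536"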

    Read the article

  • Secure Apache Virtual Hosts?

    - by Dr Hydralisk
    I am going to host a few small sites on a VPS, and each of them is going to run my own custom PHP scripts. I am fairly certain they are secure (I did everything in the book, plus some things that are not in the book) to make sure they can't be exploited. But just to be safe, I want to know how I could secure each of the virtual hosts so that they can't escape from their own folder (if a hacker uploaded a shell, they could not go above the www folder, just as a legitimate user can't in FTP no matter how many times they click ..) on Debian and Apache.
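
    One common mitigation, assuming PHP runs as mod_php, is to pin each vhost's scripts inside its own tree with open_basedir; a hedged sketch (paths and hostname hypothetical):

        <VirtualHost *:80>
            ServerName site1.example.com
            DocumentRoot /var/www/site1/www
            # Deny PHP file access outside this vhost's tree (plus its tmp dir)
            php_admin_value open_basedir "/var/www/site1/www:/var/www/site1/tmp"
            php_admin_value upload_tmp_dir "/var/www/site1/tmp"
        </VirtualHost>

    Note that open_basedir only constrains PHP itself; for stronger isolation you'd look at running each site as its own user (suexec/FastCGI) or chrooting.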

    Read the article

  • rsync doesn't use delta transfer on first run

    - by ockzon
    I'm trying to synchronize a large local directory (with a batch file using rsync 3.0.7 on Cygwin, Windows 7 x64; 30k files, 200 GB in size) to a remote server (Debian x64 with kernel 2.6, rsyncd 3.0.7) over a slow internet connection (90 KB/s upload). I know almost all files are identical, and I verified that using md5sum locally and remotely. However, when executing rsync from my local machine, every file gets transferred completely the first time. When I terminate the batch file after a few transfers and run it again, the already-transferred files are skipped. But as soon as it gets to a file not yet transferred, it uploads the file as a whole again instead of noticing that the checksum is the same locally and remotely. The batch file calling rsync looks like this (backslashes and line breaks added here for readability):

        c:\cygwin\bin\rsync.exe --verbose --human-readable --progress --stats \
            --recursive --ignore-times --password-file pwd.txt \
            /cygdrive/d/ftp/data/ \
            rsync://[email protected]:33400/data/ | \
            c:\cygwin\bin\tee.exe --append rsync.log

    I experimented with the following parameters in varying combinations, but that didn't help either: --checksum, --partial, --partial-dir=/tmp/.rsync-partial, --compress

    Read the article

  • Using rsync with link-dest from HFS to NTFS

    - by Tom
    Hi, I'm having a problem with rsync. I'm on a Mac and I'd like to sync my everyday changes from my HFS+ partition to my NTFS-formatted networked drive. Pretty simple, and everything goes well, except that it re-syncs every file each time. Here's my script:

        #! /bin/sh
        snapshot_dir=/Volumes/USB_Storage/Backups
        snapshot_id=`date +%Y%m%d%H%M`

        /usr/bin/rsync -a \
            --verbose \
            --delete --delete-excluded \
            --human-readable --progress \
            --one-file-system \
            --partial \
            --modify-window=1 \
            --exclude-from=.backup_excludes \
            --link-dest ../current \
            /Users/tommybergeron/Desktop/Brainpad \
            $snapshot_dir/in-progress

        cd $snapshot_dir
        rm -rf $snapshot_id
        mv in-progress $snapshot_id
        rm -f current
        ln -s $snapshot_id $snapshot_dir/current

    Could someone help me out, please? I've been searching for like two hours and I'm still clueless. Thanks so much.

    Read the article

  • Firewall GPO not applying despite being enumerated by gpresult

    - by jshin47
    I have a need to open up the admin$ share on all of my domain's client PCs, and I am trying to do so using group policy. I defined computer policy for Windows Firewall with Advanced Security in a policy object linked to the appropriate container and added the appropriate rules. However, they are not being applied! I feel like I have tried all of the obvious steps: I've checked gpresult, and the resulting set of policy is the way that I would expect it to look. I've run gpupdate /force and gpupdate /sync on a few client computers, but no matter what I do they don't seem to respond to my changes. I know that other computer policies in the GPO are being applied, so it is strange that these are not. I have also disabled exceptions on clients in the firewall GPO, but that doesn't seem to be applying either. Here is a screenshot of firewall.cpl from a client: basically, although other options in the same GPO ARE applied for computer policy, the firewall settings seem to be ignored.
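
    When a firewall GPO shows up in gpresult but has no visible effect, it can help to ask the firewall service directly what it thinks is in force; a hedged check from an elevated prompt on a client (assuming Vista or later):

        rem Show each profile's state and whether settings come from Group Policy
        netsh advfirewall show allprofiles

        rem List the effective inbound rules (look for the file-sharing/admin$ rules)
        netsh advfirewall firewall show rule name=all dir=in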

    Read the article

  • Download all files, directories, and subdirectories from an HTTP server

    - by Jack
    I want to download all directories, files, subdirectories, and so on from a remote HTTP server. I found some solutions for FTP servers, but they don't work for HTTP. So far I've had no luck with wget -r or -m: it downloads all the directories at the root level and their respective index.html files, but not all the files and sub-directories under them (note that a sub-directory may contain further directories, and so on). Fix the tags for me if needed. Note: I'm not a native English speaker; sorry for the bad English.
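
    For reference, the usual wget flags for mirroring an HTTP directory tree (assuming the server exposes plain directory index pages) look like this; --no-parent keeps wget from climbing above the start URL, and -e robots=off is only needed if a robots.txt is blocking the crawl:

        wget --recursive --level=inf --no-parent \
             --reject "index.html*" \
             -e robots=off \
             http://example.com/files/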

    Read the article

  • Logging upload attempts with proftpd

    - by Amit Sonnenschein
    I have a logging server that I use with external hardware. The idea is that a special piece of hardware uploads logs about its operation every few hours, and from the server I can do whatever I need to do with the information. The old server was getting a bit too old, so I've moved to a new one, where I installed LAMP, proftpd, and SSH (just the same as I had on the old server). Now, for some reason, the logs are not being uploaded, and I don't know why. The hardware uses direct FTP access. I've checked proftpd.log and saw that the connection is not being rejected (just to make sure I didn't make a mistake with the user/pass). My problem is that for some reason the upload itself is failing. It might be due to a wrong path (it's hard-coded in the hardware), but I can't really know, as proftpd won't give me any details. I've tried changing the log level to "debug", thinking it would give me more information, but I don't see any change. Is there any other way I can make sure proftpd logs EVERYTHING?
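
    A hedged proftpd.conf sketch that usually surfaces failed uploads: an ExtendedLog with the WRITE command class records every STOR attempt, and a high DebugLevel logs path resolution in the system log (log file paths are illustrative):

        # /etc/proftpd/proftpd.conf (snippet)
        DebugLevel   9
        LogFormat    default "%h %l %u %t \"%r\" %s %b"
        ExtendedLog  /var/log/proftpd/write.log WRITE default
        TransferLog  /var/log/proftpd/xferlog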

    Read the article

  • TCP/IP performance tuning under KVM/Qemu

    - by vpetersson
    With more and more companies switching to public cloud services, I'm curious what your thoughts are on TCP/IP tuning in the cloud. Is it worth bothering with? Given that you don't have access to the host server, you're somewhat limited, I presume. Let's say, for the sake of argument, that you're running three MongoDB servers in a replica set on FreeBSD or Linux that all sync over an internal network. I'd also be curious whether anyone has made actual performance benchmarks to back up their arguments. I benchmarked the various network drivers available for KVM/Qemu here, but I'm curious what the gurus here suggest to tune further. I started playing around a bit with the tuning recommendations suggested over here, but interestingly enough I saw a decrease in performance rather than an increase; perhaps I didn't fully understand the tweaks. Update: I did a few more benchmarks and posted the result here. Unfortunately the result wasn't really what I expected.
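
    For reference, the knobs usually meant by guest-side "TCP tuning" are the socket buffer limits; a hedged /etc/sysctl.conf sketch for Linux (the 16 MB values are purely illustrative, and as the question notes, larger buffers can hurt as easily as help under virtio):

        # Maximum socket buffer sizes
        net.core.rmem_max = 16777216
        net.core.wmem_max = 16777216
        # min / default / max bounds for TCP buffer auto-tuning
        net.ipv4.tcp_rmem = 4096 87380 16777216
        net.ipv4.tcp_wmem = 4096 65536 16777216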

    Read the article

  • How can I realize a video wall on 3-9 PCs with VLC?

    - by Luca
    Hello! I have to create a video wall of 3 to 9 monitors, where every monitor has its own PC. Currently I stream 9 movies from a server with different instances of VLC, but I could also play the relevant video on every machine with a single player; that part is no problem. The real problem is that I really don't know how to sync the videos over a LAN. Unfortunately, the NETSYNC module inside VLC is NOT working. Here is some info about my setup: a video wall of 3 to 9 monitors; 3 to 9 PCs, all with the same configuration; a gigabit router+switch for the "dedicated" LAN. I'm really stuck in this situation; if anyone has an idea, or even a completely different solution, please share it with me! Thanks a lot in advance! :)
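
    Absent a working netsync, one crude workaround is to start identical local copies of the video on every PC at the same wall-clock second, with the clocks NTP-synced first; a rough sketch (hostnames and the video path are hypothetical, and SSH keys are assumed). Note this only aligns the start and does nothing about drift during playback:

        #!/bin/sh
        # Launch the same locally stored video on every wall PC at an agreed second.
        START=$(($(date +%s) + 10))        # start 10 seconds from now
        for host in wall1 wall2 wall3; do
            ssh "$host" "while [ \$(date +%s) -lt $START ]; do sleep 0.1; done; \
                         cvlc --fullscreen /videos/wall.mp4" &
        done
        wait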

    Read the article

  • What does Libre Office do to an existing Excel sheet to bloat its size?

    - by Sn3akyP3t3
    I try to avoid using LibreOffice on existing Excel-created workbooks because of the potential for unpleasant results. In this case, LibreOffice bloated the size of the workbook for some reason unknown to me. I would like to know if LibreOffice does this to all Excel workbooks, or if something in this particular workbook causes it. Software involved: Microsoft Office Excel 2010; LibreOffice 3.5.x (exact version unknown); Dropbox (merely to sync changes). Platforms involved: Office on Windows (master of the obvious on that one, I suppose..); LibreOffice on Mac OS 10.6. Types of data stored in this workbook: text; integers; one column with a simple formula spanning the entire worksheet, one instance per row (=CONCATENATE(A2285,B2285,D2285), =CONCATENATE(A2286,B2286,D2286), etc.); 3,500-plus rows in total. Here is a photo with details described within, but I'll go ahead and explain the photo as well: this screenshot is from the Dropbox history of the .xlsx workbook. Versions 61-68 were Office Excel; versions 69-73 were LibreOffice.

    Read the article

  • Rsync over NFS with QoS: How to view real transfer speed?

    - by Ian Mackinnon
    We have a bandwidth limit between a Linux server and a NAS, created using 'tc' with an IP filter. When writing to an NFS mount of the NAS, rsync claims a very high transfer speed for each file and then waits a long time before acknowledging that everything has finished. The total time taken is consistent with the QoS limit and the time taken by the same transfer over FTP. Why does the write to the NFS mount report higher transfer speeds than are actually happening over the network? How can I monitor the actual bandwidth of the transfer?
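
    rsync is most likely reporting the speed at which writes land in the client's page cache, not what crosses the wire; NFS buffers the data and flushes it out at the shaped rate. To watch the real rate, you can read the counters on the tc class that enforces the limit; a hedged sketch (device name and tools as examples):

        # Byte/packet counters per shaping class; under watch, the deltas
        # between refreshes show the live rate of the limited class.
        watch -n 1 tc -s class show dev eth0

        # Or observe per-connection traffic on the interface directly:
        iftop -i eth0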

    Read the article

  • Backing up data in an encrypted way

    - by Eli Bendersky
    I have the following use case:

    1. There's some data on my PC I want to periodically back up online
    2. I own some hosting, so I want to use that for the backups; I don't want to pay for a separate backup service
    3. I want to encrypt my data locally prior to moving it to the server

    I have no problem writing scripts to automate the process (say, periodically generate the backup and upload it by FTP to my server), but my main question is about step 3, the encryption: what is the recommended way to encrypt my files (say, collected into a .ZIP) prior to uploading them to the server? P.S. TrueCrypt seems popular, but it's not quite what I'm looking for, since I don't want the files to be constantly encrypted here on my PC.
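
    One approach that fits the script-it-yourself plan: archive, then encrypt symmetrically with GnuPG before upload; a minimal sketch (paths, passphrase handling, and the FTP upload via curl are all placeholders to adapt):

        #!/bin/sh
        # Archive, encrypt with a passphrase (AES-256), then upload.
        tar czf backup.tar.gz /path/to/data
        gpg --batch --symmetric --cipher-algo AES256 \
            --passphrase-file /root/.backup-pass \
            backup.tar.gz                    # writes backup.tar.gz.gpg
        curl -T backup.tar.gz.gpg ftp://user:password@example.com/backups/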

    Read the article

  • How to reliably synchronise file servers between London and Shanghai?

    - by Andy S
    We have two offices, one in London and one in Shanghai, each needing to be able to access the same set of files. This means we need a solid, speedy means of synchronising a set of folders between servers at either office. They're likely to be Windows servers, but we could look at Linux boxes if the software side makes more sense on *nix. We've considered Rsync, Unison, Gluster, and a few other options, but none of them seem capable of reliably keeping the servers in sync between such distant office locations. Each office is on DSL connectivity over the open internet, so encryption is also a factor. Does anyone have any hints for getting the servers synchronising in as close to real time as possible, without dying constantly? Andy
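
    If near-real-time proves unachievable over those links, a scheduled rsync over SSH is the boring fallback that at least covers the encryption requirement; a sketch of a one-way pull (host and paths hypothetical). Note that naive two-way rsync can silently lose concurrent edits, which is what tools like Unison try to address:

        # e.g. run from cron every 15 minutes on the London server:
        # */15 * * * * /usr/local/bin/pull-shanghai.sh
        rsync -az --partial --delete -e ssh \
            shanghai.example.com:/srv/shared/ /srv/shared/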

    Read the article

  • proftpd - TLS connection hangs authenticating

    - by greydet
    I set up a proftpd server that uses a TLS/SSL certificate for authentication. Everything works well when I connect through lftp or FileZilla (with an explicit TLS connection). But once I attempt a plain FTP connection from FileZilla, the USER command ends with a 550 response (SSL/TLS required). After that, any further connection through lftp or FileZilla (with an explicit TLS connection) hangs while authenticating. Does anyone know how to work around this issue? Is there a way to ask FileZilla to automatically use TLS/SSL if required? I am using Ubuntu Server 10.04 with proftpd 1.3.2c. There is no error message in the log files.
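
    For comparison, the relevant mod_tls block usually looks like the hedged sketch below; TLSRequired on is what produces the 550 for plain FTP logins, and on the client side the protocol is chosen per site (an explicit FTP-over-TLS entry in FileZilla's Site Manager) rather than negotiated automatically from a plain FTP connection:

        <IfModule mod_tls.c>
            TLSEngine                on
            TLSLog                   /var/log/proftpd/tls.log
            TLSRSACertificateFile    /etc/ssl/certs/proftpd.crt
            TLSRSACertificateKeyFile /etc/ssl/private/proftpd.key
            # "on" rejects plain FTP logins with 550; "off" allows both
            TLSRequired              on
        </IfModule>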

    Read the article

  • Gmail exchange settings for desktop mail clients?

    - by Abhishek
    So I have an iPad, and I have my Gmail account configured there via Microsoft Exchange... I don't know a lot about the underlying technologies, but using Exchange is mind-blowingly awesome... I mean, I receive mail INSTANTLY, and being able to sync things like my calendars and my contacts at the same time is just amazing... So enough gloating, and onwards to the real problem... How do I do the same thing on my Mac? Any email client will do... I've tried both the built-in Mac Mail (4.5) app and Outlook for Mac 2011... and I can't get it to work in either...
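
    For what it's worth, Google's Exchange ActiveSync support ("Google Sync") was aimed at mobile devices; desktop clients are expected to combine separate protocols instead. The usual Gmail settings for a desktop client, from memory and worth double-checking against Google's current documentation:

        Incoming (IMAP):  imap.gmail.com, port 993, SSL
        Outgoing (SMTP):  smtp.gmail.com, port 587, TLS (or port 465, SSL)
        Username:         your full Gmail address
        Calendars:        CalDAV (add the Gmail account in iCal)
        Contacts:         Address Book's "Synchronize with Google" option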

    Read the article

  • mdadm - Recovering a 'split' RAID1 array

    - by Hamza
    I have two drives that used to be part of a single RAID1 volume but it appears that one of them went offline for some time, something I've noticed just now when I rebooted my system. I now seem to have two RAID volumes, as reported by:

        # cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md126 : active raid1 sdc[1]
              2096116 blocks super 1.2 [2/1] [_U]

        md127 : active (auto-read-only) raid1 sdb[0]
              2096116 blocks super 1.2 [2/1] [U_]

        unused devices: <none>

    Not exactly sure where to go from here. How can I merge and re-sync these volumes without data loss? Thanks.
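
    The usual recovery, assuming md126 (the [_U] array containing sdc) holds the current data, is to dissolve the stale single-disk array and hand its member back, letting mdadm resync it. A hedged sketch; verify the device roles with --examine first, because re-adding the wrong disk overwrites it:

        # Inspect both members and decide which one has the current data.
        mdadm --examine /dev/sdb /dev/sdc

        # Stop the stale array, then add its disk to the surviving one.
        mdadm --stop /dev/md127
        mdadm /dev/md126 --add /dev/sdb    # sdb gets rebuilt from sdc

        cat /proc/mdstat                   # watch the resync progress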

    Read the article

  • OS X Login Authentication Against Leopard Server

    - by mattdwen
    I am doing a few tests with OS X Server before I have to do a deployment in a few months. I have configured Open Directory and created a few users. I've configured Directory Utility on a 10.5 client, but the login authentication doesn't work the way I would expect. I would expect to be able to use a username/password from any user created in Open Directory to log into the client. Instead, it appears I need to create a local user, which you then sync with a directory user using Directory Utility. Alternatively, if I add an Active Directory config to the client, I can use any AD user, as I would expect. Am I hoping for the impossible, or is something likely wrong with the configuration?

    Read the article

  • SQL Server 2008 log shipping without a UNC drive: how?

    - by samsmith
    My real question here is... is there a tool I can use? (E.g., I have a lot to do, and would prefer not to script it all up myself!) Is anyone using the Red Gate one? (Hmmm, they had a tool for this, but I do not see it on their web site now...) I have a primary web app at Rackspace and am setting up a backup copy of the app in another data center. I want to use SQL log replication to sync the db, using SQL Server Web Edition. TIA for suggestions and insight!

    Read the article

  • Linux: disable USB without disabling power

    - by Ergot
    TLDR: I want to toggle between the following usages of a USB port via the terminal: (1) use it like a normal USB port, or (2) only supply energy to charge. Story: I recently got something like a Magna Doodle that can save your drawings to PDF, which can be moved to your computer via USB afterwards. The thing is, you can't save anything while it's plugged in. Because USB is the only way to charge it, because it bugs me that I can't find a software solution, and out of laziness, I want to keep it plugged in and toggle the connection to the computer only when needed. I noticed that it charges and is usable when it is plugged in while the computer is shut down or suspended, so I guess there's a way to do it. Tech info: computer: ThinkPad X201; Linux kernel: 3.14.5-1-ARCH; "Magna doodle": Boogie Board Sync
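
    On a kernel this recent, sysfs exposes a per-device "authorized" flag that refuses to use a device while leaving bus power up, which sounds like a plausible fit; a hedged sketch (the 1-2 device path is hypothetical; find yours under /sys/bus/usb/devices). Whether the board will charge at the unconfigured-device power budget is something to test:

        # Identify the device (the product strings help).
        grep . /sys/bus/usb/devices/*/product 2>/dev/null

        # Detach: the device stays powered but is no longer usable (run as root).
        echo 0 > /sys/bus/usb/devices/1-2/authorized

        # Reattach when you want to pull files off it.
        echo 1 > /sys/bus/usb/devices/1-2/authorized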

    Read the article

  • Gnome 3 - Unable to change date and time

    - by Chris Harris
    I am running Arch Linux with Gnome 3. Although my time and date settings in /etc/rc.conf show HARDWARECLOCK='UTC' and TIMEZONE='America/LosAngeles', I continue to get the timezone Europe/London. If I try to change the date and time via the GUI, it requires root access; after authorizing root access, the date and time may be changed, but after closing the GUI window it automatically reverts back to the previous incorrect timezone. I am able to use pool.ntp.org to sync my time to the correct one; however, this works only for the current session and does not stick. This solution is inconvenient, since there is not always network access. What other solutions are available for this problem?
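
    Since newer Arch installs increasingly ignore rc.conf for this, it may be worth setting the pieces directly; a hedged sketch of the classic manual method, run as root (note the timezone name needs an underscore: America/Los_Angeles):

        # Point /etc/localtime at the correct zoneinfo file.
        ln -sf /usr/share/zoneinfo/America/Los_Angeles /etc/localtime

        # Set the clock once from NTP, then store it to the hardware clock as UTC.
        ntpd -qg
        hwclock --systohc --utc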

    Read the article

  • NFS Issues in Gnome

    - by Alex
    I mount an NFSv4 export via /etc/fstab and mount, and use the shared folder in Nautilus. There are two issues:

    1. When I copy a large file (around 4 GB) to the NFS server, the progress bar rapidly goes to 2 GB and then basically stops moving, but the copy is still in progress; it is just not displayed well.
    2. When I disconnect from the network without unmounting the NFS share, Nautilus freezes.

    How can I work around that? /etc/exports on the server:

        /export/share 192.168.0.0/24(rw,sync,insecure,no_subtree_check,anonuid=1000,anongid=1000)

    /etc/fstab on the client:

        server:/share /mnt nfs4 soft,tcp

    Read the article

  • Virtual Server 2005 R2 kungfu

    - by AngryHacker
    Does Virtual Server 2005 R2 have a command-line interface that's versatile enough? Here is the situation. I run a Win2k VM on an old, memory-constrained machine. I allocate it 378 MB of RAM and the VM runs just fine. Once a month, inside the VM, I back up the (very large) database, compress it using 7-Zip, and FTP it to the backup site (all in a script). Unfortunately, the compression part takes a massive amount of RAM (far exceeding the 378 MB); it goes for the paging file, brings absolutely everything to a crawl, and literally takes 2-3 days if left unattended. So to fix this, I have to shut down the VM, temporarily give it 768 MB of RAM, and then the whole thing finishes in 20 minutes. So, is there a way to do the following automatically from the host machine in a script?

    1. Shut down the guest OS (I think I got this part)
    2. Change the RAM allocation from 378 to 768 MB
    3. Start the guest OS again
    4. Then, 1 hour later, do everything in reverse
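
    There's no rich standalone CLI, but Virtual Server 2005 exposes a COM API that is scriptable from VBScript (or PowerShell), which should cover this cycle; a hedged sketch from memory of that API, worth checking against the Virtual Server COM SDK docs (VM name hypothetical; memory can only be changed while the VM is off):

        ' change-ram.vbs - shut down, bump RAM, restart
        Set vs = CreateObject("VirtualServer.Application")
        Set vm = vs.FindVirtualMachine("Win2kVM")

        Set task = vm.GuestOS.Shutdown()   ' requires VM Additions in the guest
        task.WaitForCompletion(600000)     ' wait up to 10 minutes

        vm.Memory = 768                    ' RAM in MB; only valid while off

        Set task = vm.Startup()
        task.WaitForCompletion(600000)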

    Read the article

  • Enabling SFTP Access within PLESK

    - by spelley
    I have a client who wants to ensure his uploads are secure, so we are trying to enable SFTP for him on our Linux PLESK server. I have enabled SSH access to /bin/bash for FTP accounts and created a new user. When I attempt to SFTP using either the IP address or the domain name, this is the error FileZilla gives me:

        Error: Authentication failed.
        Error: Critical error
        Error: Could not connect to server

    Here is some basic information regarding the server: operating system Linux 2.6.24.5-20080421a; Plesk Control Panel version psa v8.6.0_build86080930.03 os_CentOS 5. I had read in some places that I should restart the SSH service in Server - Services; however, there is no SSH service in the list. I'm not really a server guy, so it's quite possible I'm missing something obvious. Thanks for any help that you guys can provide!
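
    A few hedged checks outside PLESK itself, since SFTP rides on sshd rather than on the FTP service (commands as typically found on CentOS 5):

        # Is sshd running and listening?
        service sshd status
        netstat -tlnp | grep :22

        # Does sshd have an sftp subsystem configured?
        grep -i '^Subsystem' /etc/ssh/sshd_config
        # expected something like: Subsystem sftp /usr/libexec/openssh/sftp-server

        # After any edits, restart sshd and watch the auth log during a login.
        service sshd restart
        tail -f /var/log/secure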

    Read the article

  • How to efficiently dump a huge MySQL innodb database?

    - by Jagbir
    I've got an Ubuntu 10.04 production MySQL database server where the total size of the databases is 260 GB, while the size of the root partition, where the DB is stored, is itself 300 GB. This essentially means around 96% of / is full and there's no space left for storing a dump/backup, etc. No other disk is attached to the server as of now. My task is to migrate this database to another server sitting in a different datacenter. The question is how to do that efficiently, with minimum downtime. I'm thinking along the lines of:

    1. Request that an extra drive be attached to the server and take a dump on that drive.
    2. Transfer the dump to the new server, restore it, and make the new server a slave of the existing one to keep the data in sync.
    3. When the migration is needed, break replication, update the slave config to accept read/write requests, and make the old server read-only so it won't entertain any write requests; then tell the app developers to update their config with the new IP address for the db.

    What are your suggestions to improve this, or any better alternative approach to this task?
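
    Step 1 may be avoidable entirely: mysqldump can be piped straight to the new server over SSH, so the dump never touches the full root partition; a hedged sketch (host and credentials hypothetical):

        # Stream the dump over the wire. --single-transaction gives a consistent
        # InnoDB snapshot without locking; --quick avoids buffering rows in RAM;
        # --master-data=2 records the binlog position needed to attach the slave
        # (requires binary logging on the source).
        mysqldump --single-transaction --quick --master-data=2 \
                  --all-databases -u root -p \
          | gzip \
          | ssh user@newserver.example.com 'cat > /data/alldb.sql.gz'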

    Read the article
