Search Results

Search found 21838 results on 874 pages for 'copy item'.

Page 331/874 | < Previous Page | 327 328 329 330 331 332 333 334 335 336 337 338  | Next Page >

  • Scripted forwarding for Outlook 2003

    - by John Gardeniers
    We have a staff member in sales who has gone onto a 4-day week (getting ready for retirement), so each Thursday afternoon her email needs to be forwarded to another user, and each Friday afternoon it needs to be set back. I'm using the VBS script below to do this, run via the Task Scheduler. Although the script appears to do its job, based on what I see when I view the user's Exchange settings, Exchange doesn't always recognise that the setting has changed. E.g. last Thursday the forwarding was enabled and worked correctly. On Friday the script did its thing to clear the forwarding, but Exchange continued to forward messages all weekend. I found that I can force Exchange to honour the changed setting by merely opening and closing the user's properties in ADUC. Of course I don't want to have to do that. Is there a non-manual way I can have Exchange read and honour the setting? The script (VBS):

      ' Call this script with the following parameters:
      '   SrcUser - the logon ID of the user whose account is to be modified
      '   DstUser - the logon ID of the person to whom mail is to be forwarded;
      '             use "reset" to clear the email forwarding
      SrcUser = WScript.Arguments.Item(0)
      DstUser = WScript.Arguments.Item(1)
      SourceUser = SearchDistinguishedName(SrcUser)   ' the user login name
      Set objUser = GetObject("LDAP://" & SourceUser)
      If DstUser = "reset" Then
          objUser.PutEx 1, "altRecipient", ""
      Else
          ForwardTo = SearchDistinguishedName(DstUser)   ' the contact common name
          objUser.Put "altRecipient", ForwardTo
      End If
      objUser.SetInfo

      Public Function SearchDistinguishedName(ByVal vSAN)
          Dim oRootDSE, oConnection, oCommand, oRecordSet
          Set oRootDSE = GetObject("LDAP://rootDSE")
          Set oConnection = CreateObject("ADODB.Connection")
          oConnection.Open "Provider=ADsDSOObject;"
          Set oCommand = CreateObject("ADODB.Command")
          oCommand.ActiveConnection = oConnection
          oCommand.CommandText = "<LDAP://" & oRootDSE.Get("defaultNamingContext") & ">;(&(objectCategory=User)(samAccountName=" & vSAN & "));distinguishedName;subtree"
          Set oRecordSet = oCommand.Execute
          On Error Resume Next
          SearchDistinguishedName = oRecordSet.Fields("DistinguishedName")
          On Error GoTo 0
          oConnection.Close
          Set oRecordSet = Nothing
          Set oCommand = Nothing
          Set oConnection = Nothing
          Set oRootDSE = Nothing
      End Function

    Read the article

  • Shell wrong encoding

    - by csch
    Somehow I managed to screw up my shell encoding. An example:

      root§server:ç£ cat --help
      Usage: cat ¡OPTION¿... ¡FILE¿...
      Concatenate FILE(s), or standard input, to standard output.

        -A, --show-all           equivalent to -vET
        -b, --number-nonblank    number nonempty output lines
        -e                       equivalent to -vE
        -E, --show-ends          display $ at end of each line
        -n, --number             number all output lines
        -s, --squeeze-blank      suppress repeated empty output lines
        -t                       equivalent to -vT
        -T, --show-tabs          display TAB characters as ^I
        -u                       (ignored)
        -v, --show-nonprinting   use ^ and M- notation, except for LFD and TAB
            --help     display this help and exit
            --version  output version information and exit

      With no FILE, or when FILE is -, read standard input.

      Examples:
        cat f - g  Output f's contents, then standard input, then g's contents.
        cat        Copy standard input to standard output.

      Report cat bugs to bug-coreutils§gnu.org
      GNU coreutils home page: <http://www.gnu.org/software/coreutils/>
      General help using GNU software: <http://www.gnu.org/gethelp/>
      For complete documentation, run: info coreutils 'cat invocation'
      root§server:ç£

    It should look like:

      root@server:~# cat --help
      Usage: cat [OPTION]... [FILE]...
      Concatenate FILE(s), or standard input, to standard output.

        -A, --show-all           equivalent to -vET
        -b, --number-nonblank    number nonempty output lines
        -e                       equivalent to -vE
        -E, --show-ends          display $ at end of each line
        -n, --number             number all output lines
        -s, --squeeze-blank      suppress repeated empty output lines
        -t                       equivalent to -vT
        -T, --show-tabs          display TAB characters as ^I
        -u                       (ignored)
        -v, --show-nonprinting   use ^ and M- notation, except for LFD and TAB
            --help     display this help and exit
            --version  output version information and exit

      With no FILE, or when FILE is -, read standard input.

      Examples:
        cat f - g  Output f's contents, then standard input, then g's contents.
        cat        Copy standard input to standard output.

      Report cat bugs to [email protected]
      GNU coreutils home page: <http://www.gnu.org/software/coreutils/>
      General help using GNU software: <http://www.gnu.org/gethelp/>
      For complete documentation, run: info coreutils 'cat invocation'
      root@server:~#

    I have no clue what went wrong. Do you have any ideas?
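
    A minimal recovery sketch, assuming the garbling comes from the terminal being knocked out of its normal state or from a broken locale; the en_US.UTF-8 value is an assumption, so substitute whatever "locale -a" actually lists on your system:

      reset                        # reinitialise the terminal state
      locale                       # check which LANG/LC_* values the shell currently uses
      locale -a                    # list the locales generated on this system
      export LANG=en_US.UTF-8      # assumed UTF-8 locale; pick one that exists on your box
      export LC_ALL=en_US.UTF-8

    If the output comes back correct after these, the fix can be made permanent in the shell profile (or /etc/default/locale on Debian-based systems).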

    Read the article

  • lighttpd: why using port >= 9000 does not work properly

    - by yejinxin
    I have a lighttpd server which works normally. I can access the website from outside (non-localhost) via http://vm.aaa.com:8080. Let's just assume that it's a simple static website, without PHP or MySQL. Now I want to copy this website as a test one (using another port) on the same machine, and I do not want to use a virtual host. So I just copied the whole file tree of the original server, including lighttpd's bin/, conf/, htdocs/ and lib/ folders and so on, and made the required changes, including editing lighttpd.conf. What confuses me is this: if I change the port to a number less than 9000, it works perfectly. But if the port is changed to a number equal to or greater than 9000, lighttpd can start, but I cannot access the new website from outside, while I CAN access it from inside (I mean on the same LAN or localhost). The access log from inside looks like this:

      vm.aaa.com:9876 10.46.175.117 - - [08/Oct/2012:13:18:47 +0800] "GET / HTTP/1.1" 200 15 "-" "curl/7.12.1 (x86_64-redhat-linux-gnu) libcurl/7.12.1 OpenSSL/0.9.7a zlib/1.2.1.2 libidn/0.5.6"

    The command I use to start lighttpd is:

      bin/lighttpd -f conf/lighttpd.conf -m lib/ -D

    My lighttpd.conf is like:

      server.modules = ( "mod_access", "mod_accesslog", )
      var.rundir = "/home/work/lighttpd_9876"
      server.port = 9876
      server.bind = "0.0.0.0"
      server.pid-file = var.rundir + "/log/lighttpd.pid"
      server.document-root = var.rundir + "/htdocs/"
      var.cronolog_path = "/home/work/lighttpd_9876/cronolog/sbin/cronolog"
      server.errorlog = ...
      accesslog.filename = ...
      ...

    So why is this happening? I've tried several different ports, still the same. Aren't all ports between 8000 and 65535 treated the same?
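
    A few hedged diagnostics, assuming the bind itself succeeds and something between the client and the server filters the higher ports; the port and hostname are the ones from the config above:

      netstat -tlnp | grep 9876         # confirm lighttpd really listens on 0.0.0.0:9876
      sudo iptables -nvL                # look for local firewall rules dropping ports >= 9000
      curl -v http://vm.aaa.com:9876/   # run once from the LAN and once from outside and compare

    If the listener and the local firewall both look clean, the usual suspect is an upstream firewall or security policy that only opens a limited port range.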

    Read the article

  • Need info on scripts and Autoforward through Exchange Server in Outlook 2010

    - by user103037
    I am using the information below to auto-forward my work emails to my BB via a Gmail account. The script works fine, but my work email asks, for every email sent, whether it is classified or unclassified. What would I add to the script below, and where, so that it auto-forwards only unclassified mail? I have written some VBA script to bypass the server's disabling of auto-forward. Basically it mimics the user forwarding the email rather than the server doing an auto-forward. It's pretty simple:

      Sub AutoForwardAllSentItems(Item As Outlook.MailItem)
          Dim strMsg As String
          Dim myFwd As Outlook.MailItem
          Set myFwd = Item.Forward
          myFwd.Recipients.Add "[email protected]"
          myFwd.Send
          Set myFwd = Nothing
      End Sub

    It's beyond the scope of this post to give detailed instructions, but here's a summary:

    1. Add the above code in the Visual Basic editor of Outlook (Alt-F11 should get you started).
    2. Be sure to change [email protected] to the address where you want the mail to go.
    3. Tell Outlook to run this code for each inbound message (Tools - Rules and Alerts - New Rule - Check messages when they arrive - Next - Yes - check "Run a script" - then select the script you just created).

    Now Outlook should automatically forward each email you receive, but it won't be blocked by the admin as an "auto-forward".
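
    For illustration only: a minimal sketch of one way to forward selectively, assuming the classification shows up as a marker such as "UNCLASSIFIED" in the subject line; the marker text and the destination address are assumptions to adjust for your environment:

      Sub AutoForwardUnclassified(Item As Outlook.MailItem)
          Dim myFwd As Outlook.MailItem
          ' Assumption: unclassified mail carries "UNCLASSIFIED" somewhere in its subject.
          If InStr(1, Item.Subject, "UNCLASSIFIED", vbTextCompare) > 0 Then
              Set myFwd = Item.Forward
              myFwd.Recipients.Add "[email protected]"
              myFwd.Send
              Set myFwd = Nothing
          End If
      End Sub

    The same rule setup described above applies; only the script selected under "Run a script" changes.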

    Read the article

  • 7zip many files from different folders?

    - by mafutrct
    I would like to add a large number of files with different names from different folders to a single 7zip archive using 7za.exe. This should be simple, but it turned out to be a major pain. I created a file that contains the paths (7za a out.7z @list.txt), but once there are too many (~100) files, it fails. Apparently the content of the argument file is pushed onto the command-line buffer, which is far too small (the number of files to add is about one million). Splitting the process up by adding the files one by one is not feasible due to the way 7za works: when adding the next file, it creates a copy of the archive, adds the file to the copy and finally replaces the original. This is terribly slow once the archive gets to a couple of hundred MB in size. So far I am using a combination of the two approaches by adding a dozen files each time in a loop, but it is an unreliable hack and still very slow. Is there a better way to do it? I tried to use 7zip wrapper DLLs (I'm a C# programmer), but none of them worked reliably, and it was repeatedly suggested that I just use 7za instead.

    Read the article

  • dd on entire disk, but do not want empty portion

    - by Jonathan Henson
    I have a disk, say /dev/sda. Here is fdisk -l:

      Disk /dev/sda: 64.0 GB, 64023257088 bytes
      255 heads, 63 sectors/track, 7783 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x0000e4b5

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *           1          27      209920   83  Linux
      Partition 1 does not end on cylinder boundary.
      /dev/sda2              27         525     4000768    5  Extended
      Partition 2 does not end on cylinder boundary.
      /dev/sda5              27         353     2621440   83  Linux
      /dev/sda6             353         405      416768   83  Linux
      /dev/sda7             405         490      675840   83  Linux
      /dev/sda8             490         525      282624   83  Linux

    I need to make an image to store on our file server for use in flashing other devices we are manufacturing, so I only want the used space (only about 4 GB). I want to keep the MBR etc., as this device should be boot-ready as soon as the copy is finished. Any ideas? I had previously been using dd if=/dev/sda of=[//fileserver/file], but at that time my master copy was on a 4 GB IDE flash drive.
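
    A hedged sketch of one way to capture only the used portion: the last used partition (/dev/sda8) ends at cylinder 525, and at 8,225,280 bytes per cylinder that is 525 x 8,225,280 = about 4.3 GB, so copying the first 525 cylinders grabs the MBR, the partition table and every partition while skipping the empty tail. The output path is a placeholder:

      dd if=/dev/sda of=/mnt/fileserver/master.img bs=8225280 count=525
      # restore later in the reverse direction (target disk name is a placeholder):
      # dd if=/mnt/fileserver/master.img of=/dev/sdX bs=8225280

    The count has to be rechecked whenever the partition layout changes, since it is tied to the end cylinder of the last partition.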

    Read the article

  • Create account for service

    - by Andy
    I am configuring a new server. The server runs Hudson, which is going to copy some files from this server to another. The other server is a virtual machine. Both run Windows Server 2012. Hudson is started on server A with the log on set to "Local System". When I come to the copy phase it says "Access denied". Changing the log on to "Administrator" works; however, I guess this is bad. I do not have much experience with user management. I tried to create my own hudson account on both servers A and B. I tried setting the service to log on as the hudson account in the Services management console, but the service doesn't start. How would you create an account for this particular service that has access to the shared folder on server B and can be used to start the service on server A? I guess I need two accounts with the same username and password on server A and server B? The folder on server B is shared with Everyone and the guest account is enabled.
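
    One hedged way to set this up, assuming a plain workgroup (no domain) and that the Hudson Windows service is registered under the name "hudson"; the account name, password, share path and service name below are placeholders:

      rem on BOTH server A and server B: create identical local accounts
      net user hudson SomeStr0ngPass /add

      rem on server B: grant that account rights on the shared folder (share + NTFS)
      icacls D:\SharedFolder /grant hudson:(OI)(CI)M

      rem on server A: run the Hudson service under the new account
      sc config hudson obj= ".\hudson" password= SomeStr0ngPass

    If the service then refuses to start, the account usually still needs the "Log on as a service" right (Local Security Policy > User Rights Assignment); services.msc grants it automatically when the account is set there instead.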

    Read the article

  • GPO startup script not copying files

    - by marcwenger
    I created a GPO startup script to execute for computers in a specific AD container. The script takes a file from the AD netlogon share and places it in a directory on the computer. A user with the right permissions (i.e. myself) can execute the script just fine and the file copies. But it doesn't work on startup - the file does not copy over from the AD server. The startup script should run as LocalSystem (am I right?). So the question is: why do the files not copy on startup? Could it be because of:

    - the permissions of the LocalSystem user?
    - reading the registry being problematic on startup?
    - obtaining files from the AD netlogon folder being problematic on startup?
    - or am I missing it completely?

    My test machine does have the registry key and local directories described in the script. I myself have standard user permissions on the test machine. The AD server is Windows 2008, the test client is Windows XP SP3 (and soon to be Windows 7, where I assume permissions issues will be inevitable).

      Dim wShell, fso, oraHome, tnsHome, key, srcDir
      Set wShell = WScript.CreateObject("WScript.Shell")
      Set fso = CreateObject("Scripting.FileSystemObject")

      key = "HKLM\Software\Oracle\Oracle_Home"
      On Error Resume Next
      oraHome = wShell.RegRead(key)

      If Err.Number = 0 Then
          tnsHome = oraHome + "\" + "network\admin\"
          srcDir = wShell.ExpandEnvironmentStrings("%logonserver%") + "\netlogon\UpdatedFiles\"
          fso.CopyFile srcDir + "file1.ext", tnsHome, True
      End If

    Side note: to ensure that the script is properly deployed, I purposely put some errors in the script, and on the next startup the error message appeared. So I know the GPO is deployed properly.

    Read the article

  • Transfer disk contents *without* cloning tools

    - by Chris Cummins
    Is it possible to "clone" a disk which contains programs by copying all of the disk contents (preserving file attributes) from source to destination disk, then unplugging the source disk and changing the drive letter of the destination disk to match that of the source?

    Context: I have a two-disk Windows 8 system with a system drive and a data drive. Recently the data drive developed a number of bad sectors leading to IO errors. I have been sent a replacement drive, so I simply need to clone the contents of the data drive onto the replacement. The drive contents include documents and media, user folders (My Documents and related), and some programs (games etc.).

    Problem: the bad sectors on the source disk cause most disk-cloning tools to fail with read errors. Attempted approaches include:

    - Disk clone from a live boot environment with Acronis True Image. Fails due to read errors.
    - Disk clone from a live boot environment with Clonezilla. Fails due to read errors.
    - Disk clone using Roadkil's Unstoppable Copier. Fails due to hardware timeouts in the HDD (the application hangs indefinitely).
    - A straightforward copy from source to destination disk using FreeFileSync (preserving file attributes and metadata). This succeeds.

    So at the moment I have a replacement disk which contains all of the data from the original disk. Now all I need to do is somehow get Windows to replace all references to the old disk with the new one. Is this possible by simply swapping the assigned drive letters? Any help would be greatly appreciated, thanks!
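
    Swapping the letters afterwards is plain disk-management work; a hedged sketch as a diskpart script, where the volume numbers are placeholders to be read off "list volume" first:

      rem swap-letter.txt - run with:  diskpart /s swap-letter.txt
      rem volume numbers are placeholders; check them with "list volume" beforehand
      select volume 2
      remove letter=D
      select volume 3
      assign letter=D

    With the old disk unplugged and the replacement holding the old letter, programs that stored absolute paths under that letter should keep resolving them.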

    Read the article

  • How to Upgrade PHP 5.2 to 5.3 on Windows Plesk Panel 8.2

    - by Jagat Sheth
    I need to upgrade the PHP version on my server because the new version of WordPress does not support PHP 5.2. My server runs Windows 2003 Standard Edition x64 with SP1, IIS 6.0, MySQL 4.1 and Plesk Panel 8.2. I have followed the steps listed in the Plesk KB article http://kb.parallels.com/6670, "How to update PHP 5 on a Windows server with Parallels Plesk Control Panel 8.x and 9.x installed". In order to upgrade PHP 5 to the necessary version (other than the one shipped with Parallels Plesk), the article says to perform the following steps:

    1. Stop the Plesk services ('Control Panel' and all that are included in the 'Plesk Run-Time' section).
    2. Rename the folder %plesk_dir%\Additional\PleskPHP5 to orig_PleskPHP5.
    3. Create a new folder %plesk_dir%\Additional\PleskPHP5.
    4. Download the necessary version of PHP, unzip it, and copy its contents to the newly created PleskPHP5 folder.
    5. Copy the file php.ini from the old orig_PleskPHP5 folder to the new one.
    6. Make sure the permissions are inherited.
    7. Start the Plesk services.
    8. Click the "Refresh" button in the Components Management section in Parallels Plesk Panel and check whether you can see the new PHP version there.

    After following these steps, when I open a phpinfo() page it shows me "The specified module could not be found". If anybody knows a solution, kindly help me; this is high priority. I would be very thankful to anyone who helps me solve this ASAP. Thanks and regards, Jagat Sheth

    Read the article

  • SMTP Client implementation [on hold]

    - by orif
    I'm implementing an SMTP client. What should the client do once it has already sent the "." at the end of the mail, but didn't receive "250 Ok"? This is how the conversation between the client and server looks:

      Server Response: 220 www.sample.com ESMTP Postfix
      Client Sending : HELO domain.com
      Server Response: 250 Hello domain.com
      Client Sending : MAIL FROM: <[email protected]>
      Server Response: 250 Ok
      Client Sending : RCPT TO: <[email protected]>
      Server Response: 250 Ok
      Client Sending : DATA
      Server Response: 354 End data with <CR><LF>.<CR><LF>
      Client Sending : Subject: Example Message
      Client Sending : From: [email protected]
      Client Sending : To: [email protected]
      Client Sending :
      Client Sending : TEST MAIL
      Client Sending :
      Client Sending : .
      Server Response: 250 Ok: queued as 23411
      Client Sending : QUIT

    I'm not sure what the client should do if it sends "." and doesn't receive the 250 Ok because of a possible network error. Was the "." received or not? Should the client resend the mail and maybe duplicate the item, or not resend and risk losing an important mail item? Thank you.
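
    This is the classic end-of-DATA ambiguity: after a lost reply the message may or may not have been queued. One common mitigation is to retry, but keep the Message-ID header identical across attempts so the receiving side (or the reader) can recognise the duplicate. A minimal sketch in Python with placeholder host and addresses, not a statement of what the protocol requires:

      import smtplib
      import uuid
      from email.message import EmailMessage

      msg = EmailMessage()
      msg["From"] = "sender@example.com"
      msg["To"] = "recipient@example.com"
      msg["Subject"] = "Example Message"
      msg["Message-ID"] = "<%s@example.com>" % uuid.uuid4()   # fixed once, reused on retries
      msg.set_content("TEST MAIL")

      for attempt in range(3):
          try:
              with smtplib.SMTP("mail.example.com", 25, timeout=30) as smtp:
                  smtp.send_message(msg)   # raises if the final 250 never arrives
              break                        # got 250 Ok, stop retrying
          except (smtplib.SMTPException, OSError):
              continue                     # resend with the identical Message-ID

    Whether to prefer a possible duplicate over a possible loss is a policy decision; for most mail, duplicating is the safer default.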

    Read the article

  • Replacing HD in an MacOS 10.6.8 server caused all shares to fail

    - by Cheesus
    I'm hoping someone might have a helpful suggestion about this problem. We have 2 Mac OS X servers available for file sharing (quad Xeons, 2 GB RAM, both 10.6.8). No. 1 is an Open Directory Master with 50+ user accounts; No. 2 has only 2 local accounts (/Local/Default) and looks to the OD Master for all user accounts (/LDAPv3/10.x.x.20/). Both servers have 3 internal HDs: the boot volume with only the Server OS and minimal apps, a 'DataShare' HD (500 GB) and a backup drive (500 GB). After upgrading the DataShare HD in server No. 2 from a small internal HD (500 GB) to a larger-capacity (2 TB) drive, users are unable to connect to shares on server No. 2. Users get the error "There are no shares available or you are not allowed to access them on the server". The process I followed was to use Carbon Copy Cloner to create an exact copy of the original data drive (it keeps all ownership data, UIDs, permissions, and last edit dates and times). Everything booted up OK, with no indication of any issues (the paths to the sharepoints look good). Notes during troubleshooting:

    - Server 1 is operating perfectly; all users can access shares and authenticate etc.
    - I've checked that the SACL (Server Access Control List) settings are OK.
    - On server 2, in the Server Admin app, I can see all the shares listed OK. The paths seem valid; I can disable/re-enable the shares with no errors.
    - On server 2, Workgroup Manager lists all the accounts from the OD Master in the LDAP directory view. All seems fine from here.

    Basically everything looks normal, but no file shares on server 2 can be accessed by regular users.

    Read the article

  • Vim: How to Install Airline?

    - by MunkyCheez
    I'm just getting started with Vim. It's a fun experience, but I've found it to be kind of overwhelming. I'm trying to get this plugin, vim-airline, installed, but I'm having a lot of trouble. The Installation section on the GitHub page simply states: "copy all of the files into your ~/.vim directory". Presumably, this means download the .zip, extract it, and copy all of those files into ~/.vim/. I did this, but Vim just starts up like normal, and running :help airline just gives "Sorry, no help for airline". I assume this means it isn't getting installed. Also, the status bar remains the same. I'm new to Vim and would really like to get this working. Thanks in advance!

    EDIT: I also tried putting the files into /usr/share/vim/vim73/. No dice.

    EDIT 2: I ran :helptags ~/.vim/doc and now the help page displays when I type :help airline, but I'm still not getting the plugin itself (the status bar). Vim looks the same, but it can now display the help page.
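
    A hedged sequence that usually gets a manual install working, assuming the GitHub zip was saved as vim-airline-master.zip in the current directory; note that airline only draws its bar when 'laststatus' is 2, so a correct install can still look invisible in a single window:

      mkdir -p ~/.vim
      unzip vim-airline-master.zip
      cp -r vim-airline-master/autoload vim-airline-master/plugin vim-airline-master/doc ~/.vim/
      echo "set laststatus=2" >> ~/.vimrc
      vim -c 'helptags ~/.vim/doc' -c 'q'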

    Read the article

  • 7-Zip many files from different folders?

    - by mafutrct
    I would like to add a large number of files with different names from different folders to a single 7-Zip archive using 7za.exe. This should be simple, but it turned out to be a major pain. I created a file that contains the paths (7za a out.7z @list.txt), but once there are too many (~100) files, it fails. Apparently the content of the argument file is pushed onto the command-line buffer [Edit: this was likely misinformation on my part; either way, it was not the reason], which is far too small (the number of files to add is more than one million). Splitting the process up by adding the files one by one is not feasible due to the way 7za works: when adding the next file, it creates a copy of the archive, adds the file to the copy and finally replaces the original. This is terribly slow once the archive gets to a couple of hundred MB in size. So far I am using a combination of the two approaches by adding a dozen files each time in a loop, but it is an unreliable hack and still very slow. Is there a better way to do it? I tried to use 7-Zip wrapper DLLs (I'm a C# programmer), but none of them worked reliably, and it was repeatedly suggested that I just use 7za instead.
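
    For illustration, the chunked-append workaround described above looks roughly like this as a PowerShell sketch; the 7za path, list file name and chunk size are assumptions:

      # feed 7za a fresh small list file per pass instead of one file at a time
      $files = Get-Content list.txt
      for ($i = 0; $i -lt $files.Count; $i += 1000) {
          $end = [Math]::Min($i + 999, $files.Count - 1)
          $files[$i..$end] | Set-Content chunk.txt
          & 7za a out.7z "@chunk.txt"
      }

    It still pays the archive-rewrite cost on every append, so it only softens the problem rather than solving it.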

    Read the article

  • Backup data from RAID 1 disk out of its server

    - by Doomsday
    I'm facing what should be a pretty easy problem, in my opinion. I've extracted a working disk from a RAID 1 array and I'm looking to copy only the data (the FS and RAID configuration don't matter) to another location (another FS). My problem is that I'm not able to mount this disk properly on another Linux machine. I first looked at the partition table:

      # fdisk -l /dev/sdc

      Disk /dev/sdc: 640.1 GB, 640135028736 bytes
      255 heads, 63 sectors/track, 77825 cylinders, total 1250263728 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdc1              63  1249535699   624767818+  fd  Linux raid autodetect
      /dev/sdc2      1249535700  1250017649      240975   fd  Linux raid autodetect
      /dev/sdc3      1250017650  1250258624      120487+  82  Linux swap / Solaris

    I understood I should use the Linux RAID (mdadm) tools. Once installed:

      # cat /proc/mdstat
      Personalities :
      md0 : inactive sdc1[1](S)
            624767744 blocks

      unused devices: <none>

    And some other information:

      # mdadm --examine /dev/sdc1
      /dev/sdc1:
                Magic : a92b4efc
              Version : 0.90.00
                 UUID : 8f292f54:7e5aef72:7e5ab5fd:b348fd05
        Creation Time : Mon Jun  2 03:39:41 2008
           Raid Level : raid1
        Used Dev Size : 624767744 (595.82 GiB 639.76 GB)
           Array Size : 624767744 (595.82 GiB 639.76 GB)
         Raid Devices : 2
        Total Devices : 2
      Preferred Minor : 0

          Update Time : Tue Feb  7 22:34:59 2012
                State : clean
       Active Devices : 2
      Working Devices : 2
       Failed Devices : 0
        Spare Devices : 0
             Checksum : a505b324 - correct
               Events : 15148

            Number   Major   Minor   RaidDevice State
      this     1       8        1        1      active sync   /dev/sda1
         0     0       8       17        0      active sync   /dev/sdb1
         1     1       8        1        1      active sync   /dev/sda1

    From here I've tried to mount, but I'm not comfortable with the md tools and how they work:

      # mount /dev/sdc1 /mnt/sdc1
      mount: unknown filesystem type 'linux_raid_member'
      # mount /dev/md0 /mnt/sdc1
      mount: /dev/md0: can't read superblock

    I've seen some options to alter the RAID array with mdadm, but I only want to copy the data off its filesystem before wiping it... Anyone have a clue?
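
    A hedged way to get at the filesystem without altering the data: stop the inactive array that has grabbed the member, then assemble it read-only from the single disk and mount it read-only (device names and mount point follow the attempts above):

      mdadm --stop /dev/md0                                   # release sdc1 from the inactive array
      mdadm --assemble --run --readonly /dev/md0 /dev/sdc1    # start the mirror degraded, read-only
      mount -o ro /dev/md0 /mnt/sdc1

    From there the data can be copied off with cp, rsync or tar like any mounted filesystem.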

    Read the article

  • Moving a site from IIs6 to IIS7.5

    - by Sukotto
    I need to move a site off of IIS 6 (Win Server 2003) and onto IIS 7.5 (Win Server 2008) as soon as possible. Preferably tomorrow. The site itself is a delightful mix of classic ASP (VBScript) and one-off ASP.NET (C#) applications (each ASP.NET app is in its own virtual dir and has a self-contained web.config). In case it's relevant, this is a sort of research site made up of 40 or 50 unconnected microsites. Each microsite is typically a simple form allowing a user to submit a form, which then runs a stored proc on a SQL Server DB and displays a chart and/or table of the results. There is very little security to worry about. The database connection info is in a central file (in the case of the classic ASP) or in each app's individual web.config (lots of duplication there).

    To add a little spice to the exercise:

    - I have no idea how to admin IIS.
    - The company no longer employs the sysadmin or the guys who set this thing up. (They're not going to employ me much longer either, but my sense of professional pride does not permit me to just walk away from this task.)
    - The servers are on mutually firewalled networks and I have to perform a convoluted, multi-step process to copy anything from one to the other.

    Would someone please point me to a crash-course tutorial for accomplishing the above? I have:

    - a complete copy of the site's filesystem on the new box
    - the 3rd party charting tool installed on the new system
    - a config.xml file from the "All Tasks - Save Configuration to a File" right-click menu. There doesn't seem to be a way to import it on the new system, however.

    The newer IIS Manager has a completely different UI and I'm totally lost. Please help.
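
    For the IIS 7.5 side, the IIS 6 configuration export generally cannot be imported directly, so the site and each microsite application usually have to be recreated; a hedged sketch with appcmd, where the names, paths and binding are placeholders:

      %windir%\system32\inetsrv\appcmd add site /name:"ResearchSite" /physicalPath:"D:\sites\research" /bindings:"http/*:80:research.example.com"
      %windir%\system32\inetsrv\appcmd add app /site.name:"ResearchSite" /path:"/microsite1" /physicalPath:"D:\sites\research\microsite1"

    Classic ASP and ASP.NET are separate role features on Server 2008, so both need to be enabled before the microsites will run.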

    Read the article

  • Bad HD and robocopy

    - by acidzombie24
    After my HD gave me CRC errors I wanted to copy one drive to another, so I picked up a new 1 TB HD. I am using the command:

      robocopy G: J: /MIR /COPYALL /ZB

    First it tried copying the file a few times (I didn't count; it's not in my window anymore) and got an access denied error, error 5. Then it tried again and locked up. I tried copying that specific file (14 MB) manually and Windows says "can't read from source file or disk". I started robocopy again. Hopefully it will skip a file after a failed attempt or two, but what options can I use to tell it to continue to the next file if one doesn't work? It looked like that is what it was doing, but for this last one it retried more than 4 times and finally locked up. I'm open to other copy solutions, though I do prefer built-in ones. I am using Windows 7. Also, how might I do this without the /MIR option - is /S /E good enough? (flag reference here)

    Edit: I see I can control retries with /R:, but I am still open to alternative solutions.
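
    A hedged variant of the same command that limits how long robocopy fights each unreadable file (one retry, one-second wait) and logs what was skipped; flags other than /R and /W are kept from the original:

      robocopy G: J: /MIR /COPYALL /ZB /R:1 /W:1 /LOG:C:\robocopy.log /TEE

    On the /MIR question: /E copies the full tree including empty folders (/S skips the empty ones); the difference from /MIR is that /E never deletes files on the destination that no longer exist on the source.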

    Read the article

  • Windows 2008 R2 file share - any way to "lock it down" outside of a 3rd party app?

    - by TheCleaner
    I have a 3rd party app that "makes a call" to write files to a file share on our network using the currently logged-in credentials of the Windows domain user. Meaning the 3rd party app doesn't pass its own credentials but simply issues a behind-the-scenes copy command to take a specified source file and copy/move it to the destination "repository" on the file share. The basic premise is that it keeps revisions/approvals for document control (think svn/git I guess, similar to this question: Lock down Windows folder to only be updatable by SVN). This all works fine... but here's my issue: I need a way to lock down the file share from being accessed/modified outside of the 3rd party app (meaning prevent Explorer/Word/Excel/etc. from getting to that share). I know I can do the following:

    - Make the share a hidden share ($) - this definitely helps. Most users would have zero clue how to get to such a share. Solves probably 95% of my issue.
    - Go one step further and set the "Hidden" attribute on the folders in the hidden share - this goes a little further in that even if a user knows the path to the hidden share, like \\server\hidden$, they still won't see folders in that share without changing their Explorer options to show hidden files/folders.

    Any other ideas on how I can lock this down? The users still need modify rights to this share/folders, since the 3rd party app relies on their Windows permissions to that location when copying the files into it. I can't really use 3rd party tools to password-protect the folder/share without causing the 3rd party app's functions to fail.

    Read the article

  • SVN client too old after I used sshfs

    - by johnlai2004
    I have an svn checked-out web application on a shared hosting Linux server. The server has an svn client that I can access via SSH. From my localhost, I did the following:

      > sshfs [email protected]:webappdir/ /media/webapp
      > cd /media/webapp
      > svn update
      svn: Working copy '.' locked
      svn: run 'svn cleanup' to remove locks (type 'svn help cleanup' for details)
      > svn cleanup

    It's been 15 minutes and the svn cleanup still isn't finished; I think it may have frozen. So then I did the following:

      > ssh [email protected]
      > cd webappdir
      > svn update
      svn: This client is too old to work with working copy '.'; please get a newer Subversion client

    So now I can't update my webappdir, because my /media/webapp is stuck on svn cleanup and the shared hosting server's svn client is out of date. I don't have privileges to install a new svn client on the shared hosting server. How do I get my svn update to work?

    Read the article

  • Thunderbird: export email account settings

    - by zpea
    I'd like to create a new profile for Thunderbird using the same mail accounts I already configured in my old profile. As it is quite a number of accounts, it would be great to have a way to export/import them instead of writing down the settings just to fill them in again in the new profile. Using web search and search here, I mainly found the following suggestions, which do not match what I need:

    - Copy the whole profile: not possible for me, as I don't want to copy other settings, the downloaded mail data etc., and the old profile broke when it ran out of space in the home folder anyway.
    - Use MozBackup: there seem to be several programs by that name (forks?). In any case, it's Windows-only and hence not an option (I am mostly on Linux and prefer platform-independent solutions anyway).
    - Use accountex: seems to do what I want, but it is not compatible with the current Thunderbird version (it supports only up to version 3.1).
    - Posts with various tips from 4 years ago: top results in the web search with the G. But they do not work in current versions of Thunderbird either.

    Did I overlook anything? After all, it doesn't sound like I'm looking for something nobody ever looked for.

    Read the article

  • Ubuntu: Network connection seems to fail after some time

    - by chrischu
    I just bought a Shuttle XS-35 barebone mini-PC, put a 1 TB WD hard drive and 2 GB of RAM into it, and installed Ubuntu on it. The machine will serve as a media server (streaming videos to my PS3) and as a web server for some small private projects. Now I wanted to copy my videos from my Windows 7 machine to the Ubuntu machine, and therefore created a Samba share on the Ubuntu machine. I tried copying the files with the standard Windows copy function and with SyncToy, but after some time (sometimes 5 copied files, sometimes 120 copied files) the Samba share just disappears. When that happens I can't reach the internet from the Ubuntu machine, although the network connection still seems to be fine (IP still there etc.). Between the machines sits a Linksys router. When I try to ping my router from the Ubuntu machine (after the connection stops working), only a very small subset of the packets actually get there (something around 20%). When I restart the Ubuntu machine, everything seems to work normally again. I have no idea where the problem lies. Does anybody have a clue? Thanks in advance!

    Read the article

  • Cannot access new folders created in my Apache2 document-root

    - by user235101
    I have tried to create a new folder 'test' in the document root of my Apache2 installation; however, whenever I try to access it from a web browser it gives me a 403 (Forbidden) error. My virtualhosts file:

      <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          ServerName REMOVED
          DocumentRoot /var/www/

          <Directory />
              Options FollowSymLinks
              AllowOverride All
              AuthType Digest
              AuthName "documentroot"
              AuthDigestProvider file
              AuthUserFile /etc/apache2/htpasswd
              Require user REMOVED
              AllowOverride Indexes
          </Directory>

          <Directory /var/www/>
              Options FollowSymLinks
              Options -Indexes FollowSymLinks MultiViews
              AllowOverride All
              Order allow,deny
              allow from all
          </Directory>

          <Directory /var/www/share/>
              Order Deny,Allow
              Allow from all
              Satisfy any
          </Directory>

          <Directory /var/www/REMOVED/>
              Order Deny,Allow
              Allow from all
              Satisfy any
          </Directory>

          <Directory /var/www/stream/>
              Order Deny,Allow
              Allow from all
              Satisfy any
          </Directory>

          <Directory /var/www/test>
              Order Deny,Allow
              Allow from all
              Satisfy any
          </Directory>

          ErrorLog /var/log/apache2/error.log

          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn

          CustomLog /var/log/apache2/access.log combined

          <Directory /var/www/REMOVED>
              AuthType Digest
              AuthName "rutorrent"
              AuthDigestDomain /var/www/REMOVED/ http://46.105.127.19/REMOVED
              AuthDigestProvider file
              AuthUserFile /etc/apache2/htpasswd
              Require valid-user
              SetEnv R_ENV "/var/www/REMOVED"
              AllowOverride Indexes
          </Directory>
      </VirtualHost>

    Image of the permissions: (screenshot not reproduced here)

    Other information:

    - If I create a new folder (and use chmod --reference to ensure it has the same permissions as an accessible folder), I get a 403 client-side.
    - If I copy the folder 'rapidleech' to the name 'rapidleech1', it will let me access 'rapidleech1' but no longer 'rapidleech', until I delete the copy.
    - In my logs I found nothing in error.log, and only the delivered 403 in access.log.
    - All the appropriate users are members of www-data.

    Read the article

  • Specify default group and permissions for new files in a certain directory

    - by mislav
    I have a certain directory containing a project shared by multiple users. These users use SSH to gain access to this directory and modify/create files. The project should only be writable by a certain group of users: let's call it "mygroup". During an SSH session, all files/directories created by the current user should by default be owned by group "mygroup" and have group-writable permissions. I can solve the permissions problem with umask:

      $ cd project
      $ umask 002
      $ touch test.txt

    The file "test.txt" is now group-writable, but it still belongs to my default group ("mislav", same as my username) and not to "mygroup". I can chgrp recursively to set the desired group, but I wanted to know whether there is a way to set some group implicitly, the way umask changes default permissions during a session. This specific directory is a shared git repo with a working copy, and I want git checkout and git reset operations to set the correct mask and group for new files created in the working copy. The OS is Ubuntu Linux.

    Update: a colleague suggests I should look into getfacl/setfacl of POSIX ACLs, but the solution below combined with umask 002 in the current session is good enough for me and is much simpler.
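
    The usual companion to umask 002 here is the setgid bit on the directories, which makes new files and subdirectories inherit the directory's group instead of the creator's primary group; a minimal sketch, run once by someone with rights over the tree:

      chgrp -R mygroup project                     # hand the existing tree to the shared group
      chmod g+s project                            # new entries inside inherit group "mygroup"
      find project -type d -exec chmod g+s {} +    # same for every existing subdirectory

    The per-session umask (or default ACLs via setfacl) still controls the write bit; the setgid bit only fixes the group ownership.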

    Read the article

  • Rkhunter reports file properties have changed

    - by CountMurphy
    I am running a fully updated LTS copy of Ubuntu Server. Today I ran rkhunter (as I do from time to time). This is the output I got:

      Warning: The file properties have changed:
      [15:52:25]          File: /bin/ps
      [15:52:25]  Current hash: f22991ec93ae966c856d367f42fc3d8a484bd827
      [15:52:25]  Stored hash : 1892268bf195ac118076b1b0f53e7a637eb6fbb3
      [15:52:25]  Current inode: 142902    Stored inode: 130894
      [15:52:25]  Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
      [15:52:25]  Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

      Warning: The file properties have changed:
      [15:52:33]          File: /usr/bin/ldd
      [15:52:33]  Current hash: f1e2ca5aa3a28994e2cebb64c993a72b7d97b28c
      [15:52:33]  Stored hash : 295d9cedb121a5e431a39a6d201ecd7ce5640497
      [15:52:33]  Current inode: 2236210    Stored inode: 2234359
      [15:52:33]  Current size: 5280    Stored size: 5279
      [15:52:33]  Current file modification time: 1331165514 (07-Mar-2012 16:11:54)
      [15:52:33]  Stored file modification time : 1295653965 (21-Jan-2011 15:52:45)

      Warning: The file properties have changed:
      [15:52:37]          File: /usr/bin/pgrep
      [15:52:37]  Current hash: 3eada9a96760f3e2c9111cfe32901d1432813c1d
      [15:52:37]  Stored hash : ce265d0db9964b173fe5036f703a9b8d66e55df3
      [15:52:37]  Current inode: 2229646    Stored inode: 2224867
      [15:52:37]  Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
      [15:52:37]  Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

      Warning: The file properties have changed:
      [15:52:41]          File: /usr/bin/top
      [15:52:41]  Current hash: 6be13737d8b0950cea2f1ae3a46d4af713dbe971
      [15:52:41]  Stored hash : c7b495ecef3982eeb6f08a511861b1a1ae8775e6
      [15:52:41]  Current inode: 2229629    Stored inode: 2224862
      [15:52:41]  Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
      [15:52:41]  Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

      Warning: The file properties have changed:
      [15:52:53]          File: /usr/sbin/cron
      [15:52:53]  Current hash: e783ca973f970aa8a4bf5edc670e690b33914c3d
      [15:52:53]  Stored hash : 4718257a8060736b9058aed025c992f02a74a5a7
      [15:52:53]  Current inode: 2224719    Stored inode: 2228839
      [15:52:54]  Current file modification time: 1330965568 (05-Mar-2012 08:39:28)

    There were also a few others I left out. Has my server been rooted? I am running fail2ban and do monitor failed SSH logins; nothing has come up. Could someone compare these hashes to their copy of Ubuntu Server (LTS)? Please tell me these are false positives...

    Edit: is there something else like rkhunter I can run for a second scan?
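
    A hedged way to cross-check before assuming a rootkit: verify the flagged binaries against the package checksums, and only refresh rkhunter's baseline once every change is explained (routine security updates regularly replace ps, top, pgrep, ldd and cron). The package names below are the usual owners of those paths on Ubuntu:

      sudo apt-get install debsums
      sudo debsums -c procps libc-bin cron    # list files in those packages whose md5sums no longer match
      sudo rkhunter --propupd                 # only after the changes are accounted for: rebuild the baseline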

    Read the article

  • Safe use of Update-FormatData?

    - by Steve B
    In a custom PowerShell module, I have this code at the top of my module definition:

      Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml")

    This works fine, as all .ps1xml files are loaded. However, the module is sometimes loaded using Import-Module MyModule -Force (actually, this happens in the install script of the module). In this case, the call to Update-FormatData fails with this error:

      Update-FormatData : There were errors in loading the format data file:
      Microsoft.PowerShell, c:\pathto\myfile.Types.ext.ps1xml : File skipped because it was already present from "Microsoft.PowerShell".
      At line:1 char:18
      + Update-FormatData <<<< -AppendPath "c:\pathto\myfile.Types.ext.ps1xml"
          + CategoryInfo          : InvalidOperation: (:) [Update-FormatData], RuntimeException
          + FullyQualifiedErrorId : FormatXmlUpateException,Microsoft.PowerShell.Commands.UpdateFormatDataCommand

    Is there a way to safely call this command? I know I can call Update-FormatData with no parameters and it will update any known .ps1xml file, but this would only work if the file has already been loaded. Can I list the loaded format data files somewhere?

    Here is a bit of background: I'm building a custom module that is installed using a script. The install script looks like this:

      [CmdletBinding(SupportsShouldProcess=$true, ConfirmImpact="High")]
      param()
      process {
          $target = Join-Path $PSHOME "Modules\MyModule"
          if ($pscmdlet.ShouldProcess("$target", "Deploying MyModule module")) {
              if (!(Test-Path $target)) {
                  New-Item -ItemType Directory -Path $target | Out-Null
              }
              Get-ChildItem -Path (Split-Path ((Get-Variable MyInvocation -Scope 0).Value).MyCommand.Path) |
                  Copy-Item -Destination $target -Force
              Write-Host -ForegroundColor White @"
      The module has been installed.
      You can import it using:
          Import-Module MyModule
      Or you can add it in your profile ($profile)
      "@
              Write-Warning "To refresh any open PowerShell session, you should run ""Import-Module MyModule -Force"" to reload the module"
              Import-Module MyModule -Force
              Write-Warning "This session has been refreshed."
          }
      }

    MyModule defines, as its first statement, this line:

      Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml")

    As I updated my $profile to always load this module, Update-FormatData had already been called when I ran the install script. The install script then force-imports the module, which runs that first statement again, and that second Update-FormatData call is the one that fails.
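
    A hedged sketch of one way to make the call tolerate a forced re-import: append the files one at a time and swallow only the "already present" error; the message pattern is an assumption based on the error text above:

      Get-ChildItem -Path $psscriptroot -Filter *.ps1xml | ForEach-Object {
          try {
              Update-FormatData -AppendPath $_.FullName -ErrorAction Stop
          }
          catch {
              # ignore files this session has already loaded; surface anything else
              if ($_.Exception.Message -notmatch 'already present') { throw }
          }
      }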

    Read the article
