Search Results

Search found 24334 results on 974 pages for 'directory loop'.

Page 695 of 974

  • Using robocopy and excluding multiple directories

    - by GorrillaMcD
    I'm trying to copy some directories from a server before I restore from backup (my latest backup was corrupt, so I have to use an older one :( ). I'm in the Windows Recovery Environment and have access to the server's file system G:\ and my backup media C:\. But, since I'm more familiar with Linux, I'm having a bit of trouble with the command line in Windows, specifically robocopy. I want to copy multiple directories (maintaining the same directory structure) from G:\ to C:\ while excluding others (namely, the Windows and Program Files folders). I can't figure out the syntax for the /XD option. I was hoping to do something like: robocopy G: C:\backup /CREATE /XD "dir1","dir2", ...
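
    For reference, a sketch of how the /XD option is usually written: the excluded directories are listed space-separated (not comma-separated), and quoted if they contain spaces. The /E flag (copy subdirectories, including empty ones) is shown only as an illustration; keep whatever copy options fit the actual job:

        robocopy G:\ C:\backup /E /XD "G:\Windows" "G:\Program Files"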

    Read the article

  • Loss of sound in Ubuntu 12.04

    - by Leo Simon
    I'm running Linux E6520 3.2.0-56-generic #86-Ubuntu SMP Wed Oct 23 09:20:45 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux on a Dell Latitude E6530. (This is a new machine; I ran the same version of Linux on an older machine for a year without this happening.) I've been losing sound regularly, though I have not been able to isolate the trigger. I've scoured the web on this subject, in particular https://help.ubuntu.com/community/SoundTroubleshootingProcedure and "Audio stopped working suddenly in 12.04". Nothing from the first site worked for me. From the second site, I learned enough to fix the problem when it happens, but nothing on the web has helped me figure out why it is happening in the first place. Patching together advice from the web, and with some blind luck, I've found that the following steps seem to restore sound:
        pulseaudio --kill
        pulseaudio --start
    then, in pavucontrol -> Output Devices, click the "Mute audio" icon (which mutes audio), and click it again (which unmutes audio). This obviously doesn't make sense: audio wasn't muted in the first place, but somehow, magically, toggling mute off and on seems to reset something. Can anybody suggest, from this information, (1) why sound is disappearing in the first place (it seems as though something is getting muted at the system level, but I don't know what), and (2) a simpler (command-line/script) way of restoring sound; in particular, is it possible to reset pavucontrol from the command line? Some other pieces of information that may be of use: the problem is clearly happening at the system level, since I've set up a clean new user and that user has the same problem, so user-level fixes like deleting the .pulse directory won't (and don't) help; and sound works fine in Windows (dual-boot), so it's not a hardware problem. Any help/suggestions on this would be most appreciated.
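
    In case it helps, a minimal sketch of a script that mimics those steps entirely from the command line. The @DEFAULT_SINK@ name assumes a reasonably recent pactl; substitute the sink's name or index (from "pactl list short sinks") if it isn't recognized:

        #!/bin/sh
        # restart PulseAudio, then toggle mute off and on on the default sink
        pulseaudio --kill
        pulseaudio --start
        pactl set-sink-mute @DEFAULT_SINK@ 1   # mute
        pactl set-sink-mute @DEFAULT_SINK@ 0   # unmute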

    Read the article

  • Apache Issues on Windows Server 2008

    - by dlackey
    I'm looking to reinstall Apache 2.2.25, since I keep getting these errors in the Windows Application log every 2-5 minutes:
        Faulting application name: httpd.exe, version: 2.2.25.0, time stamp: 0x51dd049c
        Faulting module name: zlib1.dll, version: 1.2.3.0, time stamp: 0x4790446a
        Exception code: 0xc0000005
        Fault offset: 0x00002bad
        Faulting process id: 0x38e8
        Faulting application start time: 0x01cfbfd70cdfbc4f
        Faulting application path: C:\Apache2\bin\httpd.exe
        Faulting module path: C:\Apache2\bin\zlib1.dll
        Report Id: 745f20de-2bca-11e4-bd5d-002590f28d7e
    If the new install doesn't work, or if there are some "issues", can I simply restore the Apache2 directory from backup and be back where I started? I also thought about renaming the current install to something like c:\apache2_old; if something fails, I can delete the new install and rename c:\apache2_old back to c:\apache2. What do you all think?
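
    For what it's worth, the rename-based rollback described above boils down to a couple of standard commands, run with the Apache service stopped (a sketch, using the paths from the post):

        rem before reinstalling: set the current install aside
        ren C:\Apache2 Apache2_old
        rem if the reinstall goes badly: remove it and put the old tree back
        rmdir /s /q C:\Apache2
        ren C:\Apache2_old Apache2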

    Read the article

  • Printer Canon MP540 finally added successfully but doesn't print? (Attached debug log)

    - by NES
    I tried to set up my printer in Ubuntu 10.10. I had to use a special guide to install it, because Canon's packages depend on libcupsys2 while Ubuntu now ships libcups2; I followed this guide in reference to this advice. The first problem was that the Ubuntu "add printer" dialog asked for root credentials; the workaround suggested in Launchpad (creating a root password) worked, and I added the printer. It's available for printing. Then I got a "cups insecure filter" error message which prevented me from printing. That could be solved by setting the needed root ownership and permissions on the /usr/lib/cups/filter/ directory; the error message disappeared after restarting the cups service. Now it should work, but it doesn't. The main problem now is that the printer seems to be properly set up, but when I try to print a document, the printer icon appears briefly in the GNOME panel and a print job shows up in the queue and is marked completed, yet the printer doesn't print. I attached the debug log provided by the printer error control; I had to upload it to another site, since it was too big for the question body here. Perhaps someone can identify the problem with it? Note: I know that it once worked fine with an older release of Ubuntu, but I'm not sure which version that was.
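
    For context, the "cups insecure filter" warning mentioned above is normally cleared by making the filter files root-owned and not world-writable; a sketch of roughly what that amounts to (not necessarily the exact commands used in the post):

        sudo chown -R root:root /usr/lib/cups/filter/
        sudo chmod 755 /usr/lib/cups/filter/*
        sudo service cups restart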

    Read the article

  • Secure against c99 and similar shells

    - by Amit Sonnenschein
    I'm trying to secure my server as much as I can without limiting my options, so as a first step I've disabled dangerous functions with PHP's disable_functions = "apache_child_terminate, apache_setenv, define_syslog_variables, escapeshellarg, escapeshellcmd, eval, exec, fp, fput, ftp_connect, ftp_exec, ftp_get, ftp_login, ftp_nb_fput, ftp_put, ftp_raw, ftp_rawlist, highlight_file, ini_alter, ini_get_all, ini_restore, inject_code, mysql_pconnect, openlog, passthru, php_uname, phpAds_remoteInfo, phpAds_XmlRpc, phpAds_xmlrpcDecode, phpAds_xmlrpcEncode, popen, posix_getpwuid, posix_kill, posix_mkfifo, posix_setpgid, posix_setsid, posix_setuid, posix_setuid, posix_uname, proc_close, proc_get_status, proc_nice, proc_open, proc_terminate, shell_exec, syslog, system, xmlrpc_entity_decode". But I'm still fighting directory traversal: using a PHP shell like c99 I can browse from my /home directory to anywhere on the disk. How can I limit it once and for all?
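
    One php.ini directive commonly used for exactly this kind of confinement, not mentioned in the post and shown here only as a hedged illustration (the paths are placeholders and would normally be set per vhost), is open_basedir:

        ; restrict PHP file access to the listed directory trees
        open_basedir = /home/username/:/tmp/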

    Read the article

  • Making document storage in Sharepoint a breeze (leave the Web UI behind)

    - by deadlydog
    Hey everyone, I know many of us regularly use Sharepoint for document storage in order to make documents available to several people, keep them version controlled, etc.  Doing this through the Web UI can be a real headache, especially when you have multiple documents you want to modify or upload, or when IE isn't your default browser.  Luckily we can access the Sharepoint library like a regular network drive if we like. Open Sharepoint in Internet Explorer (other browsers don't support the Open with Explorer functionality), navigate to wherever your documents are stored, choose the Library tab, and then click Open with Explorer. This will open the document storage in Explorer and you can interact with the documents just as if they were on any other network drive. :)  This makes uploading large numbers of documents or directory structures super easy (a simple copy-paste), and modifying your files nice and easy. As an added bonus, you can drag and drop that location from the address bar in Explorer to the Favorites menu so that it's always easily accessible, and you can leave the Sharepoint Web UI behind completely for modifying your documents.  Just click on the new favorite to go straight to your documents. You can even map this folder location as a network drive if you want it to show up as another drive (e.g. an N: drive). I hope you found this as useful as I did!
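
    If you prefer the drive-letter route, the same mapping can also be done from a command prompt via the WebDAV client; a sketch only, with a placeholder server and library name, and the exact URL form depends on your SharePoint setup:

        net use N: "https://sharepoint.example.com/sites/TeamSite/Shared Documents" /persistent:yes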

    Read the article

  • df says disk is full, but it is not

    - by Chris
    On a virtualized server running Ubuntu 10.04, df reports the following:
        # df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             7.4G  7.0G     0 100% /
        none                  498M  160K  498M   1% /dev
        none                  500M     0  500M   0% /dev/shm
        none                  500M   92K  500M   1% /var/run
        none                  500M     0  500M   0% /var/lock
        none                  500M     0  500M   0% /lib/init/rw
        /dev/sda3             917G  305G  566G  36% /home
    This is puzzling me for two reasons: 1.) df says that /dev/sda1, mounted at /, has a 7.4 gigabyte capacity, of which only 7.0 gigabytes are in use, yet it reports / being 100 percent full; and 2.) I can create files on /, so it clearly does have space left. Possibly relevant is that the directory /www is a symbolic link to /home/www, which is on a different partition (/dev/sda3, mounted at /home). Can anyone offer suggestions on what might be going on here? The server appears to be working without issue, but I want to make sure there's not a problem with the partition table, file systems or something else which might result in implosion (or explosion) later.

    Read the article

  • Force ID of user created by apt-get

    - by Bart van Heukelom
    Context: I'm automatically installing postgresql-9.1 on an Ubuntu server with apt-get. This creates the required postgres user. The Postgres data is on an external volume that survives reinstalls. This data is obviously owned by the postgres user. The problem I'm having is that the ownership is not recorded under the name postgres, but under the UID that postgres had at creation time. When the server is reinstalled, postgres sometimes gets a different UID, and no longer owns the data directory, and thus does not work. Question: Can I force the UID of the user postgres created by apt-get to something fixed? Or is there another way to solve my problem? (As you may have deduced, this is on Amazon EC2 with the data on an EBS volume)
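
    One approach along the lines the question asks about is to create the postgres user (and group) with a fixed UID yourself before installing the package, so the package scripts reuse it instead of picking the next free UID; a sketch (UID 120 is arbitrary, and this assumes the package's maintainer scripts reuse an existing postgres account, as Debian/Ubuntu packages normally do):

        adduser --system --group --uid 120 \
            --home /var/lib/postgresql --shell /bin/bash postgres
        apt-get install -y postgresql-9.1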

    Read the article

  • How can I get rsync to ignore missing files?

    - by Joe Casadonte
    I'm executing a command like the following against several different systems:
        $ rsync -a -v [email protected]:'/path/to/first/*.log path/to/second.txt' /dest/folder/0007/.
    Sometimes *.log does not exist, and that's OK, but rsync generates the following error:
        receiving file list ...
        rsync: link_stat "/path/to/first/*.log" failed: No such file or directory (2)
        done
    Is there any way to suppress that? The only way I can think of is to use include and exclude filters, which just seems like a PITA to me. Thanks!
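
    For reference, the filter approach alluded to above would look roughly like this (a sketch with a placeholder host; it transfers the whole source directory and lets the filters pick out the *.log files, so rsync never has to expand a glob that may not match; the second.txt path would still need its own transfer, and recent rsync versions also have an --ignore-missing-args option that addresses this more directly):

        rsync -a -v --include='*.log' --exclude='*' \
            user@host:/path/to/first/ /dest/folder/0007/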

    Read the article

  • I keep getting an OpenSSL header version not found error when compiling OpenSSH on Debian Squeeze

    - by Romoku
    I built OpenSSL 1.0.0d with:
        ./config shared no-threads zlib
    It installed fine to the default /usr/local/ssl. I then downloaded OpenSSH 5.8p2 and ran ./configure, but it keeps giving me an "OpenSSL version header not found" error even when I set --with-ssl-dir=. I've tried it with the arguments /usr/local/ssl/include, /usr/local/ssl/include/openssl, /usr/include and /usr/local/ssl/lib. I looked in config.log and found "error: openssl/opensslv.h: no such file or directory", which makes little sense since I pointed OpenSSH to where it is stored. My /etc/ld.so.conf contains:
        include /usr/local/ssl/lib
    I'm at a loss at this point. Answer (maybe): because I am an idiot. In /etc/ld.so.conf, the line "include /usr/local/ssl/lib" is incorrect; it should be just "/usr/local/ssl/lib", and it needs to appear before the first "include" line.
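
    To make that fix concrete, a sketch of what the corrected /etc/ld.so.conf might look like on a typical Debian-style system (the ld.so.conf.d include line is an assumption about the file's existing contents), followed by refreshing the linker cache:

        /usr/local/ssl/lib
        include /etc/ld.so.conf.d/*.conf

    and then:

        ldconfig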

    Read the article

  • Google Chrome shows an error message box every time it starts.

    - by Benjamin
    I removed the Google Chrome 10.x dev version and installed the 8.x stable version again. Since installing 8.x, Chrome shows this message box every time it starts:
        Your profile can not be used because it is from a newer version of Google Chrome. Some features may be unavailable. Please specify a different profile directory or use a newer version of Chrome.
    Which profile is it referring to? How do I fix this? Thanks.
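
    One way to act on the error's own suggestion of "a different profile directory" is to start Chrome pointed at a fresh profile location (a sketch; the path is just a placeholder, and the existing profile is left untouched):

        chrome.exe --user-data-dir="C:\ChromeProfiles\Fresh"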

    Read the article

  • Organize code in Chef: libraries, classes and resources

    - by ColOfAbRiX
    I am new to both Chef and Ruby and I am implementing some scripts to learn them. Now I am facing the problem of how to organize my code: I have created a class in the libraries directory and used a custom namespace to maintain order. This is a simplified example of my file:
        # ~/chef-repo/cookbooks/mytest/libraries/MyTools.rb
        module Chef::Recipe::EP
          class MyTools
            def self.print_something( text )
              puts "This is my text: #{text}"
            end
            def self.copy_file( dir, file )
              cookbook_file "#{dir}/#{file}" do
                source "#{dir}/#{file}"
              end
            end
          end
        end
    From my recipe I call both methods:
        # ~/chef-repo/cookbooks/mytest/recipes/default.rb
        EP::MyTools.print_something "Hello World!"
        EP::MyTools.copy_file "/etc", "passwd"
    print_something works fine, but with copy_file I get this error:
        undefined method `cookbook_file' for Chef::Recipe::EP::FileTools:Class
    Clearly I don't know how to create libraries in Chef, or I'm missing some basic assumption. Can anyone help me, please? I am looking for a solution to this problem (how to organize my code and libraries, and how to use resources inside classes) or, better, good Chef documentation, as I find the official documentation unclear and disorganized, so researching through it is a pain.

    Read the article

  • Linux: set up media server to stream video via the Internet?

    - by Hassan
    How do I set up a media server on Linux that streams video over the internet? Is it easy to do? I want a server that will actually encode video in real time, so that it can stream over sometimes slow or unreliable networks; basically, a server that works over the internet. I have a directory with a bunch of video files and want to make it accessible to myself remotely. For other situations I have found great and useful software (such as PS3 Media Server); I'd like to find something equally useful for streaming video over the internet.

    Read the article

  • find files whose name is smaller or greater than a given parameter

    - by Tzury Bar Yochay
    Say that in a given directory I have:
        tzury@x200:~/Desktop/sandbox$ ls -l
        total 20
        drwxr-xr-x 2 tzury tzury 4096 2011-03-09 10:19 N00.P000
        drwxr-xr-x 2 tzury tzury 4096 2011-03-09 10:19 N00.P001
        drwxr-xr-x 2 tzury tzury 4096 2011-03-09 10:19 N00.P002
        drwxr-xr-x 2 tzury tzury 4096 2011-03-09 10:19 N00.P003
        drwxr-xr-x 2 tzury tzury 4096 2011-03-09 10:19 N00.P004
        drwxr-xr-x 2 tzury tzury 4096 2011-03-09 10:19 N01.P000
        drwxr-xr-x 2 tzury tzury 4096 2011-03-09 10:19 N01.P001
        drwxr-xr-x 2 tzury tzury 4096 2011-03-09 10:19 N01.P002
    I am looking for a bash way to grab the list of files whose name is either greater or smaller than a given parameter, for instance:
        $ my_finder lt N00.P003    (shall return N00.P000, N00.P001 and N00.P002)
        $ my_finder gt N00.P003    (shall return N00.P004, N01.P000, N01.P001 and N01.P002)
    I was thinking of iterating over "for name in $(ls)" with a test like "$name != $2", but I believe there are more elegant ways of doing so.
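
    A sketch of what such a my_finder script could look like, using bash's lexicographic string comparison inside [[ ]] (assumed to be run from the directory of interest):

        #!/bin/bash
        # usage: my_finder lt|gt NAME
        op=$1
        ref=$2
        for name in *; do
            if [[ $op == lt && $name < $ref ]] || [[ $op == gt && $name > $ref ]]; then
                echo "$name"
            fi
        done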

    Read the article

  • Is it possible to avoid umask 0002?

    - by Anatoly
    Is it possible to automatically give one user the ability to modify files (and folders, recursively) created by another user within one specified folder (let's say "shared"), on the basis of both users belonging to the same secondary group (let's say "coworkers")? I've tried to achieve this using ACLs, but with no success; it seems that the umask wipes out the corresponding bits. I'm on FreeBSD 8.1 (but it seems this problem applies to other *NIX systems as well). Googling this problem (people often refer to it as the "umask per directory" problem) gives this as the most relevant link: http://old.nabble.com/ACLs,-umask-and-shared-directories-td27820947.html which is not very promising. So I want to ask the ServerFault community: is it possible at all?

    Read the article

  • running autobench (httperf)

    - by Matthew
    So I ran apt-get install httperf on my system and I can now run httperf. But how can I run autobench? I downloaded the file and unarchived it, and if I go into it and run autobench it says "-bash: command not found". I think it's a Perl script, but if I run "perl autobench", it says:
        root@example:/tmp/autobench-2.1.2# perl autobench
        Autobench configuration file not found - installing new copy in /root/.autobench.conf
        cp: cannot stat `/etc/autobench.conf': No such file or directory
        Installation complete - please rerun autobench
    Even if I run it again, it says the same thing.

    Read the article

  • How can I retrieve a MS SQL Express Database from a non-booting computer?

    - by Redandwhite
    A client has a very important database that has not been backed up in 6 months, and the PC has now failed: the Windows directory is corrupt and the computer will not boot. It had a Microsoft SQL Server Express 2005 database on it. I have access to the hard drive by booting with an Ubuntu live CD, but I am not sure whether I can find the database, what I am looking for, or where to look. The dead machine had Windows XP on it.
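
    As a pointer on where to look: SQL Server 2005 Express normally keeps its data (.mdf) and log (.ldf) files under the instance's Data directory, typically below C:\Program Files\Microsoft SQL Server. From the Ubuntu live CD, something like the following should locate them (the mount point is a placeholder for wherever the Windows disk is mounted):

        find "/media/windows/Program Files/Microsoft SQL Server" \
            -iname '*.mdf' -o -iname '*.ldf'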

    Read the article

  • Different configurations for ssh client depending on ip address or hostname

    - by John Smith Optional
    I have this in my ~/.ssh/config file:
        Host 12.34.56.78
            IdentityFile ~/.ssh/my_identity_file
    When I ssh to 12.34.56.78, everything works fine: I'm asked for the passphrase for "my_identity_file" and I can connect to the server. However, sometimes I'd also like to ssh to another server, and whatever the server, if I do "ssh [email protected]" I'm also asked for the passphrase for "my_identity_file" (even though the server has a different IP address). This is very annoying, because I don't have the public key for this file set up on all my servers. I'd like to connect to this other server (an old shared hosting account) with a password, and now I can't. How can I use key authentication with only that one server, and keep using password authentication by default for servers that aren't listed in my ~/.ssh/config? Thanks for your help.
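
    For reference, a sketch of one common way to arrange ~/.ssh/config so the key is only offered to that one host and everything else defaults to password authentication (the option names are standard ssh_config keywords; adjust to taste):

        Host 12.34.56.78
            IdentityFile ~/.ssh/my_identity_file
            IdentitiesOnly yes
            PreferredAuthentications publickey,password

        Host *
            PreferredAuthentications password,keyboard-interactive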

    Read the article

  • Can't get ZSH working on CentOS

    - by waveslider
    I've been using zsh for a couple of years now on Ubuntu and really like it a lot. I've installed it on our production server as well, which is running CentOS 5.2. However, I just installed it via yum on a new VM I created to use as a development box, to replicate our production box as closely as possible. Although yum shows that it is definitely installed (/bin/zsh) and it is set as my shell, it does not appear to be working. Instead of creating the .zshrc and .profile files in my home directory, it created a .tcshrc file. Also, I did not receive the default configuration menu that is normally displayed the first time you run zsh, and none of the features (like advanced tab comple

    Read the article

  • Is rsync --delete safe in case of disk failure

    - by enedene
    I have two data hard drives on my Linux server, and I use the second as a backup for the first. I use rsync for that purpose, for example:
        rsync -r -v --delete /media/disk1/ /media/disk2/
    This copies every file/directory from /media/disk1/ to /media/disk2/, but also deletes anything on disk2 that isn't on disk1. For example, say files A and B (but not C) are on disk1, while disk2 has C but neither A nor B. After the command, disk2 would have files A and B, and file C would be deleted, just as on disk1. Now, a rather disastrous scenario has crossed my mind: what if disk1 dies? The system would continue to work, since the system files are on my system disk, but when rsync tries to back up my data from the broken disk1 to disk2, would it delete all the files on disk2 because it can't read anything on disk1? Is this a possible scenario, or is there protection against it built into rsync?
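
    Independently of how rsync itself behaves, a common safeguard is to refuse to run the backup when the source isn't actually mounted and populated; a sketch, assuming the disks are mounted at the paths used above:

        #!/bin/sh
        # only run the backup if disk1 is mounted and non-empty
        if mountpoint -q /media/disk1 && [ -n "$(ls -A /media/disk1)" ]; then
            rsync -r -v --delete /media/disk1/ /media/disk2/
        else
            echo "disk1 missing or empty - skipping backup" >&2
            exit 1
        fi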

    Read the article

  • How can I trace NTFS and Share Permissions to see why I can (or can't) write a file

    - by hometoast
    I'm trying to track down WHY I can write in a folder that, by my best estimation, I should not be able to write to. The folder is shared with "Everyone" having "Full Control", while the files are more restrictive. My best guess is that there's some sort of sub-group membership that's allowing me to write, but the nesting of groups in our Active Directory is pretty extensive. Is there a tool that will tell me which of the ACL entries allowed or disallowed my writing a file in a folder? The Effective Permissions dialog is marginally helpful, but what I need is something like an "NTFS ACL Trace Tool", if such a thing exists.

    Read the article

  • Windows 2008 startup script will not run?

    - by larsks
    I am trying to get a very simple batch script to run when my Windows 2008 Server (R2) system starts up. I have added the script to the "Startup Scripts" in the local group policy by running gpedit.msc, and I see the script listed under Windows Settings/Scripts (Startup/Shutdown)/Startup when I run rsop.msc, but the script is not being executed. The "Last Executed" column in rsop is empty even after a reboot, and a file that should be created by the script is never created. At the moment, the entire contents of the script are:
        rem Check if this script is running.
        date /t > c:\temp\flag
    The target directory (c:\temp) exists. The script is called c:\scripts\startup.bat, and works fine if I run it by hand.

    Read the article

  • How do I set the TEMP environment variable for the "Network Service" user?

    - by Chris Phillips
    We have a system that uses Path.GetTempFileName and Path.GetTempPath calls to work with temporary files fairly frequently. This system also runs as the "Network Service" user. We're finding that we're running out of room on the C drive (for unrelated reasons; our temp files are cleaned up correctly) and would like to move the temp directory to a different drive. The easiest solution seems to be to change the TMP or TEMP environment variables for the Network Service user, but I only seem to be able to set my own user's variables, or the "system" variables, which are overridden by the Network Service user profile. How do I set these variables for the Network Service user?
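
    For what it's worth, the Network Service account's per-user environment block lives in its loaded profile hive under HKEY_USERS\S-1-5-20\Environment (S-1-5-20 is the well-known SID for NETWORK SERVICE), so one hedged approach is to set the variables there and then restart the service; the target path below is a placeholder:

        reg add "HKU\S-1-5-20\Environment" /v TEMP /t REG_EXPAND_SZ /d "D:\Temp" /f
        reg add "HKU\S-1-5-20\Environment" /v TMP  /t REG_EXPAND_SZ /d "D:\Temp" /f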

    Read the article

  • Using Quest AD cmdlets in an imported session

    - by ASTX813
    We are trying to use remote PowerShell on our Exchange system:
        $rs = New-PSSession -ConnectionUri <uri> -ConfigurationName Microsoft.Exchange -Authentication Basic -Credential <username> -AllowRedirection
        Import-PSSession $rs
    After these commands, we can run Exchange cmdlets and all is well. However, we're unable to run any Quest Active Directory cmdlets. Yes, Quest is installed on the remote machine (as well as our local machines), and yes, we are able to run those commands when running PowerShell locally on the server. I tried -AllowClobber, but that didn't have any effect. Is there a way to get access to QAD?

    Read the article

  • Eliminating Windows 7 user tracking registry writes

    - by caffiend
    Windows 7 continues the practice of saving user actions in the registry. I'd like to disable this practice, both to avoid registry-file fragmentation and SSD wear, and because I'm uncomfortable with programs being able to quickly analyze my usage habits. Even with the "Turn off user tracking" policy enabled, there are at least two areas that still contain user tracking:
        HKCU\Software\Classes\Local Settings\MuiCache
    This key stores a cache of most-recently accessed strings, including the descriptions of most-recently run executables.
        HKCU\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\BagMRU
    This key stores the most recently viewed folders along with timestamps. Are there additional policy settings or registry entries that disable these writes? If not, is it possible to make these entries volatile? Would it be practical to create a temporary hive (e.g., on a ramdisk) and map it over this location?

    Read the article
