Search Results

Search found 3168 results on 127 pages for 'directories'.

  • Create 8.3 name for an existing directory

    - by Chris Karcher
    I have a machine that initially had 8.3 filename creation disabled. However, this was causing issues with some legacy software, so it was re-enabled. I'm wondering if it's possible to go back and "add" 8.3 filenames to certain existing directories. For example, say I have a directory named "C:\name with spaces" and I get the following output when I run "dir /x":

        C:\>dir /x
         Volume in drive C has no label.
         Volume Serial Number is 6873-65B8

         Directory of C:\

        04/09/2010  01:57 PM    <DIR>          name with spaces
        ...

    I'd like to somehow add an 8.3 name for the directory without recreating it, and then get the following:

        C:\>dir /x
         Volume in drive C has no label.
         Volume Serial Number is 6873-65B8

         Directory of C:\

        04/09/2010  01:57 PM    <DIR>          NAMEWI~1     name with spaces
        ...

    I tried the 'rename' command but it didn't do the trick.
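
    A hedged suggestion, not from the original question: on Windows Vista and later, fsutil can assign a short name to an existing directory in place. It must be run from an elevated prompt, and 8.3 name creation must be enabled on the volume (check with "fsutil behavior query disable8dot3"):

        C:\>fsutil file setshortname "C:\name with spaces" NAMEWI~1

    A subsequent "dir /x" should then show the short name alongside the long one.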

  • Windows Explorer Jumplist delay

    - by Jeremy B.
    I've begun to have an issue with the Jump List for Windows Explorer (Windows 7). The first time I gesture-click or right-click the Explorer icon after rebooting, the initial menu (Windows Explorer, Unpin) and the Jump List with my pinned and frequent folders work fine. On any subsequent attempt the initial menu opens, but there is a long delay (10-20 seconds) before the pinned and frequent folders pop up. Initially my frequent folders included a networked drive, which I thought might be the cause, but I have removed it from the list and there is still no change. I really enjoy being able to quickly hit directories I use often, and this is quite frustrating. Any help would be extremely appreciated.

  • Ubuntu, control the init startup

    - by Xolve
    Ubuntu uses Upstart instead of sysvinit, but there are still runlevels and the links in them. I have installed tor and it has added itself to the startup of the OS. Now I want to remove it, and the popular options are either to remove the service's start/stop links from the runlevel directories or to make the /etc/init.d/ script non-executable. This is fine, but it becomes a problem if I later want to put tor back on the startup list: how would I know which sequence numbers to put in which runlevel directories? Is there a complete guide for this? What are the rules? Are there any tools to manage init?
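
    A hedged sketch of the usual tooling: on Debian/Ubuntu, update-rc.d manages the runlevel links, so a service can be removed from startup and restored later without hand-picking sequence numbers:

        # remove tor from startup but keep /etc/init.d/tor itself
        sudo update-rc.d -f tor remove

        # later: recreate the links with the default sequence (or, on
        # releases that use insserv, from the script's LSB header)
        sudo update-rc.d tor defaults

    On newer releases "update-rc.d tor disable" / "enable" toggles the service without deleting the links, and the sysv-rc-conf package offers a curses front end for the same job.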

  • Why are my files in /var/lock and where did they just go?!

    - by Nicky Hajal
    I am hosting a website on Debian 5.0 & Apache2. Today one of my websites was down; Apache said it couldn't find the directory. I located the files: the whole site, previously at /var/www/site, was now at /var/lock/site, with all the files present. I was confused, but figured I'd just move it back:

        mv /var/lock/site /var/www

    All looked fine... except that only the directories moved and the files appear to be lost! I am working on restoring from backups, but I would really love to know what happened and where my files went (the backups are a few days old). Thanks for your help!

  • Setting up NFS server on Gentoo

    - by StackedCrooked
    I'm trying to set up an NFS server on a Gentoo VM. I've installed nfs-utils-1.2.2 and added the following line to the /etc/exports file:

        /root/svn 10.0.0.0/255.0.0.0(rw,sync,no_subtree_check)

    However, when I try to start the nfs service I get the following errors:

        gentoo-amd64-francis orig # /etc/init.d/nfs start
        FATAL: Could not load /lib/modules/2.6.24-9-pve/modules.dep: No such file or directory
         * Exporting NFS directories ...    [ ok ]
         * Starting NFS mountd ...          [ !! ]
         * Starting NFS daemon ...          [ !! ]
         * Starting NFS smnotify ...        [ ok ]

    It complains about not finding the /lib/modules/2.6.24-9-pve/modules.dep file, but the /lib/modules directory doesn't even exist on this machine. Can anyone help me get it to work?
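
    A hedged diagnostic, not from the thread: the -pve kernel name suggests an OpenVZ/Proxmox container, where the kernel comes from the host and /lib/modules is empty, so the init script's attempt to load the nfsd module fails. Whether the host kernel already provides the NFS server can be checked from inside the guest:

        # is nfsd built in or already loaded?
        grep nfsd /proc/filesystems

    If nfsd is absent, an in-kernel NFS server cannot run inside the container without help from the host's configuration; a user-space server such as unfs3 is a common workaround in that situation.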

  • Linking to Lua libraries w/ Code::Blocks on Linux

    - by person
    After I downloaded the source for Lua, I followed the install instructions, doing:

        make linux install
        make generic install

    I've also run make test and it passes, printing out Hello World, from Lua 5.1. However, I can't link to the Lua libraries in Code::Blocks. I know where lualib.a is (/usr/local/lib), which I set in my Search Directories for the linker. I still get error messages like:

        undefined reference to `lua_isstring'

    Am I missing something critical here? P.S. I had this running on Windows via Visual Studio.
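
    A hedged guess at the usual cause: the library name and link order matter more than the search path. Lua 5.1's "make install" puts the static library in /usr/local/lib as liblua.a (not lualib.a), and since Code::Blocks projects migrated from Visual Studio are typically C++, the C headers also need extern "C" linkage; Lua 5.1 ships lua.hpp, which wraps them for you. The equivalent command line would be something like:

        # liblua.a must come after the objects that use it;
        # -lm and -ldl are needed because liblua.a depends on them
        g++ main.cpp -I/usr/local/include -L/usr/local/lib -llua -lm -ldl -o app

    In Code::Blocks that translates to adding lua, m, and dl (in that order) under "Link libraries", and using #include <lua.hpp> in C++ sources.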

  • Physically moving a hard drive from older iMac (c2d) to new iMac (i7) ?

    - by Inshim
    Instead of my usual habit of using SuperDuper to mirror my drive to a new computer, I just physically moved the hard drive from an older iMac to a new one. But... it now doesn't boot, getting stuck at the Apple logo screen. Since the hard drive that came with the new iMac works well, and my old drive works well when I return it to the older iMac, I conclude that there is some problem at the system/kernel level due to the different hardware. In the past I have done similar things (e.g. starting a C2D machine from a Core Duo in target disk mode), so perhaps the change in architecture to the i5/i7 is too problematic? The main point: do you know of any way to get the system to rebuild the proper versions of the system components for itself when booting? Are there certain directories that I can safely delete to make that happen? Thanks
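
    A hedged suggestion for 10.6 (back up first): a hang at the Apple logo after a hardware swap is often a stale kernel extension cache built for the old machine. The caches can be deleted and forced to rebuild on the next boot without reinstalling:

        # run from Terminal, e.g. with the old drive mounted in target disk mode
        sudo rm -rf /System/Library/Caches/com.apple.kext.caches
        sudo touch /System/Library/Extensions

    Booting once in Safe Mode (hold Shift) also rebuilds the caches. One caveat: a new i7 iMac may require a newer 10.6.x build than the old drive carries, and no cache rebuild will get around that.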

  • Mac OS X 10.6 Portable Home Directory sync fails due to FileSync agent crashing

    - by tegbains
    On one of our cleanly installed Mac Pro machines running Mac OS X 10.6.6, connected to our Mac OS X 10.6.6 Server, syncing data using Portable Home Directories fails. It seems to be due to the FileSync agent crashing during the home sync. We get -41 and -8062 errors, which we suspect indicate that there is too much data or that the FileSync agent can't read the files. The user is the owner of the files and can read/write all of the files.

        < Logout 0:: [11/02/04 13:10:42.751] Error -41 copying /Volumes/RCAUsers/earlpeng/Library/Mail/Mailboxes/email from old imac./Attachments/12081/2.2. (source = NO)
        < Logout 0:: [11/02/04 13:10:42.758] Error -8062 copying /Volumes/RCAUsers/earlpeng/Library/Mail/Mailboxes/email from old imac./Attachments/12081/2.2/[email protected]. (source = NO)
        < Logout 1:: [11/02/04 13:10:42.758] -[DeepCopyContext deepCopyError:sourceError:sourceRef:]: error = -8062, wasSource = NO: return shouldContinue = NO

  • Mac Leopard Server Apache Permission Denied

    - by dallasclark
    I've set up the web server successfully on Mac OS X Leopard Server and sites work fine within the DocumentRoot directory. I have mounted a volume that restricts access to users within a group, and I would like to point the web server at directories within this volume. Can I add the user the web service runs as to the group that has access to this volume? If so, how do I find out what that user is? I can confirm the web server is pointing to the right directory, as the log files show the full directory path, but when you access the site's URL it shows Access Forbidden.
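
    A hedged sketch of finding and using that account: Apache's effective user and group are set in its configuration (on Mac OS X the default is usually _www), and the running worker processes confirm it. The group name below is a placeholder:

        # what user/group is Apache configured to run as?
        grep -E '^(User|Group)' /etc/apache2/httpd.conf

        # what user are the worker processes actually running as?
        ps aux | grep httpd

        # add that user to the group that owns the volume
        sudo dseditgroup -o edit -a _www -t user volumegroup

    dseditgroup is the native way to edit group membership on Mac OS X; remember the directories also need group execute (search) permission along the whole path.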

  • Linux LVM snapshot commit or revert?

    - by Shewfig
    Hi, I'm about to perform an experimental upgrade on my CentOS 5 server. If the upgrade fails, I want to be able to back out the changes to the filesystem. This scenario seems similar to the example in Section 3.8 of the LVM HOWTO for LVM2 read-write snapshots, but the example is rather lacking in actual how-to.

    1) How would I commit the changes, merging them back into the original partition?
    2) How would I revert the changes, restoring the filesystem back to its original state? Should I assume that I'll need to restart several services, if not outright reboot?
    3) Is it possible to snapshot only certain directories on a partition, or is it a partition-wide operation?

    Thanks...
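
    A hedged sketch of the usual answers, with volume names vg0/root assumed. On question 3: snapshots operate at the block level, so they are always per-volume, never per-directory.

        # before the upgrade: snapshot the root LV
        lvcreate --size 5G --snapshot --name pre_upgrade /dev/vg0/root

        # "commit": keep the changed origin and simply drop the snapshot
        lvremove /dev/vg0/pre_upgrade

        # "revert": merge the snapshot back over the origin
        lvconvert --merge /dev/vg0/pre_upgrade

    The caveat: lvconvert --merge needs LVM2 >= 2.02.58 and a kernel with snapshot-merge support, which stock CentOS 5 predates. There, reverting typically means mounting the snapshot and copying back, or rebuilding the origin from it. Merging an in-use root volume is deferred until the next activation, so plan on a reboot either way.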

  • Executed PHP files are stale until "touched" (symlinked NFS mount as web root)

    - by mmattax
    We have a PHP application on 3 web servers (running Nginx and Apache). Each web server's document root is a symlinked directory that points to an NFS mount. For example: web01 has an NFS mount at /data/webapp, which is symlinked to /home/webapp, and Apache serves content from /home/webapp/www. We also use APC as our PHP opcode cache. When we deploy code, we SCP an archive file to the NFS server and extract it. Since upgrading to RedHat 6, the web servers execute "stale" PHP files after a deploy until touch is run on the PHP files. We thought APC might be causing the problem, but the issue exists even after clearing the opcode cache. Any ideas on how to diagnose why the stale PHP code is being executed?
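
    A hedged line of investigation: with APC ruled out, the usual suspects are the NFS client's attribute cache and PHP's realpath cache, both of which can keep serving old metadata after files change underneath a symlink:

        # what caching options is the mount actually using?
        grep /data/webapp /proc/mounts

        # experiment: shorten the attribute cache to 1 second and retest a deploy
        sudo mount -o remount,actimeo=1 /data/webapp

    If a short actimeo makes the staleness disappear, the attribute cache was the culprit. PHP's realpath cache (realpath_cache_ttl in php.ini) deserves the same test, since it caches resolved symlink targets per process.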

  • How do I force specific permissions for new files/folders on Linux file server?

    - by humble_coder
    I'm having an issue with my install of Ubuntu 9.10 (file server) and its Samba permissions. Logging in and reading work fine. However, directories created by one user restrict access for other users. For instance, if Bob (a Windows user who maps the drive) creates a folder in the directory, Jane (a Mac user who simply smb-mounts it) can read from it, but can't write to it, and vice versa. I then must chmod 777 the directory for everyone to be happy. I've tried editing the "create/directory mask" and "force" options in the smb.conf file, but this doesn't seem to help. I'm about to resort to crontabbing a recursive chmod routine, although I'm sure this isn't the fix. How do I get all new items to always be 777? Does anyone have any suggestions to fix this ever-occurring situation? Best
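
    A hedged sketch of the share-level settings that usually achieve exactly this; the share name and path are examples, and the force options must sit inside the share definition (not [global]) to take effect there:

        [shared]
           path = /srv/share
           read only = no
           create mask = 0777
           directory mask = 0777
           force create mode = 0777
           force directory mode = 0777

    After editing, reload Samba (e.g. sudo /etc/init.d/samba reload) and test with a freshly created file; existing files keep their old modes, and files created over other protocols (AFP, local shell) need matching defaults of their own.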

  • Preparing a new physical system with VMware

    - by Max
    I need to create a new installation of Windows, but at the same time I need this computer, so I decided to create a new physical disk from within VMware, install Windows/drivers/software, and then just replace the HDD in the computer. I bought a new HDD, split it into two partitions, and installed Windows 7 using VMware's ability to use physical disks. I can see the Windows files and directories that have been created on the partition, but when I put the HDD into the host machine it cannot boot from it. Why is that? Is it at all possible to create a bootable physical disk with VMware, or should I create a virtual disk and then use some HDD imaging tool to copy the image to a physical disk? Maybe there's a better way of installing a new system while working on the computer at the same time?
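
    A hedged guess at the failure: Windows Setup inside the VM writes the MBR boot code and BCD for the disk layout and controller as the VM presents them, which may not match the real machine. Repairing the boot environment from Windows 7 install media on the physical machine targets exactly that:

        REM from the install DVD: Repair your computer -> Command Prompt
        bootrec /FixMbr
        bootrec /FixBoot
        bootrec /RebuildBcd

    If the machine instead blue-screens early with STOP 0x0000007B, the problem is the storage driver: the virtual controller VMware emulated differs from the real chipset's AHCI/IDE controller, and Windows has no driver bound for the latter.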

  • Default file type supported by IHS web server

    - by SK
    Hello, We earlier used the IIS web server. To redirect some URLs ending with .asp, we created a directory structure based on the URLs to be redirected, wrote VBScript in .asp files to redirect each page to the desired page, and placed these .asp files in the appropriate directories. Finally we copied this directory structure to the docroot of the IIS web server. For various reasons we had to switch to the IHS web server, and as IHS does not support .asp files, we can't use the same directory structure of .asp files to redirect our URLs. Please let me know the default file type supported by the IHS web server for this purpose (the way .asp is the default on IIS). Thanks in advance! SK
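
    A hedged note: IHS (IBM HTTP Server) is Apache-based, so the idiomatic replacement is not a scripted page per URL at all, but redirect directives in the server configuration, which removes the need for the directory structure entirely (URLs below are examples):

        # in httpd.conf or an included conf file
        Redirect 301 /old/page.asp http://www.example.com/new/page
        RedirectMatch 301 ^/legacy/(.*)\.asp$ http://www.example.com/pages/$1

    Redirect and RedirectMatch come with Apache's standard mod_alias module; mod_rewrite is the heavier alternative when conditions are involved.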

  • Apache security for a multi-user development web server

    - by mrmartinblue
    I've been searching and reading through documents all morning, and I understand that I need to use some combination of chown and probably 'jailing' to securely give programmers access to directories on my CentOS web server. Here's the situation: I have an Apache web server that has any number of virtual sites located in /var/www/site1, /var/www/site2, etc. I have different developers that need full access, both SSH and vsftpd, to only the site they are working on. What is the best way to create and maintain security in this scenario? My thought would be to create a new user for each coder, jail that user to the website directory they are allowed to work in, add their user to a group, and set the webroot's owner to that group. Any thoughts? Good, bad, ugly? Thanks!
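
    A hedged sketch along exactly those lines, with made-up names (alice, site1). The FTP side is easy to jail with vsftpd; true SSH jailing is harder and often approximated with a restricted shell or OpenSSH's SFTP-only ChrootDirectory:

        # group and user, home set to the one site alice may touch
        groupadd site1dev
        useradd -d /var/www/site1 -G site1dev alice
        passwd alice

        # group ownership, with setgid on directories so new files inherit it
        chown -R root:site1dev /var/www/site1
        chmod -R g+rwX /var/www/site1
        find /var/www/site1 -type d -exec chmod g+s {} \;

    And in /etc/vsftpd/vsftpd.conf, chroot each local user into their home directory:

        chroot_local_user=YES
        local_enable=YES
        write_enable=YES

    For SSH, newer OpenSSH supports "Match Group site1dev" with "ChrootDirectory" and "ForceCommand internal-sftp" in sshd_config, with the caveat that the chroot target itself must be root-owned.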

  • Questions about the Linux root file system

    - by smwikipedia
    I read the manual page of the "mount" command, and it reads as below:

        All files accessible in a Unix system are arranged in one big tree, the file hierarchy, rooted at /. These files can be spread out over several devices. The mount command serves to attach the file system found on some device to the big file tree.

    My questions are: Where is this "big tree" located? Suppose I have 2 disks; if I mount them onto some points in the "big tree", does Linux place some "special marks" in the mount points to indicate that these 2 "mount directories" are indeed separate disks?
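
    A hedged explanation plus a way to observe it: the tree exists only in the kernel's in-memory mount table, not on disk. Nothing is written into the directory used as a mount point; its original contents are merely hidden while something is mounted over it. Each file does carry the device it lives on, which is how the boundaries show up:

        # every mount the kernel currently knows about
        cat /proc/mounts

        # the device ID changes when you cross onto another filesystem
        stat -c '%n lives on device %D' / /home

        # GNU du/find can be told to stay on one device
        du -x /

    If /home were a separate disk, the two stat lines would print different device IDs.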

  • Is there a way to determine the original size or file count of a 7-zip archive?

    - by Zac B
    I know that when I compress an archive with the 7za utility, it gives me stats like the number of files processed and the number of bytes processed (the original size of the data). Is it possible, using the command line (on Linux) or some programming language, to determine: the original size of an archive, before it was compressed? the number of files/directories contained within an archive? The answer might be "no, just decompress the whole archive and do the counting/sizing then", but it would be useful to know if there is a faster/less space-greedy way.
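
    A hedged answer sketch: 7-Zip archives keep per-file metadata in a central structure, so listing an archive is cheap and does not decompress the file data:

        # per-file sizes, plus a summary line with the total uncompressed
        # size and the file/folder counts
        7za l archive.7z

        # machine-friendlier output: one "Size = ..." field per entry
        7za l -slt archive.7z

    Summing sizes from a script is then a matter of extracting the Size fields from the -slt output.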

  • Can you link an NTFS junction point to a directory on network-attached storage?

    - by Zachary Burt
    I'm using Windows, and I want to use Dropbox to back up a folder outside my Dropbox directory, so I want to create a junction point from my target directory to my Dropbox folder. According to the Wikipedia article on NTFS junction points, which the Dropbox answer links to: "Junction points can only link to directories on a local volume; junction points to remote shares are unsupported." I am looking to link to a directory on network-attached storage, which I believe would not be a local volume. What should I do?
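
    A hedged suggestion: junctions are indeed local-only, but NTFS symbolic links (available since Vista) may target a UNC path. Whether Dropbox follows such a link is its own question, but creating one is straightforward (paths below are examples):

        REM from an elevated command prompt
        mklink /D "%USERPROFILE%\Dropbox\nas-backup" "\\nas\share\folder"

        REM if the link misbehaves, inspect the symlink evaluation policy
        fsutil behavior query SymlinkEvaluation

    mklink /D and the fsutil SymlinkEvaluation query are both standard on Vista and later.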

  • Windows Despooler Requires Administrator to Print

    - by Software Monkey
    Does anyone know what changes I might need to make to allow restricted users to print using a printer configured for spooling? My Windows XP SP3 system currently requires me to use an Admin account for printing if the printer is configured to spool documents before printing. If the printer is configured for direct printing, it works for all accounts. This used to work, and some months back it just stopped; I can't pin down why. The printer itself is configured to give all users complete authority. My system is locked down so that restricted users have only read authority to the entire file system except their data directories, which is how I have run my systems for years. I assume there may be a directory somewhere that I need to allow users to write to.
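
    A hedged pointer consistent with that hunch: spooled jobs are written into the spool directory, which on XP defaults to %SystemRoot%\System32\spool\PRINTERS, so a lockdown that leaves users read-only there breaks spooled (but not direct) printing. Granting change rights back is one command:

        REM from an admin prompt; /E edits the ACL, C grants Change access
        cacls "%SystemRoot%\System32\spool\PRINTERS" /E /G Users:C

    The directory actually in use is shown under Printers and Faxes -> File -> Server Properties -> Advanced, in case it was relocated.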

  • How to get ~/foo from /home/user1/foo?

    - by Claudius
    The Bash prompt supports the \w escape sequence, documented as:

        \w     the current working directory, with $HOME abbreviated with a
               tilde (uses the value of the PROMPT_DIRTRIM variable)

    Is there any way to get a similar abbreviation for an arbitrary string? That is, is there a general command that does something like the following, provided that HOME=/home/user1?

        /home/user1        →  ~
        /home/user1/a/1    →  ~/a/1
        /home/user2/b/2    →  ~user2/b/2
        /root              →  ~root

    Sure, I could try something ugly with sed, but that is unlikely to give me the result I want in every case. :-) The motivation behind this is that I would like to keep the titles in the tabs of my terminals as short as possible, hence abbreviate working directories where possible.
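
    There is no standard command for this, as far as I know; a minimal bash sketch would check the caller's own $HOME first (so plain ~ wins) and then fall back to scanning passwd entries for other users' homes:

        abbrev() {
            local p=$1 user home
            if [[ $p == "$HOME" || $p == "$HOME"/* ]]; then
                printf '~%s\n' "${p#"$HOME"}"
                return
            fi
            # getent also covers NIS/LDAP accounts, unlike reading /etc/passwd
            while IFS=: read -r user _ _ _ _ home _; do
                if [[ -n $home && $home != / ]] && [[ $p == "$home" || $p == "$home"/* ]]; then
                    printf '~%s%s\n' "$user" "${p#"$home"}"
                    return
                fi
            done < <(getent passwd)
            printf '%s\n' "$p"
        }

        abbrev /home/user1/a/1    # prints ~/a/1 when HOME=/home/user1

    One caveat: the first matching passwd entry wins, so nested home directories can abbreviate to the wrong user.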

  • Opening offline sync files from a .CAB file

    - by Rob
    OK, I have downloaded from Windows Live Spaces (don't know if this is useful, but it might be) a .CAB file containing an Index.XML file and package.cab, package01.cab through package12.cab. The Index.XML simply has the names of all the package.cab files and their offsets. The first package.cab has a single 26MB XML file which appears to be an OfflineSyncFile definition, which I am guessing is the metadata for all the other packageXX.cab files. Now the question I have is how I should go about extracting these things and piecing it all back together again. I have tried WinRAR, which extracts all 800MB for me into unnamed files and randomly named directories. I have also tried the standard extract in Windows Explorer, with much the same results.
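
    A hedged thing to try before any third-party tool: Windows ships expand.exe, which lists and extracts cabinets using the file names recorded inside them (destination directories below are examples):

        REM list what a cabinet actually contains
        expand -D package01.cab

        REM extract everything, preserving the stored names
        expand package01.cab -F:* C:\extract\package01

    If the stored names are still meaningless, the real names probably live only in that 26MB OfflineSyncFile XML, and the extracted pieces would have to be matched back against it by offset.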

  • Trouble migrating Joomla from Linux to Windows server

    - by Matt
    I'm having some trouble migrating a Joomla website from a Linux server to a Windows server. The database came over fine. Besides that, all I've done is download all the files from the current site and change configuration.php so the log and tmp directories point to "./tmp/" and "./logs/". I keep getting an error in the PHP log stating:

        PHP Fatal error: Class 'JTable' not found

    I've downloaded the site multiple times now, and I'm convinced this is a configuration problem and not a missing file problem. I've even tried installing a mod for backup on the Linux box to try to migrate the site, but sadly the mod had problems installing. The new server is running IIS 7.5 on Windows Web Server, with PHP 5.2.14 and MySQL.
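
    A hedged check despite the configuration-problem conviction: JTable is defined in an ordinary library file, so confirming it survived the transfer costs nothing. The path below assumes Joomla 1.5:

        REM run from the site root on the Windows server
        dir libraries\joomla\database\table.php

    If the file is present, the usual Windows-migration culprits are paths whose case only worked on Linux and FTP transfers that silently skipped files; re-transferring the libraries\ tree as a single zip archive sidesteps both.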

  • Printer Brother DCP-110C Linux 64-bit drivers

    - by Ondra Žižka
    Hi, I need a 64-bit Linux driver for the Brother DCP-110C (for Ubuntu 10.04 64-bit). I found only a 32-bit one here: http://welcome.solutions.brother.com/bsc/public_s/id/linux/en/index.html I've tried to follow those instructions. During the installation, I got this:

        ondra@ondra-doma:~/Downloads$ sudo dpkg -i --force-all dcp110clpr-1.0.2-1.i386.deb
        dpkg: warning: overriding problem because --force enabled:
         package architecture (i386) does not match system (amd64)
        (Reading database ... 257283 files and directories currently installed.)
        Preparing to replace dcp110clpr 1.0.2-1 (using dcp110clpr-1.0.2-1.i386.deb) ...
        Unpacking replacement dcp110clpr ...
        Setting up dcp110clpr (1.0.2-1) ...
        ln: creating symbolic link `/usr/lib/libbrcompij2.so.1.0': File exists
        ln: creating symbolic link `/usr/lib/libbrcompij2.so.1': File exists
        ln: creating symbolic link `/usr/lib/libbrcompij2.so': File exists

    After installation, the printer is listed on the CUPS server but does not work (no command has any effect on the printer, which is, of course, on and connected). Has anyone found a working solution? Thanks, Ondra
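
    A hedged suggestion: the driver's CUPS filter binaries are 32-bit, and on a 64-bit host they fail silently unless the 32-bit runtime libraries are present; on Ubuntu 10.04 those were packaged as ia32-libs. The filter path below is a guess at Brother's naming, so locate the real one first:

        # find the filter the PPD actually calls
        grep -ri dcp110 /etc/cups/ppd /usr/lib/cups/filter 2>/dev/null

        # is it a 32-bit executable? (expect "ELF 32-bit")
        file /usr/lib/cups/filter/brlpdwrapperdcp110c

        sudo apt-get install ia32-libs
        sudo /etc/init.d/cups restart

    If the filter runs but nothing prints, /var/log/cups/error_log with "LogLevel debug" set in cupsd.conf usually names the failing piece.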

  • How to configure sudoers with path wildcards?

    - by C. Lee
    I need sudo for a command for any path under a particular area. Example:

        sudo mycommand /opt/apps/myapp/...

    What is the sudoers syntax to allow this command to run on any path that falls under /opt/apps/myapp? This is Solaris 10 sudo.

    Thank you for your reply, but I don't need wildcards for the path to the commands; I need wildcards for the arguments of the commands. For example, we want to do something like:

        sudo mycmd /opt/userarea/area1
        sudo mycmd /opt/userarea/area1/area2
        sudo mycmd /opt/userarea/area1/area2/area3

    So far, using wildcards for the arguments in sudoers looks like this:

        /opt/userarea/*
        /opt/userarea/*/*

    And it seems that if we want to have N levels of directories, then we need N lines in sudoers! Is there a better way to include all N levels in one line in sudoers? Thanks.
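
    A hedged note on why one line may already be enough: in sudoers, command arguments are matched against the whole argument string with fnmatch(3) without FNM_PATHNAME, so unlike in the command path, a '*' in the arguments can also match '/'. Under that behavior (worth verifying on the sudo build in question), a single line covers every depth; the user and command path are examples:

        clee ALL = (root) /usr/local/bin/mycmd /opt/userarea/*

    The flip side is the classic sudo pitfall: that same '*' also matches "/opt/userarea/../../etc/shadow" and even "area1 extra-arg" (arguments are matched as one concatenated string), so a wrapper script that validates its argument, with the wrapper itself as the permitted command, is the safer design.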

  • Redundant/multi-site terminal server

    - by Adam
    Hi We have a Hyper-V cluster running 5 virtual terminal servers using HA. We need to be able to make this system redundant, so that if this site were to fail, our users could log into the backup system at another location and access their data via the terminal servers there. Any ideas? We were thinking of maybe using a NAS which replicates the data to the other location in real time (pass-through disks) and having a similar Hyper-V cluster set up in the backup location. However, we would need to create the users in both locations and create a virtual mirror without the data, i.e. applications, directories, settings, etc. Is this the best way to achieve this? We have read that using Hyper-V pass-through disks causes a big performance degradation.
