Search Results

Search found 17278 results on 692 pages for 'directory conventions'.

Page 507/692 | < Previous Page | 503 504 505 506 507 508 509 510 511 512 513 514  | Next Page >

  • Linux find/search root partition ONLY?

    - by ~sd-imi
    Say I need to do: find / -name somefile.txt, and say the root partition / is mounted on /dev/sda5; however, let's say I also have 250GB partitions (/dev/sda6, /dev/sda7) mounted in /media - AND another location that I cannot currently remember. Say, also, that I know the file I'm looking for is on /dev/sda5. Obviously, the above command will also descend into /media and that other directory, which represent the big partitions, wasting time looking for the file in the wrong place. Is there a way to instruct find (or another command) to search only / on /dev/sda5, and NOT to descend into directories if they are on different partitions? Thanks, Cheers!
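
    An editorial aside, not part of the original question: GNU find has a flag for exactly this. -xdev (synonym: -mount) stops find from descending into directories on other filesystems, limiting the search to the partition that / itself lives on:

        # search only the filesystem containing /; mounts under /media
        # (and anywhere else) are skipped
        find / -xdev -name somefile.txt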

    Read the article

  • How can I migrate local users/groups from old Windows 2000 server to new Windows 2003 server?

    - by dmr83457
    On a Windows 2000 box I have set up local users and one group for the purposes of ftp sites for our clients to transfer files to their own site. We are now moving to a different server running Windows 2003. I would like to be able to transfer the users/group and related folders with permissions to the new server without setting them all back up by hand. I see tools available for migrating users to Active Directory but nothing for local-to-local migration. How should I go about doing this? Is there a capability already built into Windows 2000/2003 for this purpose? Thanks

    Read the article

  • ACL permissions not behaving as expected

    - by Yarin
    I set the following ACL on my web directory:

        setfacl -R -d -m mask:002 /var/www

    and then created a file as root that I expected to be readable by the default (apache) group:

        -rw--w-r--+ 1 root apache 0 Dec 17 22:32 newfile.py

    When I run getfacl on the file, I get:

        # file: newfile.py
        # owner: root
        # group: apache
        user::rw-
        group::rwx    #effective:-w-
        mask::-w-
        other::r--

    I'm not sure how to read this, but all I know is that the webserver is throwing a permissions error because apache can't read the file. Can anyone explain what is going on here?
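
    An editorial aside on the likely cause, not from the original post: the ACL mask caps the effective permissions of the group and named-user entries. The 002 in setfacl -m mask:002 was apparently parsed as the octal permission 2 (write-only) rather than as a umask, so the mask becomes -w- and reduces group::rwx to an effective -w-, which is exactly why apache cannot read the file. A minimal sketch of the fix:

        # widen the mask on the existing file so group permissions take effect
        setfacl -m mask::rwx newfile.py
        # and fix the default mask on the directory tree for future files
        setfacl -R -d -m mask::rwx /var/www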

    Read the article

  • Make emacs aware of files externally moved/renamed on Mac OS X

    - by Gyom
    I've been using Mac OS X for several years, and I realize that I've now gotten used to all applications transparently "following" files as I rename or move them (either via mv on the console or within the Finder's GUI), and emacs is pretty much the only program that does not. This is a shame, because most of my time in front of a screen is actually spent in front of emacs :-) Would anyone have any ideas or pointers about what measures I could take to get that behaviour in emacs? (yes I know this is "impossible" to achieve in general, but when I just rename a simple file, or move it to a directory nearby, it's a shame I have to close/reopen it for emacs to notice. oh and no, I'm not going to use 'dired' as a file manager :-)

    Read the article

  • In Mac OS X Finder's column view, how do you show all columns, up to the list of volumes?

    - by John Douthat
    In OS X's olden times, column view always allowed you to scroll left back to the list of volumes. In recent versions, however, the Finder hides parents and ancestors. For example, when you select a favorite "place" in the sidebar, no ancestors of that folder will be visible without pressing Cmd+Up, but hitting Cmd+Up causes the current directory to lose focus, or disappear entirely, depending on the number of levels. Clicking "Back" sends you back to the folder you were in, but it also re-hides all of its ancestors :( I really wish I could see the entire hierarchy. Is that possible?

    Read the article

  • Optimize Windows file access over network

    - by Djizeus
    At my company I frequently need to access shared files over a Windows network. These files are located on the other side of the planet, so I guess the file share goes through some kind of VPN over the Internet, but I don't control this and it is supposed to be "transparent" to me. However, it is extremely slow: displaying the contents of a directory in the file explorer takes about 10 seconds. Even over the Internet, I did not expect retrieving a list of file names to take that long. Are there any settings to optimize this from my Windows XP workstation, or is it mostly down to the way the network is configured? The only thing I have found so far is to cache all file names, whereas by default only short file names are cached (http://support.microsoft.com/kb/843418).

    Read the article

  • Accessing ActiveX control through web server

    - by user847455
    I have developed an ActiveX control and registered it with a common CLSID. Using that CLSID, the control loads in Internet Explorer (as a web page) via the following OBJECT tag in the .html file:

        <object id="GlobasysActiveX" width="1000" height="480" runat="server"
                classid="CLSID:E86A9038-368D-4e8f-B389-FDEF38935B2F"></object>

    I want to serve this page through a web server. I placed the page in a virtual directory, and accessing it as localhost\my.html works. But when I access the page from another computer on the LAN, the ActiveX control does not load, because it is not installed on that machine. How can I embed the control so that it gets downloaded from my computer to the LAN computer through the web server? Thanks in advance
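
    One standard mechanism for this (an editorial aside, not from the original post): Internet Explorer can download and install an ActiveX control automatically when the OBJECT tag carries a codebase attribute pointing at a (signed) CAB package of the control hosted alongside the page. The CAB file name, URL, and version below are placeholders:

        <object id="GlobasysActiveX" width="1000" height="480"
                classid="CLSID:E86A9038-368D-4e8f-B389-FDEF38935B2F"
                codebase="http://yourserver/GlobasysActiveX.cab#Version=1,0,0,0">
        </object>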

    Read the article

  • Need help automating a task in Linux

    - by Niphoet
    I'm still kind of new to Linux, but here's what I'm trying to do. I need to copy all subdirectories and files from one directory to another every 5 minutes or so, with the old data automatically being overwritten by the new data. I'd also like this to run at startup. Is there any way this can be done? If so, what program would I need to schedule the automation, and what is the command line I would need (cp ???). Thanks in advance!
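
    A sketch of one way to do this with cron (an editorial aside; the source and destination paths are placeholders). The */5 field runs the job every five minutes, the special @reboot schedule covers the run-at-startup requirement, and rsync -a --delete overwrites old data and also removes files that have disappeared from the source, which plain cp would not:

        # edit the user's crontab with: crontab -e
        */5 * * * * rsync -a --delete /path/to/source/ /path/to/dest/
        @reboot     rsync -a --delete /path/to/source/ /path/to/dest/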

    Read the article

  • Linux, GNU GCC, ld, version scripts and the ELF binary format -- How does it work? [closed]

    - by themoondothshine
    I'm trying to learn more about library versioning in Linux and how to put it all to work. Here's the context: I have two versions of a dynamic library which expose the same set of interfaces, say libsome1.so and libsome2.so. An application is linked against libsome1.so. This application uses libdl.so to dynamically load another module, say libmagic.so. Now libmagic.so is linked against libsome2.so. Obviously, without using linker scripts to hide symbols in libmagic.so, at run-time all calls to interfaces in libsome2.so are resolved to libsome1.so. This can be confirmed by checking the value returned by libVersion() against the value of the macro LIB_VERSION. So I try next to compile and link libmagic.so with a linker script which hides all symbols except 3 which are defined in libmagic.so and are exported by it. This works... Or at least the libVersion() and LIB_VERSION values match (and it reports version 2, not 1). However, when some data structures are serialized to disk, I noticed some corruption. In the application's directory, if I delete libsome1.so and create a soft link in its place pointing to libsome2.so, everything works as expected and the corruption does not happen. I can't help but think that this may be caused by some conflict in the run-time linker's resolution of symbols. I've tried many things, like trying to link libsome2.so so that all symbols are aliased to symbol@@VER_2 (which I am still confused about, because the command nm -CD libsome2.so still lists symbols as symbol and not symbol@@VER_2), but nothing seems to work. What am I doing wrong?
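
    A minimal sketch of the version-script approach the question describes (an editorial aside; the three exported symbol names are placeholders, and whether -Bsymbolic cures the particular corruption seen here is untested):

        # create the version script; everything not listed is hidden
        cat > libmagic.map <<'EOF'
        {
          global:
            magic_open; magic_process; magic_close;
          local:
            *;
        };
        EOF
        # link libmagic.so with the script; -Bsymbolic binds the library's
        # internal references to its own symbols at link time, so they are
        # not re-resolved (possibly to libsome1.so) by the run-time linker
        gcc -shared -o libmagic.so magic.o -lsome2 \
            -Wl,--version-script=libmagic.map -Wl,-Bsymbolic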

    Read the article

  • Access denied to EFS encrypted files after PC joins domain

    - by mjmarsh
    I'm experiencing strange behavior with Windows Encrypted File System:

        1. I have a machine that is in workgroup mode (not joined to a domain).
        2. I encrypt an entire directory structure on the machine (basically a folder and subfolders with data files for my application).
        3. My application writes and reads files from the encrypted file hierarchy as a local Windows user (let's call the account 'SecureUser'). This works fine.
        4. I then join the PC to a domain (let's call it 'TEST').
        5. Afterwards, processes running as the local 'SecureUser' account can't read the files it wrote originally when it was off the domain. (What is also strange is that the files are now listed as "read only" and I cannot unset this flag via Windows Explorer or the command line, even though it looks like it succeeds.)
        6. I then 'un-join' the PC from the domain and everything works again.

    Is there something about changing domain membership on a PC that changes the behavior of EFS so that previously encrypted files cannot be read, even by the originating user? Thanks in advance

    Read the article

  • Restrict Apache to only allow access using SSL for some directories

    - by DrStalker
    I have an Apache 2.2 server with an SSL certificate hosting several services that should only be accessed using SSL. i.e.: https://myserver.com/topsecret/ should be allowed, while http://myserver.com/topsecret/ should be either denied or, ideally, redirected to https. http://myserver.com/public should not have this restriction, and should work using either http or https. The decision to allow/deny http is made at the top-level directory, and affects all content underneath it. Is there a directive that can be placed in the Apache config to restrict access in this manner?
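
    One common configuration for this (an editorial sketch; the DocumentRoot layout and hostname are assumptions): mod_ssl's SSLRequireSSL directive rejects plain-HTTP requests for a directory, and mod_rewrite in the port-80 virtual host turns the rejection into a redirect instead:

        <Directory "/var/www/topsecret">
            SSLRequireSSL
        </Directory>

        # in the port-80 virtual host: redirect /topsecret to https
        RewriteEngine On
        RewriteRule ^/topsecret(.*)$ https://myserver.com/topsecret$1 [R=301,L]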

    Read the article

  • Copy files off FreeBSD

    - by Josh
    I have a FreeBSD machine that I have to copy everything off the drive. The filesystem is UFS and not readable by any other operating system. (great...) I have a USB flash drive (FAT32) that I need to copy everything to from the SATA drive in the BSD machine. I looked up cp commands and got it to partially work, but it seems to copy to the wrong directory. I cannot find out the "name" of the USB drive, or whether cp can even copy to it.
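
    A sketch of the usual procedure on FreeBSD (an editorial aside; the device name is a guess, and the kernel prints the real one when the drive is plugged in):

        dmesg | tail                        # look for the new da(4) device, e.g. da0
        mount -t msdosfs /dev/da0s1 /mnt    # FAT32 is "msdosfs" on FreeBSD
        cp -Rpv /path/to/files /mnt/        # copy, preserving what FAT32 allows
        umount /mnt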

    Read the article

  • All items in my pen drive had been renamed automatically and cannot open it

    - by pabz
    All the items (both files and folders) inside my pen drive have been renamed to characters like :]h.¡?.A++ and when I try to open any folder, Windows gives this message: "The filename, directory name, or volume label syntax is incorrect." I was told that it is a problem with the pen drive, not a virus. They say if I format the pen drive then I will be able to use it again normally. But I'm not sure, and I need those files. Does anybody know a solution?
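
    An editorial aside: garbled names plus that exact error message usually indicate FAT filesystem corruption, and Windows' own repair tool is a reasonable first step before formatting (the drive letter is a placeholder; copy off whatever is still readable first):

        :: from a Windows command prompt
        chkdsk E: /f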

    Read the article

  • Mounting Solaris UFS partition on Debian(with FreeBSD kernel)

    - by hayalci
    I have some disks that were being used on a Solaris system. The disks are formatted as UFS. I attached them to a Debian system (with a FreeBSD kernel, i.e. Debian/kFreeBSD), but I cannot mount them:

        $ mount -t ufs /dev/da2s1 /mnt/diska
        mount: /dev/da2s1 : Invalid argument

    Also, tunefs.ufs does not work:

        $ tunefs.ufs -p /dev/da2s1
        tunefs.ufs: /dev/da2s1: could not read superblock to fill out disk

    Is there an incompatibility between FreeBSD UFS and Solaris UFS? Is it possible to mount one under the other OS? Note: tunefs.ufs works on the root partition:

        $ tunefs.ufs -p /dev/da7s2
        tunefs.ufs: ACLs: (-a) disabled
        tunefs.ufs: MAC multilabel: (-l) disabled
        tunefs.ufs: soft updates: (-n) disabled
        tunefs.ufs: gjournal: (-J) disabled
        tunefs.ufs: maximum blocks per file in a cylinder group: (-e) 2048
        tunefs.ufs: average file size: (-f) 16384
        tunefs.ufs: average number of files in a directory: (-s) 64
        tunefs.ufs: minimum percentage of free space: (-m) 8%
        tunefs.ufs: optimization preference: (-o) time
        tunefs.ufs: volume label: (-L)
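
    An editorial aside: the on-disk layouts of the two UFS variants do differ, and FreeBSD's UFS code does not read the Solaris flavor. A stock Linux kernel (as opposed to kFreeBSD, so this may mean attaching the disks to a different box) can mount Solaris UFS if told which variant it is; read-only is the safe mode, and the device name below is a placeholder:

        # ufstype=sun for Solaris/SPARC disks, ufstype=sunx86 for Solaris/x86
        mount -t ufs -o ro,ufstype=sun /dev/sdb1 /mnt/diska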

    Read the article

  • Invalid user names when creating a LDAP account

    - by h1d
    I'm trying to set up a system where a visitor can enter any user name in a form to create a new user, which in the end gets created in an LDAP directory. I'm planning to have it mapped to a UNIX account as well (on Ubuntu Linux) by making the system look up system accounts in LDAP. Doing so is fine, but I feel that many user names should be avoided, one of the obvious being 'root', along with all the other user names taken for daemons etc. How do you tackle this problem? Do you make up a list of disallowed user names by checking /etc/passwd? I was thinking that if, internally, the user names could be prefixed with 'ldap_' or something, it would avoid any naming conflicts, but that seems hard when the LDAP entry name is 'joe' but the system account looks like 'ldap_joe'. Not even sure how that can be achieved.
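
    A sketch of one conflict check (an editorial aside): once NSS is configured to consult LDAP, getent sees every name that is already taken, whether it is local (root, daemons, etc.) or directory-based, so it can gate account creation:

        #!/bin/sh
        # reject a requested name if any account (local or LDAP) already has it
        if getent passwd "$username" >/dev/null 2>&1; then
            echo "user name '$username' is already taken" >&2
            exit 1
        fi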

    Read the article

  • Redirecting HTTP traffic from a local server on the web

    - by MrJackV
    Here is the situation: I have a webserver (let's call it C1) that is running an apache/php server and is port-forwarded so that I can access it from anywhere. However, there is another computer within the webserver's LAN that runs an apache server too (let's call it C2). I cannot change the port forwarding, nor can I change the apache server (i.e. install custom modules). My question is: is there a way to access C2 within a directory of C1? (e.g. going to www.website.org/random_dir would let me browse the root of C2's apache server.) I am trying to change as little as possible of the configuration (e.g. activating modules etc.). Is there a possible solution? Thanks in advance.
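
    One way this is commonly done (an editorial sketch; C2's LAN address is a placeholder). It assumes enabling Apache's bundled mod_proxy and mod_proxy_http on C1 is acceptable - they ship with stock Apache, so nothing new is installed, though the poster's "no module changes" constraint may still rule this out:

        ProxyPass        /random_dir/ http://192.168.1.20/
        ProxyPassReverse /random_dir/ http://192.168.1.20/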

    Read the article

  • Use WinWget to keep a web site alive on Windows Server 2003

    - by Menelaos Vergis
    I have a site that must stay alive due to a service that runs and checks a directory for changes. The site runs in IIS on a Windows Server 2003 machine, and the solution I came up with is to schedule a task that requests the home page every 5 minutes. I am sure that this way the site will stay alive almost all the time. I have downloaded Wget from Wget for Windows and installed it on my Windows Server 2003 machine, but I don't know how to use it to ping the server without downloading anything. Since I want to run this forever, I don't want to save anything to disk: can you provide me with a command that requests a web page but doesn't save anything to disk?
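
    A sketch of a suitable command (an editorial aside): wget's --spider mode requests the page and discards the response, so nothing is ever written to disk, and -q suppresses console output. Point the Scheduled Task at this line every 5 minutes:

        wget -q --spider http://localhost/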

    Read the article

  • Ways to deduplicate files

    - by User1
    I want to simply back up and archive the files on several machines. Unfortunately, the files include some large files that are the same file but stored differently on different machines. For instance, there may be a few hundred photos that were copied from one computer to the other as an ad-hoc backup. Now that I want to make a common repository of files, I don't want several copies of the same photo. If I copy all of these files to a single directory, is there a tool that can go through and recognize duplicate files and give me a list, or even delete one of the duplicates?
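
    One tool that does exactly this (an editorial aside; the path is a placeholder): fdupes compares file sizes, then hashes and byte-compares the candidates, so it finds true duplicates regardless of file name:

        fdupes -r /path/to/repository     # recursively list sets of duplicate files
        fdupes -rd /path/to/repository    # same scan, then prompt which copy to keep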

    Read the article

  • Setting up linux server with multiple access rights

    - by Mark
    I am a graduate student and want to set up a Linux server (preferably Ubuntu) in my office. I also want to give my friends SSH access to that box. My question is: can I set up my server such that I can give one of my friends the right to install software on my machine, while he cannot browse around outside the directory he is allowed into? Can I set up multiple apache instances (on different ports) for different people, so each has access to their own apache instance?

    Read the article

  • Crontab stopped unexpectedly

    - by naka
    I have the following entries in the crontab:

        0 0 * * * /mnt/voylla-production/releases/20131111011431/script/rubber cron --task util:rotate_logs --directory=/mnt/voylla-production/releases/20131111011431/log
        0 4 * * * /mnt/voylla-production/releases/20131111011431/voylla_scripts/cj_daily.sh
        0 2 * * 6 /mnt/voylla-production/releases/20131111011431/voylla_scripts/cj_saturday.sh

    It worked fine until today. It didn't run as scheduled after a capistrano deploy, and I didn't get a mail either. It worked fine earlier, and I am unable to understand what went wrong. The only change that was made was the deploy, but I think it should not affect the cron. I tried using pgrep cron to see if cron is running; it gives 904 as output. Could someone please help. Thanks
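
    A few standard checks for this situation (an editorial aside; Ubuntu-style log paths assumed). Note also that the crontab hardcodes one release directory (.../releases/20131111011431/...); a Capistrano deploy creates a new release directory and may clean up old ones, so those paths may simply no longer exist after the deploy:

        grep CRON /var/log/syslog | tail   # did cron attempt the jobs, and with what result?
        crontab -l                         # is the crontab still installed for this user?
        service cron status                # is the daemon actually running?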

    Read the article

  • Additional Hard Drives for Servers

    - by Abs
    Hello all, I am developing a web app where I will have to save lots of files, and I am just trying to work out the directory structure and where things should be saved to. I have had a look at the dedicated server I want to buy, and for storage it shows this: 2x 1TB SATA in RAID1. The space is enough, but I am guessing this will not be on one hard drive? Will I have to save files on one hard drive and, when that fills up, use the other? For the Fedora distro - what is the path for the second drive? Is there a primary drive where I will be able to set up my webroot? I am sorry, this is all new to me. It would be great to get links and advice on how things actually work when it comes to additional hard drives etc. Thanks all
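
    An editorial note on the layout: with 2x 1TB in RAID1 the second disk is a mirror of the first, so the operating system sees a single 1TB block device rather than two drives to fill in turn; there is no separate "second drive" path, and the webroot lives on that one filesystem as usual. A quick way to confirm this on a Fedora box:

        cat /proc/mdstat   # software-RAID status, if Linux md is in use
        df -h              # mounted filesystems and their free space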

    Read the article

  • Windows hiding other user's files?

    - by JoshJordan
    I had a hard drive whose Windows installation (running Vista) became corrupt. I bought a new hard drive, installed Windows 7, and hooked up the old drive using an external enclosure. The Users folder on the old drive shows the users that existed on the machine, but it doesn't show any of their contents. I assume this is due to not having the permissions I need. I have "taken control" of the folders I'm interested in, but this didn't prompt me for the original owner's password as I expected, and I still can't see the file contents. I would guess that this is a fairly common issue, but I'm not sure what to Google here. How can I get access to the files in that drive's Users directory?
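
    A sketch of the usual fix (an editorial aside; E: is a placeholder for the external drive's letter, and the prompt must be run as administrator). Taking ownership recursively and then granting yourself full control makes the old profile readable; no password from the old installation is needed, because NTFS permissions, unlike EFS encryption, can simply be overridden by an administrator:

        takeown /f "E:\Users\OldUser" /r /d y
        icacls "E:\Users\OldUser" /grant %USERNAME%:F /t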

    Read the article

  • Ultimate way to use Picasa in a home network

    - by luisfarzati
    I've been trying a lot of approaches but still haven't found an effective solution. I have gigs of photos on a network drive (an Iomega Home Media Network Drive, plugged into my wifi router). I'd like to do 2 things:

        1. Do a Picasa import of all the photos on the drive, making Picasa organize all the files physically into a year/month folder structure. Ideally, the import target directory should be the same network drive; otherwise I would have to move all the imported files from my local computer back to the drive myself.
        2. Share the Picasa database over the network by uploading it to the network drive, and have me and other members of the family point our Picasa installations at the network database, see the photos, and make changes (tag faces, create logical albums, etc.) to it.

    Is there ANY possibility to accomplish this? Or should I be looking for another photo management app? In that case, do you know of one? Thank you!

    Read the article

  • User and Key Press Issues with Putty

    - by DizzyDoo
    Ubuntu Server newbie here; I've got some annoying issues with remotely accessing my box with Putty. When I create a user and then log in as that user, the terminal always starts with just '#' and not 'user@hostname:~#', which isn't useful when I want to see which directory I've changed to, like I normally can. Also, when logged in as a user, I can't press the cursor keys to move the caret (the blinking thing) around, or press up to see previously executed commands. Instead it gives me this representation of the key pressed: ^[[D ^[[A ^[[B ^[[C. Pressing Delete, too, gives me ^[[3~. This is all strange to me, because when logged in as root it all works fine. I'm hoping this is just something I've accidentally changed in Putty, or that I added the user wrongly, or perhaps just got caps lock on. Thanks.
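
    An editorial aside on the likely cause: the raw ^[[A-style escapes and the minimal prompt are characteristic of /bin/sh (dash on Ubuntu), which has no readline line editing; creating users with the low-level useradd leaves the login shell at /bin/sh unless told otherwise, while root's shell is bash. A sketch of the fix (user names are placeholders):

        chsh -s /bin/bash someuser          # switch an existing user's login shell to bash
        adduser --shell /bin/bash newuser   # or create users with bash in the first place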

    Read the article

  • rsync invocation to replace symlinks pointing to source?

    - by bdbaddog
    Currently I'm moving a big filesystem to a new server, as the original fileserver is no longer able to handle the filesystem writes. To make this quick, I made symlinks on the target filesystem pointing to the original filesystem. Initially:

        /company/release  (mountpoint of the original filesystem)

    After migration:

        /company/release.old  (points to original filesystem after automount map update)
        /company/release      (points to new fileserver/filesystem after automount map update)

    In /company/release there are symlinks like the following:

        /company/release/product-1.0.tar.gz -> /company/release.old/product-1.0.tar.gz
        /company/release/product-1.0        -> /company/release.old/product-1.0  (this is a tree of files)

    Using symlinks allowed me to move the writes to the new filesystem quickly. Now I'd like to slowly migrate the existing files and directories to the new filesystem. The problem I'm running into is that since the symlinks point back at the original files, rsync doesn't see any difference, and so it doesn't actually copy the file(s) or directory(s) and remove/overwrite the symlinks. Is there a set of rsync flags which will do what I want?

    Read the article
