Search Results

Search found 16797 results on 672 pages for 'directory traversal'.


  • How do I get debuild to put the binary in /usr/bin?

    - by SammySP
    I have recently been trying to package a small Python utility to put on my PPA, and I've almost got it to work, but I'm having problems making the package install the binary (a chmod +x Python script) under /usr/bin. Instead it installs under /. I have this directory structure - http://db.tt/0KhIYQL. My package Makefile is like so:

        TARGET=usr/bin/txtrevise

        make:
                chmod +x $(TARGET)

        install:
                cp -r $(TARGET) $(DESTDIR)

    I've used $(DESTDIR), as I understand it, to place the file under the debian subdir when debuild is run. I have the txtrevise script, my executable, under the usr/bin folder at the root of my package, and the Makefile and usr/bin/txtrevise are both in my tarball, txtrevise_1.1.original.tar.gz. However, when I build this and look inside the Debian package, txtrevise is always at the root of the package instead of under usr/bin, and so gets installed to / instead of /usr/bin. How can I get debuild to put the script in the right place? Any help would be greatly appreciated. I'm stumped.
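
    The usual cause here: the install target copies the file into $(DESTDIR) itself rather than into $(DESTDIR)/usr/bin, so the script lands at the package root. A minimal sketch of a corrected target, assuming the layout described above:

        install:
                mkdir -p $(DESTDIR)/usr/bin
                install -m 0755 usr/bin/txtrevise $(DESTDIR)/usr/bin/

    Using install(1) rather than cp also sets the execute bit at install time, so the chmod in the make target becomes unnecessary.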


  • Move /var directories to /mnt on an EC2 instance

    - by Geoff Lanotte
    I am trying to work out a standard configuration for a set of EC2 instances running Ubuntu 12.04. These servers are going to be primarily web servers for a Ruby on Rails application. When you configure a new large instance, you are given a primary drive of 8 GB plus ephemeral storage of 400 GB mounted at /mnt. It seems logical to me to move some directories that have a potential for growth off to /mnt; I was specifically thinking of /var/www and /var/log. My question is two-fold: Is this a good idea, or are there pitfalls that I cannot see? If it is a good idea, how should I go about configuring it? I do have the ability to configure new instances and take down our old ones. My concern is long term: doing this in such a way that it prevents downtime. I am a developer with some experience in devops, but mounting drives is something I have not faced before, so explicit directions would be greatly appreciated.
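
    One common approach, sketched below, is to move the data onto the ephemeral disk and bind-mount it back at the old paths, so application configs never change. This assumes /mnt is already mounted and the services using the directories are stopped first; note that EC2 ephemeral storage is wiped when the instance stops, so anything that must survive a stop/start belongs on EBS, which matters a lot for logs you may need later.

        sudo mv /var/www /mnt/www && sudo mkdir /var/www
        echo '/mnt/www /var/www none bind 0 0' | sudo tee -a /etc/fstab
        sudo mount /var/www
        # repeat for /var/log, then restart the affected services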


  • ACL permissions not behaving as expected

    - by Yarin
    I set the following ACL on my web directory:

        setfacl -R -d -m mask:002 /var/www

    and then created a file as root that I expected to be readable by the default (apache) group:

        -rw--w-r--+ 1 root apache 0 Dec 17 22:32 newfile.py

    When I run getfacl on the file, I get:

        # file: newfile.py
        # owner: root
        # group: apache
        user::rw-
        group::rwx            #effective:-w-
        mask::-w-
        other::r--

    I'm not sure how to read this, but all I know is that the webserver is throwing a permissions error because apache can't read the file. Can anyone explain what is going on here?
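
    The mask line is the key: a mask of 002 is the permission set -w-, and the mask caps the effective rights of every group entry, which is why group::rwx is reduced to an effective -w- and apache cannot read the file. A sketch of a fix, assuming the goal is for group entries to take full effect:

        # raise the default mask so group permissions on new files are not cut down
        setfacl -R -d -m m::rwx /var/www
        # repair the mask on the existing file as well
        setfacl -m m::rw- /var/www/newfile.py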


  • How to find all photos taken in April - any April?

    - by Mawg
    I have 100+ GB of photos going back 25 years. They are arranged in a directory tree by category, with nested sub-directories. How can I search for all photos taken in a given month, say April, in any of those directories? I don't think a Windows search will work, as that will probably use the file creation date, which could be a month or two later, when I finally move the files from SD card to PC. Perhaps searching the EXIF data? Is there a free program which can do that?
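
    Searching the EXIF data is the right instinct, and exiftool (free, cross-platform) can do it from the command line. EXIF timestamps look like "2010:04:17 14:53:09", so matching ":04" right after the year finds any April; a sketch, assuming DateTimeOriginal is populated and the photo root is C:\Photos (both assumptions):

        exiftool -r -q -if "$DateTimeOriginal =~ /^\d{4}:04/" -p "$Directory/$FileName" C:\Photos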


  • Mounting a Solaris UFS partition on Debian (with a FreeBSD kernel)

    - by hayalci
    I have some disks that were being used on a Solaris system. The disks are formatted as UFS. I attached them to a Debian system (with a FreeBSD kernel: Debian/kFreeBSD), but I cannot mount them:

        $ mount -t ufs /dev/da2s1 /mnt/diska
        mount: /dev/da2s1 : Invalid argument

    tunefs.ufs does not work either:

        $ tunefs.ufs -p /dev/da2s1
        tunefs.ufs: /dev/da2s1: could not read superblock to fill out disk

    Is there an incompatibility between FreeBSD UFS and Solaris UFS? Is it possible to mount one under the other OS? Note: tunefs.ufs works on the root partition:

        $ tunefs.ufs -p /dev/da7s2
        tunefs.ufs: ACLs: (-a) disabled
        tunefs.ufs: MAC multilabel: (-l) disabled
        tunefs.ufs: soft updates: (-n) disabled
        tunefs.ufs: gjournal: (-J) disabled
        tunefs.ufs: maximum blocks per file in a cylinder group: (-e) 2048
        tunefs.ufs: average file size: (-f) 16384
        tunefs.ufs: average number of files in a directory: (-s) 64
        tunefs.ufs: minimum percentage of free space: (-m) 8%
        tunefs.ufs: optimization preference: (-o) time
        tunefs.ufs: volume label: (-L)
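
    Solaris UFS and FreeBSD UFS share ancestry but differ in on-disk layout, so FreeBSD's ufs code generally cannot read a Solaris filesystem, which fits the "Invalid argument" error. The Linux ufs driver, by contrast, distinguishes flavours through a ufstype mount option; a sketch, assuming a stock Linux kernel (not kFreeBSD) and a disk from x86 Solaris:

        # read-only is the safe choice; use ufstype=sun for SPARC Solaris disks
        mount -t ufs -o ro,ufstype=sunx86 /dev/sdb1 /mnt/diska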


  • Login-time quota for VPN users

    - by Isaac
    I have configured Routing and Remote Access Service on Windows Server 2003 as the VPN server. VPN users are defined in Active Directory, which is running on this server too. How can I configure the server to give each user a limited download quota (for example 1 GB) and to stop authenticating them once they exceed it? The VPN server should also disconnect users that reach their quota. Update: Apparently a third-party RADIUS server could provide this feature. One solution I have found is TekRADIUS, but it is commercial. FreeRADIUS is an open-source, free RADIUS server, but I am not sure whether it supports this kind of feature.
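
    For the FreeRADIUS route, volume quotas are usually built with the rlm_sqlcounter module summing octets from the accounting table. A rough sketch in FreeRADIUS 2.x configuration syntax, hedged heavily: Max-Monthly-Traffic is a custom attribute that has to be added to the local dictionary, and the query is an assumption to adapt to your SQL schema:

        sqlcounter monthly_traffic {
                counter-name = Monthly-Traffic
                check-name = Max-Monthly-Traffic
                sqlmod-inst = sql
                key = User-Name
                reset = monthly
                query = "SELECT SUM(acctinputoctets + acctoutputoctets) \
                         FROM radacct WHERE username = '%{%k}'"
        }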


  • Invalid user names when creating a LDAP account

    - by h1d
    I'm trying to set up a system where a visitor can enter any user name in a form to create a new user; in the end it gets created in an LDAP directory, and I'm planning for it to be mapped to a UNIX account as well (on Ubuntu Linux) by making the system look up accounts in LDAP. Doing so is fine, but I feel that many user names should be disallowed, 'root' being the obvious one, along with all the user names taken by daemons etc. How do you tackle this problem? Do you make up a list of disallowed user names by checking /etc/passwd? I was thinking that if, internally, the user names were prefixed with 'ldap_' or something, it would avoid any naming conflicts, but that seems awkward when the LDAP entry name is 'joe' but the system account looks like 'ldap_joe'. I'm not even sure how that can be achieved.
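
    One simple guard: reject any requested name that already resolves through NSS, which covers /etc/passwd and existing LDAP users in a single check. A sketch (the regex mirrors Debian's default adduser NAME_REGEX and is an assumption to adjust):

        #!/bin/sh
        name="$1"
        # reject names that don't look like safe Unix user names
        echo "$name" | grep -Eq '^[a-z][-a-z0-9_]*$' || exit 1
        # reject names that already exist locally or in LDAP (via NSS)
        getent passwd "$name" > /dev/null && exit 1
        exit 0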


  • In Mac OS X Finder's column view, how do you show all columns, up to the list of volumes?

    - by John Douthat
    In OS X's olden times, column view always allowed you to scroll left, back to the list of volumes. In recent versions, however, the Finder hides parents and ancestors. For example, when you select a favorite "place" in the sidebar, no ancestors of that folder are visitable without pressing Cmd+Up, and hitting Cmd+Up causes the current directory to lose focus, or disappear entirely, depending on the number of levels. Clicking "Back" sends you back to the folder you were in, but it also re-hides all of its ancestors :( I really wish I could see the entire hierarchy. Is that possible?


  • Access denied to EFS encrypted files after PC joins domain

    - by mjmarsh
    I'm experiencing strange behavior with Windows Encrypted File System: I have a machine that is in workgroup mode (not joined to a domain). I encrypt an entire directory structure on the machine (basically a folder and subfolders with data files for my application). My application writes and reads files from the encrypted file hierarchy as a local Windows user (let's call the account 'SecureUser'). This works fine. I then join the PC to a domain (let's call it 'TEST'). Afterwards, processes running as the local 'SecureUser' account can't read the files it wrote originally when the machine was off the domain. (What is also strange is that the files are now listed as read-only, and I cannot unset this flag via Windows Explorer or the command line, even though the change appears to succeed.) I then un-join the PC from the domain and everything works again. Is there something about changing a PC's domain membership that changes the behavior of EFS, so that previously encrypted files cannot be read, even by the originating user? Thanks in advance
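
    As a diagnostic step, cipher can list which certificates are able to decrypt a given file, which shows whether the key EFS expects for SecureUser still matches after the domain join; a sketch (the path is a placeholder, and the /c switch exists on Vista and later):

        cipher /c "C:\SecureData\somefile.dat"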


  • How to verify if my copy operation is complete in Windows 7?

    - by Tim
    Yesterday I left a job copying a directory to run overnight. This morning, however, I found the computer had restarted because of Windows Update or something. I was wondering if there is some way to check whether the copy completed. One way, I guess, would be to compare the last modified time of the copy with the time the system restarted, but where do I find the time when the system restarted? I would also like to know where to find log files that record this; I know about Event Viewer, but not where to look within it. Other methods are welcome too, and I would also like to hear suggestions for other ways to do the copy that are more robust than simple copy and paste. Thanks and regards!
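
    For the restart time: the System event log records Event Log service stop/start as event IDs 6006/6005, which bracket every shutdown and boot. A sketch querying them, plus robocopy as a restartable alternative to plain copy and paste (paths are placeholders):

        :: last few startup/shutdown events, newest first
        wevtutil qe System /q:"*[System[(EventID=6005 or EventID=6006)]]" /c:10 /rd:true /f:text

        :: robocopy can resume after interruption, and re-running it copies only
        :: what is missing; /E takes subdirectories, /Z enables restartable mode
        robocopy C:\source D:\dest /E /Z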


  • Need help automating a task in Linux

    - by Niphoet
    I'm still kind of new to Linux, but here's what I'm trying to do. I need to copy all subdirectories and files from one directory to another every 5 minutes or so, with the old data automatically being overwritten by the new data. I'd also like this to run at startup. Is there any way this can be done? If so, what program would I need to schedule the automation, and what is the command line I would need (cp ???)? Thanks in advance!
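
    cron covers both the schedule and the at-boot run; a sketch for the user's crontab (edit it with crontab -e; the paths are placeholders):

        # every 5 minutes: copy everything, overwriting the old data
        */5 * * * * cp -a /source/dir/. /dest/dir/
        # run once at startup as well
        @reboot cp -a /source/dir/. /dest/dir/
        # rsync alternative: copies only changes, and --delete mirrors removals
        */5 * * * * rsync -a --delete /source/dir/ /dest/dir/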


  • Problems getting auditd set up on my server

    - by Tola Odejayi
    I'm trying to figure out which processes are deleting files from a specific directory, so I want to set up and run auditd on my system. I've set up the following rule in audit.rules:

        -w S unlink -S truncate -S ftruncate -a exit,always -k cache_deletion -w /home/myfolder/cache

    Then I type this to start the audit daemon:

        auditctl -R /etc/audit/audit.rules -e 1

    But I get this error message:

        Error - nested rule files not supported

    Does anyone know what I am doing wrong here, and how I can resolve this? Also, what do I have to do to get the daemon running at startup?
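
    Two things stand out: the rule mixes the watch (-w) and syscall (-S) forms in one line, and auditctl is being passed -R together with another option, which is a common trigger for that error. A sketch of a cleaner setup (the syscall list and the RHEL-style service command are assumptions to adapt):

        # /etc/audit/audit.rules: record deletions under the cache directory
        -a exit,always -F dir=/home/myfolder/cache -S unlink -S unlinkat -S rename -k cache_deletion

        # load the rules and enable auditing as separate invocations
        auditctl -R /etc/audit/audit.rules
        auditctl -e 1

        # start the daemon at boot, then search for matches later
        chkconfig auditd on
        ausearch -k cache_deletion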


  • Linux, GNU GCC, ld, version scripts and the ELF binary format -- How does it work? [closed]

    - by themoondothshine
    I'm trying to learn more about library versioning in Linux and how to put it all to work. Here's the context: I have two versions of a dynamic library which expose the same set of interfaces, say libsome1.so and libsome2.so. An application is linked against libsome1.so. This application uses libdl.so to dynamically load another module, say libmagic.so. Now libmagic.so is linked against libsome2.so. Obviously, without using linker scripts to hide symbols in libmagic.so, at run-time all calls to interfaces in libsome2.so are resolved to libsome1.so. This can be confirmed by checking the value returned by libVersion() against the value of the macro LIB_VERSION. So I next try to compile and link libmagic.so with a linker script which hides all symbols except three which are defined in libmagic.so and are exported by it. This works... or at least the libVersion() and LIB_VERSION values match (and it reports version 2, not 1). However, when some data structures are serialized to disk, I noticed some corruption. In the application's directory, if I delete libsome1.so and create a soft link in its place pointing to libsome2.so, everything works as expected and the corruption does not happen. I can't help but think that this may be caused by some conflict in the run-time linker's resolution of symbols. I've tried many things, like trying to link libsome2.so so that all symbols are aliased to symbol@@VER_2 (which I am still confused about, because the command nm -CD libsome2.so still lists symbols as symbol and not symbol@@VER_2), but nothing seems to work. What am I doing wrong?
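
    For reference, the version-script mechanism being described looks like this minimal sketch (the names are placeholders):

        # libsome2.map
        VER_2 {
            global:
                libVersion;
            local:
                *;
        };

        # link the library with the script
        gcc -shared -fPIC -o libsome2.so some2.c -Wl,--version-script=libsome2.map

    objdump -T libsome2.so prints the version field of each dynamic symbol explicitly, which is often clearer than nm output when checking whether symbols really carry the VER_2 tag.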


  • Optimize Windows file access over network

    - by Djizeus
    At my company I frequently need to access shared files over a Windows network. These files are located on the other side of the planet, so I guess the file share goes through some kind of VPN over Internet, but I don't control this and it is supposed to be "transparent" for me. However it is extremely slow. Displaying the content of a directory in the file explorer takes about 10s. Even if over the Internet, I did not expect that retrieving a list of file names would be that long. Are there any settings to optimize this from my Windows XP workstation, or is it mostly related to the way the network is configured? The only thing I have found so far is to cache all file names, while by default only short file names are cached (http://support.microsoft.com/kb/843418).


  • Copy files off FreeBSD

    - by Josh
    I have a FreeBSD machine, and I have to copy everything off its drive. The filesystem is UFS and not readable by any other operating system (great...). I have a USB flash drive (FAT32) that I need to copy everything to from the SATA drive in the BSD machine. I looked up cp commands and got it partially working, but it seems to copy to the wrong directory. I cannot find out the "name" of the USB drive, or whether it can even be copied to.
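
    A sketch of the usual procedure on FreeBSD (the device names are assumptions; check what dmesg reports when the stick is plugged in):

        dmesg | tail                           # the stick typically shows up as daN
        camcontrol devlist                     # lists attached SCSI/umass devices
        mkdir -p /mnt/usb
        mount -t msdosfs /dev/da0s1 /mnt/usb   # FAT32 partition on the stick
        cp -Rpv /data/. /mnt/usb/              # copy, preserving what FAT32 can hold

    Note that FAT32 cannot store UFS ownership/permissions and caps single files at 4 GB.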


  • Accessing an ActiveX control through a web server

    - by user847455
    I have developed an ActiveX control and registered it with a common CLSID. Using that CLSID I can access the ActiveX control in Internet Explorer (as a web page), with the following object tag in the .html file:

        <OBJECT id="GlobasysActiveX" width="1000" height="480" runat="server"
                classid="CLSID:E86A9038-368D-4e8f-B389-FDEF38935B2F">

    I want to access this web page through a web server. I have placed the page in the virtual directory, and it works when accessed as localhost\my.html, but when I access it from a LAN computer it does not load the ActiveX control from my computer. How do I embed the ActiveX control, or have it downloaded from my computer to the LAN computer, through the web server? Thanks in advance.
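
    The standard mechanism for this is to package the control as a signed .cab on the server and point the object tag's codebase attribute at it; IE on the LAN machine then downloads and registers the control itself. A sketch (the .cab name, URL and version number are assumptions):

        <object id="GlobasysActiveX" width="1000" height="480"
                classid="CLSID:E86A9038-368D-4e8f-B389-FDEF38935B2F"
                codebase="http://yourserver/GlobasysActiveX.cab#version=1,0,0,1">
        </object>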


  • Windows hiding other user's files?

    - by JoshJordan
    I had a hard drive whose Windows installation (running Vista) became corrupt. I bought a new hard drive, installed Windows 7, and hooked up the old drive using an external enclosure. The Users folder on the old drive shows the users that existed on the machine, but it doesn't show any of their contents. I assume this is due to not having the permissions I need. I have "taken control" of the folders I'm interested in, but this didn't prompt me for the original owner's password as I expected, and I still can't see the file contents. I would guess that this is a fairly common issue, but I'm not sure what to Google here. How can I get access to files in that drive's Users directory?
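
    From an elevated command prompt, taking ownership and granting yourself rights recursively is the usual fix; no password for the old account is needed, because NTFS access is gated by ownership and ACLs, not by credentials (the drive letter and names below are placeholders):

        takeown /F E:\Users\OldUser /R /D Y
        icacls E:\Users\OldUser /grant NewUser:F /T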


  • Find command: exclude files whose path matches a certain pattern

    - by user40570
    I have a find command that looks for files that were modified recently and outputs the date:

        find /path/on/server -mtime -1 -name '*.js' -exec ls -l {} \;

    I would like it to exclude any deeply nested folder that matches a certain pattern; e.g., there are a number of folders that contain a "statistics" directory and ".svn" directories. So I'd like to be able to say: if the file that was modified yesterday is in a folder named statistics, ignore it. Or perhaps not search for files in those folders at all.
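
    find can either filter matches after the fact with ! -path, or prune the unwanted directories so it never descends into them at all; a sketch of both:

        # filter matches out
        find /path/on/server -mtime -1 -name '*.js' \
            ! -path '*/statistics/*' ! -path '*/.svn/*' -exec ls -l {} \;

        # or skip the directories entirely (cheaper on big trees)
        find /path/on/server -type d \( -name statistics -o -name .svn \) -prune \
            -o -mtime -1 -name '*.js' -exec ls -l {} \;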


  • User and Key Press Issues with Putty

    - by DizzyDoo
    Ubuntu Server newbie here, with some annoying issues when remotely accessing my box with PuTTY. When I create a user and then log in as that user, the prompt is always just '#' and not 'user@hostname:~#', which isn't useful when I want to see where I've changed directory to, like I normally can. Also, when logged in as a user, I can't press the cursor keys to move the caret (blinking thing) around, or press up to see previously executed commands. Instead it gives me this representation of the button pressed: ^[[D ^[[A ^[[B ^[[C. Pressing Delete, too, gives me ^[[3~. This is all strange to me, because when logged in as root it all works fine. I'm hoping this is just something I've accidentally changed in PuTTY, or that I added the user wrongly, or perhaps just Caps Lock. Thanks.
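
    Both symptoms, the bare prompt and the ^[[A-style escape codes, match an account whose login shell is plain /bin/sh rather than bash; root works because root's shell is bash. A sketch to check and fix it (the user name is a placeholder):

        getent passwd someuser | cut -d: -f7     # show the login shell
        chsh -s /bin/bash someuser               # or: usermod -s /bin/bash someuser
        # for future accounts, adduser (unlike bare useradd) sets bash by default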


  • Grub cannot boot after resizing a Windows XP (NTFS) partition. What is to be done? [closed]

    - by cipricus
    Possible Duplicate: How to Repair Grub while dual booting (win7 / ubuntu 11.10)

    I had installed Lubuntu on a PC with Windows XP and used dual boot for some time with no problems. Since I had almost abandoned Windows (kept it for printing...), I decided to resize its NTFS partition and add the freed space to my Ubuntu space. I tried that with a GParted stick and a live CD, but it would not work due to an issue with the NTFS partition: GParted flagged the partition with a red exclamation mark. I read that a check disk might solve it, but in the end I used EaseUS in Windows to shrink (resize) the NTFS partition and create a new (ext3) partition from the space left. All seemed OK with that procedure, but resizing the partition and moving the data might have affected GRUB, or whatever the following message means, which I get when trying to start my PC:

        error: file not found
        grub rescue>

    Booting from a live CD I see, besides the shrunken Windows partition and my old Linux one, the newly created partition, containing a directory called lost+found that I cannot open. Can I fix GRUB and recover both my XP and Lubuntu installations?
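
    The lost+found directory is created by mkfs on every new ext filesystem, so that part is normal. The rescue prompt appears because resizing moved the sectors GRUB was installed against; reinstalling GRUB from the live CD usually repairs it. A sketch (the device names are assumptions; check yours with sudo fdisk -l):

        sudo mount /dev/sda5 /mnt                       # the Lubuntu root partition
        sudo grub-install --boot-directory=/mnt/boot /dev/sda
        # after rebooting into Lubuntu, regenerate the menu:
        sudo update-grub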


  • Use WinWget to keep a web site alive on Windows Server 2003

    - by Menelaos Vergis
    I have a site that must stay alive, because a service runs and checks a directory for changes. The site runs in IIS on Windows Server 2003, and the solution I came up with is to schedule a task that requests the home page every 5 minutes; I am sure that this way the site will stay alive almost all the time. I have downloaded Wget (from the Wget for Windows project) and installed it on my Windows Server 2003 box, but I don't know how to use it to ping the server without downloading anything. Since I want to run this forever, I don't want to save anything to disk. Can you provide me with the command that requests a web page but saves nothing to disk?
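
    wget's --spider mode requests a page without writing anything to disk, and a scheduled task covers the every-5-minutes part; a sketch (the URL, path and task name are placeholders):

        wget -q --spider http://localhost/

        :: schedule the request every 5 minutes
        schtasks /create /tn "KeepSiteAlive" /sc minute /mo 5 ^
            /tr "C:\wget\wget.exe -q --spider http://localhost/"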


  • Additional Hard Drives for Servers

    - by Abs
    Hello all, I am developing a web app where I will have to save lots of files, and I am just trying to work out the directory structure and where things should be saved. I have had a look at the dedicated server I want to buy, and for storage it shows this: 2x 1TB SATA in RAID1. The space is enough, but I am guessing this will not be on one hard drive? Will I have to save files on one hard drive and, when that fills up, use the other? For the Fedora distro - what is the path for the second drive? Is there a primary drive where I will be able to set up my webroot? I am sorry, this is all new to me. It would be great to get links and advice on how things actually work when it comes to additional hard drives etc. Thanks all
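
    With RAID1 the two disks are mirror copies of each other, presented to the OS as a single logical device: usable space is 1 TB, there is one filesystem tree, and there is no "second drive" to fill up. Two commands that confirm the layout on Fedora (a sketch):

        cat /proc/mdstat    # software RAID: shows the md device and its two members
        df -h               # shows the mounted filesystem(s) and free space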


  • Ways to deduplicate files

    - by User1
    I want to simply back up and archive the files on several machines. Unfortunately, there are some large files that are the same file but stored differently on different machines. For instance, there may be a few hundred photos that were copied from one computer to the other as an ad-hoc backup. Now that I want to make a common repository of files, I don't want several copies of the same photo. If I copy all of these files to a single directory, is there a tool that can go through and recognize duplicate files and give me a list, or even delete one of the duplicates?
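
    fdupes does exactly this: it matches candidates by size, then MD5 signature, then a byte-for-byte comparison; a sketch (the path is a placeholder):

        fdupes -r /path/to/repository      # list sets of duplicate files
        fdupes -rf /path/to/repository     # same, omitting the first of each set
        fdupes -rdN /path/to/repository    # delete duplicates, keeping the first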


  • Network Performance issue

    - by qubemarker
    We have three Ubuntu 10.04 servers. One server is a storage server and the other two are configured as clients. The storage server has a good amount of capacity, and it is integrated with a Windows Active Directory server for authentication. I am uploading some video files from both clients to the server. When I upload data from any one client alone, I get about 26 MB/s; when I upload from both clients simultaneously, I only get about 8 MB/s from each client. I have gigabit Ethernet cards in all of the servers and an L2 managed gigabit switch for connectivity. I don't know why the data transfer rate decreases so much with simultaneous reads and writes. I have tried all of the TCP stack related settings suggested here. Can anyone assist with getting better read/write performance out of this setup? Any help is appreciated.
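
    A gigabit link tops out around 110 MB/s, far above the 16 MB/s combined total, so the bottleneck may well be server-side disk contention rather than the network; measuring raw TCP throughput separates the two. A sketch with iperf, installed on both ends (the hostname is a placeholder):

        iperf -s                          # on the storage server
        iperf -c storage-server -t 30     # on each client, run simultaneously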


  • Why do some actions not work with Remote Desktop?

    - by Holgerwa
    I usually connect to other PCs in the same building using Remote Desktop, which works great. For some reason, though, some actions cannot be performed through Remote Desktop, for example: installation of certain software; accessing the contents of a DVD inserted in the remote computer's drive; several other tasks that just don't react or start unless you do the same thing without RDP. All these actions work with any other remote access tool, like VNC, TeamViewer, LogMeIn, etc. My question is: what is the difference when I use a computer through RDP instead of directly? Is there a list of prohibited actions available, so that one could know upfront whether something can be done over RDP?
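
    Part of this is by design: an RDP logon is its own session, distinct from the console session that VNC-style tools share, and some installers and device types (optical drives among them) behave differently outside the console. Attaching to the console session sometimes behaves more like a local logon; a sketch (the host name is a placeholder):

        :: /admin (formerly /console) attaches to the console session
        mstsc /v:remotepc /admin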

