Search Results

Search found 27819 results on 1113 pages for 'linux intel'.


  • Tool or script to detect moved or renamed files on Linux prior to a backup

    - by Pharaun
    Basically I am looking for a tool or script that can detect moved or renamed files, so that I can get a list of renamed/moved files and apply the same operations on the other end of the network to conserve bandwidth. Disk storage is cheap, but bandwidth isn't, and the problem is that the files often get reorganized or moved into a better directory structure. When rsync does the backup, it doesn't notice that a file was renamed or moved and re-transmits it over the network, even though the same file already exists on the other end. So I am wondering if there is a script or tool that can record where all the files are and their names, rescan just prior to a backup to detect moved or renamed files, and give me a list that I can re-apply on the other side. The "general" features of the files:
    - Large, unchanging files
    - They can be renamed or moved around
    [Edit:] These are all good answers; what I ended up doing was looking at all of the answers and writing some code to deal with this. What I am thinking about/working on now:
    - Use something like AIDE for the initial scan and keep checksums on the files; since they are supposed to never change, the checksums also help detect corruption.
    - Create an inotify daemon that monitors these files/directories and logs any renames and moves.
    - There are edge cases where inotify might miss a filesystem event, so a final step uses find to look for files with a change time later than the last backup.
    This has several benefits:
    - Checksums from AIDE make it possible to check that some media did not get corrupted
    - inotify keeps resource usage low, with no need to re-scan the filesystem over and over
    - No need to patch rsync; if I have to patch things I can, but I would prefer to avoid it to keep the maintenance burden lower (i.e. no re-patching every time there is an update)
    I've used Unison before and it's really nice, but I could have sworn that Unison keeps copies around on the filesystem and that its "archive" files can grow rather large?
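
    A minimal sketch of the detection idea described above, using content checksums to spot files that moved between two scans (assumes GNU coreutils, unique file contents, and filenames without spaces; all paths are illustrative):

      #!/bin/bash
      # Record checksum + path for every file, then diff two scans:
      # a checksum that reappears under a new path is a moved/renamed file.
      SRC=/data/archive            # tree being backed up (illustrative)
      STATE=/var/backup/state      # where scan results live (illustrative)
      mkdir -p "$STATE"

      # New scan (md5 is fine here since the files never change)
      find "$SRC" -type f -print0 | xargs -0 md5sum | sort > "$STATE/scan.new"

      # Compare with the previous scan, if any, and emit mv commands that can
      # be replayed on the remote copy before rsync runs
      if [ -f "$STATE/scan.old" ]; then
          join "$STATE/scan.old" "$STATE/scan.new" \
              | awk '$2 != $3 { printf "mv \"%s\" \"%s\"\n", $2, $3 }' \
              > "$STATE/moves.sh"
      fi
      mv "$STATE/scan.new" "$STATE/scan.old"

    The generated moves.sh would be applied on the remote side first, so the subsequent rsync only has to verify the renamed files rather than re-transfer them.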

    Read the article

  • Reinitialize GPU on RADEON HD 7970 under linux

    - by user1610662
    I have a RADEON HD 7970 (Sapphire) on Debian Squeeze. I often use it for running GPU code, and sometimes performance drops sharply, as I can see with the "glxgears" tool (I get only 20 FPS in fullscreen). So I would like to be able to reinitialize the GPU without rebooting the system. I know the "clinfo" tool, which displays the features of the graphics card. Is there a tool that allows this reinitialization?
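
    There doesn't seem to be a dedicated tool for this; one hedged sketch of the idea is to unbind and rebind the card's kernel driver through sysfs. The PCI address 0000:01:00.0 and the driver name "radeon" below are assumptions (a Catalyst/fglrx setup uses a different driver), and doing this under a running X session will disturb the display, so treat it as an experiment rather than a fix:

      # Find the card's PCI address first (e.g. 0000:01:00.0)
      lspci -D | grep -i 'vga.*radeon'

      # Unbind and rebind the driver for that device (run as root;
      # driver name and address are assumptions)
      echo 0000:01:00.0 > /sys/bus/pci/drivers/radeon/unbind
      sleep 2
      echo 0000:01:00.0 > /sys/bus/pci/drivers/radeon/bind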

    Read the article

  • HD read error while booting linux

    - by sidharth sharma
    I have been dual booting Windows 7 and Ubuntu on my laptop for the past 3 years and all was working fine until I started getting log entries like:
    ata1.00: status: { DRDY ERR }
    ata1.00: error: { UNC }
    ata1.00: configured for UDMA/133
    sd 0:0:0:0: [sda] Unhandled sense code
    sd 0:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    sd 0:0:0:0: [sda] Sense Key : Medium Error [current] [descriptor]
    I figured it was a hardware problem and ignored it as long as I could, until the HD crashed on me. Then I got a brand new HD and installed Windows and Ubuntu afresh on it, but the problem still persists. Any help?
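
    Those sense errors point at the drive or the link to it rather than at the OS; a quick hedged check of the disk itself, assuming the smartmontools package is installed and the disk is /dev/sda:

      # Overall SMART health and the attributes that matter for media errors
      smartctl -H /dev/sda
      smartctl -a /dev/sda | grep -i -E 'reallocated|pending|uncorrect'

      # Run a short self-test, then read the result a few minutes later
      smartctl -t short /dev/sda
      smartctl -l selftest /dev/sda

    If a brand-new drive shows the same UNC errors, the SATA cable or controller is also worth suspecting.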

    Read the article

  • Inheriting file ownership on linux

    - by John Hunt
    We have an ongoing problem here at work. We have a lot of websites set up on shared hosts; our CMS writes many files to these sites and allows users of the sites to upload files etc. The problem is that when a user uploads a file on the site, the owner of that file becomes the webserver, which prevents us from changing permissions etc. via FTP. There are a few workarounds, but what we really need is a way to set a "sticky owner", if that's possible, on new files and directories created on the server. E.g., rather than PHP writing the file as user apache, it would take on the owner of the parent directory. I'm not sure if this is possible (I've never seen it done). Any ideas? We're obviously not going to get a login for apache to the server, and I doubt we could get into the apache group either. Perhaps we need a way of allowing apache to set at least the group of a file; that way we could set the group to our FTP user in PHP and set 664 and 775 for any files that are written? Cheers, John.
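
    Owner inheritance isn't something stock Linux filesystems offer, but group inheritance is: the setgid bit on a directory makes new files take the directory's group, which covers the "at least the group" idea above. A minimal sketch, assuming the web root is /var/www/site and the FTP account is in a group called sitedev (both names are illustrative):

      # Put the tree in a group that both apache and the FTP user belong to
      chown -R :sitedev /var/www/site

      # setgid on every directory: files/dirs created inside inherit the group
      find /var/www/site -type d -exec chmod g+s {} +

      # Make existing content group-writable so either side can change it
      find /var/www/site -type d -exec chmod 775 {} +
      find /var/www/site -type f -exec chmod 664 {} +

    PHP would still need to create files with a group-writable mode (a umask of 002, or an explicit chmod to 664 after the upload), but the group then stays usable from the FTP account.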

    Read the article

  • Run Jar in Background on Linux

    - by Benny
    I have a jar that runs forever (infinite loop with socket listening thread) and need it to run in the background at all times. An example would be: "java -jar test.jar" How do I do this? Thanks in advance!
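
    A minimal sketch of keeping the jar running detached from the shell, assuming it lives in /opt/app (the path and log/pid file names are illustrative):

      cd /opt/app
      nohup java -jar test.jar > test.log 2>&1 &
      echo $! > test.pid        # remember the PID so it can be stopped later

      # later, to stop it:
      kill "$(cat test.pid)"

    For "at all times" across reboots and crashes, an init script or a process supervisor is the usual next step, but nohup plus backgrounding covers the basic case.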

    Read the article

  • Debugging Connection Issues Between Two Linux Servers

    - by clickfault
    I have two CentOS 5 servers running iptables and apf. I am having issues connecting with ssh from server 1 to server 2. I can connect from server 1 to a third server and from that third server to both 1 and 2. In all cases I am using the IP address and not a host name. I have stopped iptables and apf on all servers and it doesn't seem to change anything. What is the best way to debug this process?
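
    A hedged checklist of the usual debugging steps, assuming server 1 is 192.0.2.1 and server 2 is 192.0.2.2 (illustrative addresses) and sshd listens on port 22:

      # From server 1: is the host and the port reachable at all?
      ping -c 3 192.0.2.2
      nc -zv 192.0.2.2 22

      # Verbose SSH output shows where the handshake stalls
      ssh -vvv user@192.0.2.2

      # On server 2: is sshd listening, and do any firewall rules remain?
      netstat -tlnp | grep :22
      iptables -L -n -v

      # On server 2, watch the traffic arrive while connecting from server 1
      tcpdump -ni eth0 host 192.0.2.1 and port 22

    If the packets never show up in tcpdump, the problem is routing or an upstream filter rather than either server's configuration.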

    Read the article

  • Linux - Multiple service statuses with one command

    - by Jimbo
    I'm trying to retrieve the statuses of multiple services with a single command, using the service command. The service names all start with transmission-daemon, for example. Here is what I'm currently trying (and failing) with.
    Grabbing the list of init scripts with grep and passing it to service:
    service $(ls /etc/init.d | grep "transmission-daemon") status
    Listing all statuses and then grepping for the ones I want:
    service --status-all | grep "transmission-daemon"
    Neither produces output that is much help. How can I effectively achieve what I require with a single command, so that I can then continue piping to awk for further customisation? Desired example output:
    transmission-daemon started
    transmission-daemon2 stopped
    transmission-daemon3 started
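
    One hedged way to get one status line per service is to loop over the matching init scripts rather than relying on --status-all; a sketch (it assumes each init script's status action returns a sensible exit code, which most LSB scripts do):

      # Print "name<TAB>status" for every transmission-daemon* service
      for svc in /etc/init.d/transmission-daemon*; do
          name=$(basename "$svc")
          if service "$name" status >/dev/null 2>&1; then
              state=started
          else
              state=stopped
          fi
          printf '%s\t%s\n' "$name" "$state"
      done

    The output matches the desired format above and can be piped straight into awk.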

    Read the article

  • Routing using Linux with 2 NIC cards

    - by Kevin Parker
    I configured Clear OS to be in gateway mode on a machine with two NIC cards:
    eth0: 192.168.2.0/24, IP 192.168.2.27, connected to a modem and thus has internet connectivity.
    eth1: 192.168.122.0/24, IP 192.168.122.10, connected to the other machines on the LAN through a switch.
    The LAN machines on the 192.168.122.0 network are not getting internet. How can they reach the internet through the Clear OS gateway? I have enabled packet forwarding in Clear OS with "ip_forward=1". What am I missing? Can you please help me with this? Here is the static route I have added on LAN machine 1 (IP address 192.168.122.11):
    ip route add 192.168.2.0/24 via 192.168.122.10 dev eth0
    ip route show
    192.168.2.0/24 via 192.168.122.10 dev eth0
    192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.11
    But the 192.168.2.0/24 network is still not reachable. Where could the problem be?
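
    Packet forwarding alone isn't enough if the modem only knows about the 192.168.2.0/24 network; the gateway also has to NAT the LAN traffic (or the modem needs a route back to 192.168.122.0/24). A minimal sketch, assuming eth0 on the Clear OS box faces the modem:

      # On the gateway (192.168.122.10): masquerade LAN traffic going out eth0
      echo 1 > /proc/sys/net/ipv4/ip_forward
      iptables -t nat -A POSTROUTING -s 192.168.122.0/24 -o eth0 -j MASQUERADE

      # On each LAN machine: use the gateway as the default route,
      # not just for the 192.168.2.0/24 network
      ip route add default via 192.168.122.10 dev eth0

    With only the 192.168.2.0/24 route, the LAN machines can reach the gateway's other subnet but have no path to the rest of the internet.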

    Read the article

  • Killing CLOSE_WAIT sockets without killing parent process on Linux

    - by Alex Neth
    Tomcat is leaving me with CLOSE_WAIT sockets which ultimately saturate the maximum number of connections. I've tried many methods in my client and server code to get rid of these to no avail, including closing connections, calling System.gc(), etc. Now I'm trying to find a way to simply time these out quickly in the OS. I've got conntrack working, but am not sure how to use that to kill these connections. I've also set /proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_timeout_close_wait to 1, which of course is too low but the connections persist. Is there a way to kill these zombie sockets? Running Ubuntu.
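
    CLOSE_WAIT means the application still holds the socket, so the durable fix is in the Tomcat/client code, but the zombies can be cleared from outside. A hedged sketch; tcpkill comes from the dsniff package, ss -K needs a reasonably recent kernel and iproute2, and port 8080 is only an example of where Tomcat might be listening:

      # List the offending sockets first
      ss -tn state close-wait

      # Option 1: forge RSTs for matching connections (dsniff's tcpkill)
      tcpkill -i eth0 port 8080

      # Option 2: ask the kernel to destroy them directly (newer iproute2)
      ss -K -tn state close-wait '( sport = :8080 )'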

    Read the article

  • Resizing an existing linux partition during Ubuntu install

    - by Richard100
    Hello All, I have Fedora Core 10 installed on a PC, occupying the whole disk (no free space). I want to add Ubuntu 10.04 desktop edition. Does Ubuntu 10.04 allow you to resize existing partitions during the installation process in order to free up some space for the Ubuntu installation? Without losing or trashing existing data, obviously. Thanks.

    Read the article

  • Upgrading PHP 5.1 to 5.3 on Linux Server

    - by nicorellius
    I'm trying to find the best way to upgrade from PHP 5.1 to 5.3. The CRM software I am running on this server requires this upgrade, or else I probably wouldn't perform it at all, because it looks like it will be trickier than I hoped. Being still new to the programming world, these routine upgrades are still worrisome to me. I am running Apache 2.2.6 (Fedora), PHP 5.1.6 and MySQL 5.0.27 on this server.
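
    Whatever route the upgrade takes, it helps to snapshot what is there first; a small hedged sketch of the pre-flight checks (package and path names are the usual Fedora ones and may differ on this server):

      # What is installed right now?
      php -v
      httpd -v
      rpm -qa | grep -i '^php' | sort

      # Which PHP modules does the CRM rely on?
      php -m

      # Back up the databases and the web root before touching anything
      mysqldump --all-databases -u root -p > /root/all-dbs-$(date +%F).sql
      tar czf /root/webroot-$(date +%F).tar.gz /var/www/html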

    Read the article

  • Linux - quota per directory?

    - by depesz
    I have the following scenario: a single partition mounted as /, with lots of disk space. There is a range of directories (/pg/tbs1, /pg/tbs2, /pg/tbs3 and so on), and I would like to limit the total size of these directories. One option is to make some big files, mkfs them, mount them over loopback, and then set a quota, but that makes expansion a bit problematic. Is there any other way to make quotas work per directory?
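
    A minimal sketch of the loopback approach mentioned above - a fixed-size filesystem image mounted at the directory, which caps its total size (sizes and paths are illustrative):

      # Create a 10 GB image, put a filesystem on it, mount it at the directory
      dd if=/dev/zero of=/images/tbs1.img bs=1M count=10240
      mkfs.ext3 -F /images/tbs1.img
      mkdir -p /pg/tbs1
      mount -o loop /images/tbs1.img /pg/tbs1

      # Growing it later is the awkward part:
      umount /pg/tbs1
      dd if=/dev/zero bs=1M count=5120 >> /images/tbs1.img   # add 5 GB
      e2fsck -f /images/tbs1.img
      resize2fs /images/tbs1.img
      mount -o loop /images/tbs1.img /pg/tbs1

    Filesystems with per-directory (project) quotas, such as XFS, avoid this resizing dance entirely, at the cost of reformatting the partition.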

    Read the article

  • Copy data from a remote Linux box to my Windows desktop

    - by Sanjay Rao
    I use PuTTY to log in to the remote server, set up the environment and change into a particular directory. Now, from this directory, I need to copy a folder to my desktop, which runs Windows. How can I achieve this? One of my failed attempts, run on the remote server inside the PuTTY session, was:
    scp -r remote_foldername srao@my_ipaddress:C:\srao\Users\Desktop
    i.e. from the remote folder to my_username_in_windows@ip_address:path to destination.
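
    scp running on the Linux box can't write to a Windows drive letter; the copy has to be started from the Windows side. PuTTY ships a command-line copy tool, pscp.exe, for exactly this. A hedged sketch to run in a Windows command prompt (host name and paths are placeholders):

      pscp -r srao@remote.example.com:/path/to/remote_foldername "C:\Users\srao\Desktop"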

    Read the article

  • Network on linux server is periodically down

    - by Fabian
    I have an old server running Fedora 4 that occasionally just stops responding via network for about an hour. This happens 1-2 times a week. Also no connection from the server itself to any other computer on the network is possible when it happens. The network settings and routes look fine. There are no unusual log messages and no unusual processes running at that time. If I restart the network or just do an ifconfig eth0 down & ifconfig eth0 up it works fine afterwards. I know that the server should be updated to a currently supported OS, but that is not really an easy option right now. Any ideas on how I could diagnose and fix that problem?
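
    Since the outage clears itself, it is worth capturing the interface state while it is happening; a hedged sketch of a watchdog cron job (gateway address, interface and log path are illustrative):

      #!/bin/bash
      # /usr/local/bin/netwatch.sh - run from cron every minute
      GW=192.168.1.1
      LOG=/var/log/netwatch.log

      if ! ping -c 2 -W 2 "$GW" > /dev/null 2>&1; then
          {
              date
              ifconfig eth0
              route -n
              arp -n
              dmesg | tail -n 20
              echo '---'
          } >> "$LOG"
      fi

    A crontab entry like "* * * * * /usr/local/bin/netwatch.sh" then leaves a trail of what the link, routes, ARP table and kernel log looked like at the moment the network dropped.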

    Read the article

  • How to allow users to transfer files to other users on linux

    - by Jon Bringhurst
    We have an environment of a few thousand users running applications on about 40 clusters ranging in size from 20 compute nodes to 98,000 compute nodes. Users on these systems generate massive files (sometimes 1PB) controlled by traditional unix permissions (ACLs usually aren't available or practical due to the specialized nature of the filesystem). We currently have a program called "give", which is a suid-root program that allows a user to "give" a file to another user when group permissions are insufficient. So, a user would type something like the following to give a file to another user: > give username-to-give-to filename-to-give ... The receiving user can then use a command called "take" (part of the give program) to receive the file: > take filename-to-receive The permissions of the file are then effectively transferred over to the receiving user. This program has been around for years and we'd like to revisit things from a security and functional point of view. Our current plan of action is to remove the bit rot in our current implementation of "give" and package it up as an open source app before we redeploy it into production. Does anyone have another method they use to transfer extremely large files between users when only traditional unix permissions are available?

    Read the article

  • Advanced merge directory tree with cp in Linux

    - by mtt
    I need to:
    - Copy all of a tree's folders (with all files, including hidden ones) under /sourcefolder/* to /destfolder/, preserving user privileges.
    - If there is a conflict with a file (a file with the same name exists in destfolder), then rename the file in destfolder following a standard rule, such as adding an "old" prefix to the filename (readme.txt becomes oldreadme.txt), and then copy the conflicting file from source to destination.
    - Conflicts between folders should be transparent: if the same directory exists in both sourcefolder and destfolder, preserve it and recursively copy its contents according to the rules above.
    I also need a .txt report that describes all files/folders added to destfolder and all files that were renamed. How can I accomplish this?
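
    cp alone can't express the rename-on-conflict rule, so a small script is needed; a minimal sketch of the rules above, assuming paths without embedded newlines (sourcefolder/destfolder are placeholders, and directory permissions are not copied in this sketch):

      #!/bin/bash
      SRC=/sourcefolder
      DST=/destfolder
      REPORT=/tmp/merge-report.txt
      : > "$REPORT"

      cd "$SRC" || exit 1
      find . -mindepth 1 | while IFS= read -r path; do
          target="$DST/${path#./}"
          if [ -d "$path" ]; then
              mkdir -p "$target"
          else
              if [ -e "$target" ]; then
                  dir=$(dirname "$target"); base=$(basename "$target")
                  mv "$target" "$dir/old$base"
                  echo "renamed: $target -> $dir/old$base" >> "$REPORT"
              fi
              cp -p "$path" "$target"     # -p keeps owner/perms (run as root)
              echo "added:   $target" >> "$REPORT"
          fi
      done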

    Read the article

  • Write permissions on uploaded files - Linux, Apache, PHP

    - by letseatfood
    I am working on a PHP script that transfers files using FTP functions. It has always worked on my production server (which is a hosting service). The development server I have just setup (I am a novice to servers) is Debian Lenny with Apache2, PHP5, and MySQL5. The file transfer works correctly, but once the file has been written to the server, it has permissions of 600. This makes it impossible for me to view the file (JPEG) in the web browser, as permission is denied. I have scoured the internet and even broken my server installation and reinstalled it trying to figure this out (which has been fun, nonetheless!). I know it is unwise to set 777 permissions on public accessible files, but even that will not solve the problem. The only thing that works is if I chmod 777 thefile.jpg after it has been transferred, which is not a working solution. I tried changing the owner of my site files to www-data per this post, but that also does not work. My user is mike, and it still does not work whether the owner of the files is mike or root. Would somebody point me in the right direction? Thanks! And, of course, let me know if I can clarify anything.
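
    The 600 mode usually comes from the FTP daemon's umask rather than from PHP itself, so the durable fix is in the FTP server's configuration (a umask of 022 makes uploads land as 644; the directive is local_umask for vsftpd and Umask for proftpd, whichever daemon is in use). As a stopgap, files already uploaded can be opened up in one pass. A hedged sketch, with the upload path as a placeholder:

      # One-off repair of files already uploaded with mode 600
      find /var/www/site/uploads -type f -perm 600 -exec chmod 644 {} +

      # Quick check of what the FTP daemon is writing now
      ls -l /var/www/site/uploads | head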

    Read the article

  • Linux: Alternative to rsync? (ie, scp with resume)

    - by Joernsn
    I've been using rsync to automatically send files from one box to another, which is great compared to scp, since it supports resuming. However, when resuming a very large file (10gb) rsync has to read both files and compare them, which is very slow. I don't need fancy error handling, just "scp with resume", so here's my question: Is there an alternative to rsync/scp, that supports resuming without having to read both source and destination files? I've read the manuals without finding anything I can use, please let me know if I've missed something. This is the rsync line I've been using: rsync -av --partial --progress --inplace SRC DST
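
    Two hedged options that resume without re-reading the data already on the destination (both assume reasonably recent versions of the tools): rsync's --append, which trusts the existing partial file and only sends the tail, and lftp's pget with -c over sftp:

      # rsync, but skip the delta/checksum pass for data already transferred
      rsync -av --partial --progress --append SRC DST

      # or lftp over sftp: -c resumes an interrupted transfer
      lftp -e "pget -c /remote/path/bigfile -o /local/path/bigfile; bye" sftp://user@host

    --append only works when the destination file is a truncated prefix of the source, which matches the "interrupted transfer of a large unchanging file" case described above.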

    Read the article

  • linux/shell: change a file's modify timestamp relatively?

    - by index
    My Canon camera produces files like IMG_1234.JPG and MVI_1234.AVI, and it also timestamps those files. Unfortunately, during a trip to another timezone, several cameras were used, one of which did not have the correct time zone set - a metadata mess. Now I would like to correct this (not the EXIF data, but the file's "modify" timestamp on disk). Proposed algorithm:
    1. read the file's modify date
    2. add a delta, i.e. hhmmss (preferred: change the timezone)
    3. write the new timestamp
    Unless someone knows a tool or a combination of tools that do the trick directly, maybe one could simplify the calculation using epoch time (seconds since the epoch) and whip up a shell script. Any help appreciated!
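
    With GNU date and touch this works without explicit epoch arithmetic, since date -r reads a file's mtime and touch -d accepts relative expressions; a minimal sketch shifting every matching file back two hours (the two-hour offset and the filename patterns are examples):

      #!/bin/bash
      # Shift the modify time of the affected files by -2 hours
      for f in IMG_*.JPG MVI_*.AVI; do
          touch -d "$(date -R -r "$f") - 2 hours" "$f"
      done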

    Read the article
