Search Results

Search found 20394 results on 816 pages for 'network filesystem'.

  • Why use multiple partitions on a rhel server?

    - by Jakobud
    I'm about to reformat and reinstall CentOS onto an old server. The server runs on a modest 30-node small business network and has a variety of responsibilities, including MySQL, a Samba share, DHCPd and SVN/Trac. The old sysadmin had this server set up with almost a dozen different partitions for various things. I'm trying to understand what the advantages of multiple partitions are, as opposed to just one filesystem at /. Speed? Flexibility? Security? It seems like if you misjudge the necessary size for any given partition and it fills up too fast, a sysadmin has to go in and expand the partition, etc. It seems like it would be easier if everything were just one flat / filesystem. But I'm sure there are some advantages I'm not aware of. The server is currently running a handful of HDDs raided to ~2TB (RAID 0).
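
    One commonly cited advantage is containment plus per-mount hardening: a runaway log or mail spool can only fill its own partition, and user-writable areas can be mounted with restrictive options. A minimal illustrative sketch (device names, sizes and the ext3 choice are hypothetical, not taken from the question):

        # Hypothetical /etc/fstab: separate partitions contain runaway growth
        # and allow per-mount security options
        /dev/sda1   /      ext3  defaults                1 1
        /dev/sda2   /var   ext3  defaults                1 2   # a full /var can't fill /
        /dev/sda3   /home  ext3  defaults,nosuid,nodev   1 2   # no setuid binaries here
        /dev/sda5   /tmp   ext3  defaults,noexec,nosuid  1 2   # nothing executes from /tmp

    The flip side is exactly the sizing problem described above, which is why many setups put LVM underneath so volumes can be grown without repartitioning.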

  • How to add Windows commands to the local shell of Xshell 4

    - by dylanninin
    Xshell is a very powerful tool for connecting to remote Unix/Linux computers over SSH, and it has some built-in commands that you can run from within Windows:

        Xshell:\> help
        Internal Commands:
          new:      Creates a new session.
          open:     Opens a session or the session dialog box.
          edit:     Opens the Session Property dialog box for a session.
          list:     Lists information of all available sessions. 'ls' and 'dir' do the same.
          cd:       Changes the current working directory.
          clear:    Clears the screen/address/command history.
          help:     Displays this help. '?' does the same.
          quit:     Quits Local Shell. 'exit' does the same.
          ssh:      Connects to a host using the SSH protocol.
          telnet:   Connects to a host using the TELNET protocol.
          rlogin:   Connects to a host using the RLOGIN protocol.
          sftp:     Connects to a host to transfer files securely.
          ftp:      Connects to a host to transfer files.
        External Commands:
          ipconfig: Configures TCP/IP network interfaces.
          ping:     Sends ICMP ECHO_REQUEST packets to network hosts.
          tracert:  Prints the route packets take to network host.
          netstat:  Displays current protocol statistics and current TCP/IP network connections.
          nslookup: Resolves a hostname to IP address.
        For more information, type 'help command' for each command. ex) help telnet

    But these commands are limited, so how can I add Windows commands to the local shell of Xshell 4?

  • No space left on device with encrypted disk that takes all space

    - by Yosef
    I use Ubuntu 11.04, and there's no space left on the device. The disk that takes up the space is encrypted (maybe it would be good to disable the encryption, but I don't know how). In a shell I get the message: No space left on device.

    I run df -i:

        Filesystem             Inodes  IUsed   IFree IUse% Mounted on
        /dev/sda1             3055616 602499 2453117   20% /
        none                   210161    890  209271    1% /dev
        none                   214789      8  214781    1% /dev/shm
        none                   214789     53  214736    1% /var/run
        none                   214789      3  214786    1% /var/lock
        /home/myuser/.Private 3055616 602499 2453117   20% /home/myuser

    Edit: When I run plain df:

        Filesystem            1K-blocks     Used Available Use% Mounted on
        /dev/sda1              48060296 45618928         0 100% /
        none                    1538340      684   1537656   1% /dev
        none                    1547596      808   1546788   1% /dev/shm
        none                    1547596      104   1547492   1% /var/run
        none                    1547596        0   1547596   0% /var/lock
        /home/myuser/.Private  48060296 45618928         0 100% /home/myuser

    Edit: I'm thinking about a few solutions, but I don't know which is better or how exactly to do them: enlarge the partition (I can't install GParted; there's no disk space left), or remove the encryption of the partition (I don't really need it).
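
    The inode table shows only 20% use, so this really is data (plus reserved blocks) filling the disk rather than inode exhaustion. A hedged sketch of ways to claw back room before resizing anything (standard Ubuntu tools; the tune2fs step assumes ext3/ext4 on /dev/sda1):

        # Clear the apt package cache; this often frees hundreds of MB
        sudo apt-get clean

        # Find the biggest directories to see where the space actually went
        sudo du -xh --max-depth=2 / 2>/dev/null | sort -rh | head -20

        # ext filesystems reserve ~5% of blocks for root by default; as an
        # emergency measure the reserve can be lowered (here to 1%)
        sudo tune2fs -m 1 /dev/sda1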

  • Missing over 100GB of Space on sda1 RHEL

    - by WifiGhost
    I have a server set up with a RAID 5 using (3) 500GB drives, plus 1 as a spare, so it's unused in the RAID. So in my mind I start out with 990GB with the RAID 5 in place. When looking at df or the built-in disk usage utility, I only see a total of about 882GB. How can I find where the 100+GB went, and how can I get it back? I've checked the RAID 5 BIOS and I see all the space. I've tried looking manually and through terminal commands, with no luck.

        Filesystem                 1K-blocks      Used Available Use% Mounted on
        /dev/mapper/vg_web-lv_root 838084192  48368700 747153060   7% /
        tmpfs                       12104644       592  12104052   1% /dev/shm
        /dev/sda1                     495844    121546    348698  26% /boot
        /dev/mapper/vg_web-lv_home  82569904    259136  78116468   1% /home

    The same with df -h:

        Filesystem                 Size  Used Avail Use% Mounted on
        /dev/mapper/vg_web-lv_root 800G   47G  713G   7% /
        tmpfs                       12G  592K   12G   1% /dev/shm
        /dev/sda1                  485M  119M  341M  26% /boot
        /dev/mapper/vg_web-lv_home  79G  254M   75G   1% /home
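
    Two usual suspects are worth ruling out before blaming the RAID: the GB (10^9) vs GiB (2^30) mismatch between what controllers advertise and what df -h prints (990 "marketing" GB is only about 922 GiB), and LVM extents that were never allocated to any logical volume. A hedged sketch for checking the second (volume group name taken from the df output above):

        # Summary: VSize vs. VFree shows space not assigned to any LV
        sudo vgs vg_web

        # More detail: look at the "Free  PE / Size" line
        sudo vgdisplay vg_web | grep -E 'VG Size|Alloc PE|Free  PE'

    If Free PE is large, that is the "missing" space, and lvextend can hand it to an existing volume.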

  • apache-user & root access

    - by ahmedshaikhm
    I want to develop a few PHP scripts that will invoke the following commands (etc.) using the exec() function:

        service network restart
        crontab -u root /xyz/abc/fjs/crontab

    The issue is that Apache executes the script as the apache user (I am on CentOS 5). Regardless of adding apache to wheel or any other group assignment (the good, the bad and the ugly), it does not run the commands mentioned above. These are my configurations.

    My /etc/sudoers:

        root    ALL=(ALL)       ALL
        apache  ALL=(ALL)       NOPASSWD: ALL
        %wheel  ALL=(ALL)       ALL
        %wheel  ALL=(ALL)       NOPASSWD: ALL

    As I've tried a couple of combinations of sudoers and httpd.conf, the recent httpd.conf looks something like this:

        User apache
        Group wheel

    My PHP script:

        exec("service network start", $a);
        print_r($a);
        exec("sudo -u root service network start", $a);
        print_r($a);

    Output:

        Array ( [0] => Bringing up loopback interface:  [FAILED]
                [1] => Bringing up interface eth0:  [FAILED]
                [2] => Bringing up interface eth0_1:  [FAILED]
                [3] => Bringing up interface eth1:  [FAILED] )
        Array ( [0] => Bringing up loopback interface:  [FAILED]
                [1] => Bringing up interface eth0:  [FAILED]
                [2] => Bringing up interface eth0_1:  [FAILED]
                [3] => Bringing up interface eth1:  [FAILED] )

    Without any surprise, when I invoke the network restart via SSH using a similar user to apache, the command executes successfully. It's all about accessing such commands via the HTTP protocol. I am sure cPanel/Plesk kinds of software use something like sudoers, and what I am trying to do is basically possible. But I need your help to understand which piece I am missing. Thanks a lot!
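
    One CentOS 5 gotcha worth checking (an assumption about this box, since the relevant line isn't shown above): the stock /etc/sudoers contains Defaults requiretty, which makes sudo refuse to run from any process without a terminal, such as Apache. A minimal sketch (edit with visudo; the command path is the usual CentOS location):

        # Exempt apache from requiretty and allow exactly one command
        Defaults:apache !requiretty
        apache ALL=(root) NOPASSWD: /sbin/service network restart

    Then call it through sudo in PHP, capturing stderr so failures are visible: exec("sudo /sbin/service network restart 2>&1", $out, $rc);. Granting NOPASSWD: ALL to apache, as in the config above, is also far more dangerous than whitelisting the two commands actually needed.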

  • Why is (free_space + used_space) != total_size in df? [migrated]

    - by Timothy Jones
    I have a ~2TB ext4 USB external disk which is about half full:

        $ df
        Filesystem     1K-blocks      Used Available Use% Mounted on
        /dev/sdc      1922860848 927384456 897800668  51% /media/big

    I'm wondering why the total size (1922860848) isn't the same as Used + Available (1825185124). From this answer I see that 5% of the disk might be reserved for root, but that would still only take the total used to 1921328166, which is still off. Is it related to some other filesystem overhead? In case it's relevant, lsof -n | grep deleted shows no deleted files on this disk, and there are no other filesystems mounted inside this one.
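
    The reserve doesn't have to be guessed at: ext4 records it in the superblock along with the raw block count, and whatever the arithmetic still doesn't cover is filesystem metadata (journal, inode tables, group descriptors). A hedged sketch of checking the real numbers (device name from the question):

        # Block size, total blocks, and the root-reserved block count
        sudo tune2fs -l /dev/sdc | grep -E 'Block size|Block count|Reserved block count'

        # Used + Available + (Reserved block count * Block size / 1024) should
        # come out close to df's 1K-block total; the remainder is metadata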

  • How to connect a VM running on an ESXi host to that host via a VMKernel NIC?

    - by Zac B
    Say I have an ESXi (5.0) host that runs a Linux distribution which hosts iSCSI targets, which contain the images for other VMs which the host will run. When it's used, I'll start the host first, then the iSCSI server, and then refresh all storage targets/HBAs in order to see the provided shares as online. I know it's a strange puzzle-box solution, but I was told to implement it. The ESXi host itself has a gigabit NIC which connects to the outside world. The guest OS (CentOS) supports VMXNET3, however, and if I can, I'd like to use its VMXNET3 NIC to host iSCSI for the ESXi host. How should I go about doing this? I went to create a new virtual network and selected "VMkernel", as it suggested that I use that type of network for SAN traffic, but it is apparently not set up for "self-hosted" SAN hosts, as the new network did not appear as an option to attach the CentOS box's VMXNET3 NIC to. How should I best connect an iSCSI host out to its "parent" ESXi host, if I need a) a 10Gb connection, and (optionally) b) a VMkernel network for it?
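
    VMs attach to virtual machine port groups, never to VMkernel ports directly, so the usual pattern is a single vSwitch carrying both: a VM port group for the CentOS guest's VMXNET3 NIC and a VMkernel port for the host's iSCSI initiator. A rough sketch from the ESXi 5.0 shell (switch name, port group names and the address are made up for illustration):

        # Internal vSwitch; no physical uplink is needed for host<->guest traffic
        esxcfg-vswitch -a vSwitch1

        # Port group to attach the CentOS VM's VMXNET3 NIC to
        esxcfg-vswitch -A "iSCSI-VM" vSwitch1

        # Port group plus VMkernel interface for the software iSCSI initiator
        esxcfg-vswitch -A "iSCSI-VMkernel" vSwitch1
        esxcfg-vmknic -a -i 192.168.50.1 -n 255.255.255.0 "iSCSI-VMkernel"

    Because both ports sit on the same vSwitch, the traffic never leaves the host and is not limited by the physical gigabit NIC, which covers the 10Gb requirement in practice.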

  • Out of disk space on 4GB partition yet it's only using 2GB

    - by Camsoft
    I'm running Ubuntu and have had a problem where the root partition has run out of disk space. When I perform df -h I get the following:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda6             4.6G  4.5G     0 100% /

    Yet there are only 2GB of files actually using up this partition. I then ran df -i and got the following:

        Filesystem            Inodes  IUsed  IFree IUse% Mounted on
        /dev/sda6             305824 118885 186939   39% /

    I have no idea what the -i flag does, but it clearly shows that only 39% is used. Can anyone explain where my disk space has gone?
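
    A common cause of df and du disagreeing like this is a process holding a deleted file open: the directory entry is gone (so du can't count it), but the blocks stay allocated until the process closes it. A quick hedged sketch using standard tools:

        # What the files add up to vs. what df reports
        sudo du -sxh /

        # Open-but-deleted files still pinning disk space (link count zero)
        sudo lsof +L1

    If a log-writing daemon shows up in that list, restarting it releases the space. (The -i flag shows inode usage, which is a separate resource from data blocks.)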

  • windows 2003 under Hyper-V - can't send/receive ping

    - by glaucon
    I've installed Windows 2003 x64 R2 SP2 under Hyper-V (the Windows 8 Pro edition). I have a NIC configured but I can't move any traffic on it. In particular, I can't send or receive pings.

    Scoreboard: There is a second VM running Ubuntu under the Windows 8 host which is able to send and receive pings from the host OS. When I try to ping from the Windows 2003 guest to the Windows 8 host I get 'Request Timed Out'. When I try to ping from the Windows 8 host to the Windows 2003 guest I get 'Reply from 192.168.10.107: Destination Host Unreachable'. There's no problem pinging from the Ubuntu guest to the Windows 8 host, and no problem pinging from the Windows 8 host to the Ubuntu guest.

    Environment: Integration Services are installed on Windows 2003. The Windows 2003 guest needs a static IP address of 192.168.10.15. (The guest's and host's ipconfig output were attached as screenshots, not reproduced here.)

    Event Logs: The only thing I can see in the event logs which (a) looks significant and (b) is not related to the lack of networking was also a screenshot; I'm not sure whether it's significant or not.

    Hyper-V and NICs: When the Windows 2003 guest was first booted it had no NIC; I subsequently added a 'Legacy Network Adapter' which I couldn't get Windows 2003 to recognise; I then removed that and added a standard 'Network Adapter', and at least on the surface this works... only it doesn't. The virtual network type is external. Although I've only mentioned ping, there's no other evidence of network activity either. 'Allow incoming echo request' is enabled on the Windows 2003 guest.

    HELP? What else should I look at or do to resolve this problem?

    EDIT 1: I should have said that I turned off the firewall on the W2003 server for a while and retested the pings; same result.

  • How to restore broken Ethernet functionality on Mac G5 running Mac OS 10.4.11 (Tiger)

    - by willc2
    I had a disk error that rendered my Mac unbootable. I repaired it with TechTool 4, but now networking does not work. Network Preferences reports that my Ethernet cable is unplugged. I know this is bogus, because when I boot from an emergency partition, networking works correctly. Furthermore, wireless networking is also broken, which I tested with a known-good Wi-Fi dongle. Whenever I try to change Network Port Configurations by creating a new one or renaming an existing one, I get this message in the console, in sets of two:

        Error - PortScanner - setDevice, device == nil!
        Error - PortScanner - setDevice, device == nil!

    When I try to invoke the Network Diagnostics app, it immediately crashes. My first thought was to reinstall Tiger with the Archive and Install method so I wouldn't have to reinstall all my applications, but I have lost my Tiger installer disk. My next thought is to buy Leopard for $107 on Amazon. If there is any way I can just repair my Tiger install, I would be happy to save that money; this is not my main machine and I am loath to put more money into it. How can I recover my network functionality?

    UPDATE: I found my Tiger install disk and tried an Archive and Install. It failed with an unhelpful error message along the lines of "Can't install, try again". I tried again but got the same error. My guess is that some corrupt or missing file in my user folder is preventing migration. I have a backup created with SuperDuper that is a bit out of date but will start up the machine (with functional networking). I would love to just copy over the file(s) that got messed up, but I don't even know where to look. What is the likely location of the system files that would cause the aforementioned symptoms?
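
    Corrupt network configuration plists can produce exactly this "cable unplugged on every interface" symptom, and they are plain files the OS rebuilds at boot. A hedged sketch (standard 10.4-era paths; move rather than delete so the step is reversible):

        # Move the network configuration aside; OS X recreates it at boot
        sudo mv /Library/Preferences/SystemConfiguration/NetworkInterfaces.plist ~/Desktop/
        sudo mv /Library/Preferences/SystemConfiguration/preferences.plist ~/Desktop/
        sudo reboot

    Since the SuperDuper backup boots with working networking, another low-risk option is copying its /Library/Preferences/SystemConfiguration/ folder over the broken one.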

  • Can't connect to public WiFi with MacBookPro at coffee shops and libraries

    - by Nathan Bowers
    The Problem: I can't connect to public, unencrypted WiFi at my local public library or Peet's Coffee.

    My Setup: Late 2006 MacBook Pro running 10.5.8. I have Parallels installed.

    It's supposed to work like this: 1) Connect to their unencrypted WiFi network. 2) Open a browser, which redirects you to their "enter password/agree to terms" page. 3) Browse normally. I can connect to the WiFi network, but when I try to authenticate I always get stuck in a redirect loop. It's been like this for a while, even before I upgraded to 10.5.8. I never have trouble with encrypted networks or regular open WiFi.

    What I've tried:
    - Disabling Parallels connections in Network Prefs. Superstition: somehow Parallels installed something in the network stack that's messing me up.
    - Pinging the IP address of the WiFi node I'm connected to. I can ping it, it's there, but I still get stuck in this authentication redirect loop.
    - Different browsers and different cookie and security settings. Even tried IE under Parallels. No dice.
    - Flushing the DNS cache.
    - Asking library and coffee shop employees for help. It didn't go well.

    My Question: Anybody else have this problem? What should I be looking for?
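
    Captive portals typically key their "authenticated" state to the client's MAC/IP lease and rely on DNS and cookie tricks, so stale state on either side can loop forever. A few hedged things to try from Terminal on 10.5 (en1 is usually the AirPort interface, but that's an assumption):

        # Force a fresh DHCP lease so the portal sees a "new" client
        sudo ipconfig set en1 DHCP

        # Flush cached DNS answers (10.5 syntax)
        dscacheutil -flushcache

        # Request a plain http:// page by IP (placeholder address) so the
        # portal's redirect fires without any cached DNS/HTTPS interference
        curl -v http://203.0.113.1/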

  • For kvm host images. GFS2 or OCFS2 with drbd8?

    - by yvess
    I want a shared filesystem on top of DRBD 8 on two nodes. The servers run Ubuntu 9.10. I googled a lot but couldn't find a clear trend as to what the web community prefers; it seems that OCFS2 is more used at the moment. Which filesystem is more reliable and faster, GFS2 or OCFS2? Is the Linux community going more towards GFS2 or OCFS2? Which of the two is better supported by Ubuntu 9.10? Are there better (or more common) alternatives?
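
    Whichever cluster filesystem wins, DRBD itself must be told that both nodes may be primary simultaneously; a default single-primary resource will refuse the second mount. A minimal sketch of the relevant drbd.conf fragment (resource name and the split-brain policies are illustrative choices, not recommendations):

        resource r0 {
          net {
            allow-two-primaries;                 # required for GFS2/OCFS2
            after-sb-0pri discard-zero-changes;  # split-brain handling matters
            after-sb-1pri discard-secondary;     # far more in dual-primary mode
          }
          startup {
            become-primary-on both;
          }
        }

    Both filesystems also need a cluster stack on top (a DLM-based one for GFS2, O2CB for OCFS2), and in practice that often decides the question: pick whichever stack your distribution packages best.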

  • Retrieving a specific value from “df -h” using shell

    - by diegodias
    When I use df -h, I get the following output:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup00-LogVol00
                               59G  2.2G   54G   4% /
        /dev/sda1             122M   38M   78M  33% /boot
        tmpfs                 1.1G     0  1.1G   0% /dev/shm
        10.10.0.105:/somepath  11T  8.4T  2.1T  81% /storage4
        10.11.0.101:/somepath  15T  8.9T  5.9T  61% /storage1
        /dev/mapper/patha     5.0T  255G  4.8T   5% /storage5_vol0
        /dev/mapper/pathb     5.0T  195G  4.9T   4% /storage5_vol1
        /dev/mapper/pathc     5.0T  608G  4.5T  12% /storage5_vol2

    I want to write a script that gets the value of the Avail column for a specific storage. I used to use:

        df -k /storage_name | tail -1 | awk '{print $3}'

    But the Filesystem column can have a value or not (long device names wrap onto their own line), which changes the field in my script from $3 to $4. How can I get Avail in a single command line, even if there are no values in the previous columns?
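
    df's POSIX mode exists for exactly this problem: with -P, each filesystem is guaranteed to stay on one line, so the fields never shift. A short sketch (mount point borrowed from the output above):

        # -P forbids line wrapping, so Avail is always field 4
        df -Pk /storage4 | awk 'NR==2 {print $4}'

    NR==2 selects the data row directly, which also drops the need for the tail -1 stage of the pipeline.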

  • How to clean up an unprocessed orphan inode list?

    - by bmk
    I tried to remount a formerly read-only mounted filesystem read-write:

        mount -o remount,rw /mountpoint

    Unfortunately it did not work:

        mount: /mountpoint not mounted already, or bad option

    dmesg reports:

        [2570543.520449] EXT4-fs (dm-0): Couldn't remount RDWR because of unprocessed orphan inode list.  Please umount/remount instead

    A umount does not work either:

        umount /mountpoint
        umount: /mountpoint: device is busy.
                (In some cases useful info about processes that use
                 the device is found by lsof(8) or fuser(1))

    Unfortunately, neither lsof nor fuser shows any process accessing anything located under the mount point. So, how can I clean up this unprocessed orphan inode list so I can mount the filesystem read-write again without rebooting the computer?
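
    The orphan list is replayed by e2fsck (or by the kernel on a clean read-write mount), so the usual path is: find whatever is really pinning the mount, unmount, fsck, mount again. A hedged sketch (device name taken from the dmesg line; only run e2fsck once nothing holds the device):

        # Check by device as well as by path; NFS exports, loop devices and
        # swapfiles can keep a mount busy without appearing in plain lsof
        sudo fuser -vm /mountpoint

        # Lazy unmount detaches the tree now and finishes once it falls idle
        sudo umount -l /mountpoint

        # Replay the journal, process the orphan inode list, then remount
        sudo e2fsck -f /dev/dm-0
        sudo mount /dev/dm-0 /mountpoint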

  • RHEL5: Can't create sparse file bigger than 256GB in tmpfs

    - by John Kugelman
    /var/log/lastlog gets written to when you log in. The size of this file is based on the largest UID in the system: the larger the maximum UID, the larger this file is. Thankfully it's a sparse file, so the size on disk is much smaller than the size ls reports (ls -s reports the size on disk).

    On our system we're authenticating against an Active Directory server, and the UIDs users are assigned end up being really, really large. Like, say, UID 900,000,000 for the first AD user, 900,000,001 for the second, etc. That's strange but should be okay. It results in /var/log/lastlog being huuuuuge, though; once an AD user logs in, lastlog shows up as 280GB. Its real size is still small, thankfully.

    This works fine when /var/log/lastlog is stored on the hard drive on an ext3 filesystem. It breaks, however, if lastlog is stored in a tmpfs filesystem. Then it appears that the max file size for any file on the tmpfs is 256GB, so the sessreg program errors out trying to write to lastlog. Where is this 256GB limit coming from, and how can I increase it?

    As a simple test for creating large sparse files I've been doing:

        dd if=/dev/zero of=sparse-file bs=1 count=1 seek=300GB

    I've tried Googling for "tmpfs max file size", "256GB filesystem limit", "linux max file size", things like that. I haven't been able to find much. The only mention of 256GB I can find is that ext3 filesystems with 2KB blocks are limited to 256GB files. But our hard drives are formatted with 4K blocks, so that doesn't seem to be it; not to mention this is happening in a tmpfs mounted ON TOP of the hard drive, so the ext3 partition shouldn't be a factor.

    This is all happening on a 64-bit Red Hat Enterprise Linux 5.4 system. Interestingly, on my personal development machine, which is a 32-bit Fedora Core 6 box, I can create 300GB+ files in tmpfs filesystems no problem. On the RHEL 5.4 systems it is no go.
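
    Incidentally, the test itself can be sped up enormously: with bs=1, dd issues one-byte writes, whereas count=0 with a large block size just truncates the file out to the seek offset, producing the same sparse file instantly:

        # 307200 one-MiB blocks = a 300GiB sparse file, created instantly
        dd if=/dev/zero of=sparse-file bs=1M count=0 seek=307200

        # Apparent size vs. blocks actually allocated
        ls -lsh sparse-file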

  • Unable to get squid working for remote users

    - by Sean
    I am trying to set up Squid 3.2.4, but I have not been able to get it working for remote users. It works fine locally. I'm unable to figure out what I am doing wrong...

        http_port 3128 transparent ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/usr/share/ssl-cert/myCA.pem

        refresh_pattern ^ftp:             1440  20%  10080
        refresh_pattern ^gopher:          1440   0%   1440
        refresh_pattern -i (/cgi-bin/|\?)    0   0%      0
        refresh_pattern .                    0  20%   4320

        acl localnet src 10.0.0.0/8      # RFC 1918 possible internal network
        acl localnet src 172.16.0.0/12   # RFC 1918 possible internal network
        acl localnet src 192.168.0.0/16  # RFC 1918 possible internal network
        acl localnet src fc00::/7        # RFC 4193 local private network range
        acl localnet src fe80::/10       # RFC 4291 link-local (directly plugged) machines

        acl SSL_ports port 443
        acl Safe_ports port 80           # http
        acl Safe_ports port 21           # ftp
        acl Safe_ports port 443          # https
        acl Safe_ports port 70           # gopher
        acl Safe_ports port 210          # wais
        acl Safe_ports port 1025-65535   # unregistered ports
        acl Safe_ports port 280          # http-mgmt
        acl Safe_ports port 488          # gss-http
        acl Safe_ports port 591          # filemaker
        acl Safe_ports port 777          # multiling http
        acl CONNECT method CONNECT

        http_access allow manager localhost
        http_access deny manager
        http_access deny !Safe_ports
        http_access allow localhost
        http_access allow localnet
        http_access allow all

        cache deny all
        via off
        forwarded_for off

        header_access From deny all
        header_access Server deny all
        header_access WWW-Authenticate deny all
        header_access Link deny all
        header_access Cache-Control deny all
        header_access Proxy-Connection deny all
        header_access X-Cache deny all
        header_access X-Cache-Lookup deny all
        header_access Via deny all
        header_access Forwarded-For deny all
        header_access X-Forwarded-For deny all
        header_access Pragma deny all
        header_access Keep-Alive deny all

        acl ip1 localip 1.1.1.90
        acl ip2 localip 1.1.1.91
        acl ip3 localip 1.1.1.92
        acl ip4 localip 1.1.1.93
        acl ip5 localip 1.1.1.94

        tcp_outgoing_address 1.1.1.90 ip1
        tcp_outgoing_address 1.1.1.91 ip2
        tcp_outgoing_address 1.1.1.92 ip3
        tcp_outgoing_address 1.1.1.93 ip4
        tcp_outgoing_address 1.1.1.94 ip5
        tcp_outgoing_address 1.1.1.90
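
    One likely culprit (an inference from the config above, which is otherwise permissive): the only listening port is marked transparent, and from Squid 3.2 onward an intercepted port deliberately rejects ordinary forward-proxy requests, so only locally redirected traffic can use it. A hedged fix is to give each role its own port:

        # Plain forward-proxy port for remote users who set the proxy explicitly
        http_port 3128

        # Separate port for intercepted/redirected traffic
        # (3.2 spells it "intercept"; "transparent" is the older synonym)
        http_port 3129 intercept ssl-bump generate-host-certificates=on \
            dynamic_cert_mem_cache_size=4MB cert=/usr/share/ssl-cert/myCA.pem

    Also worth knowing: header_access was split into request_header_access/reply_header_access in Squid 3.x, so those lines may be rejected at startup; squid -k parse will confirm.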

  • How to get my W2003-server (back) into the web (after setting up bridged networking)

    - by MBaas
    I have recently set up VirtualBox on a W2003 server (which is also used as a webserver, accessed from the web). My VirtualBox VM worked nicely, but then I wanted more: I wanted the VM to appear in the intranet like any ordinary PC. I was advised to set up bridged networking as opposed to NAT. I did so, and in the server's network connections I have bridged the LAN connection and the "VirtualBox Host-Only Network" (yes, it says "host-only network", but I assure you that VirtualBox networking is configured to use a network bridge). So now my VM is visible in the intranet and has web access, and the server can also access the web. The only problem that came up is that the server is no longer accessible from the web. I've traced an HTTP request and it says "Can't connect to *:80 (connect: No route to host)". So maybe something in the router's config needs to be adjusted (yeah, well, the server's IP address changed from 192.168.1.199 to ...198). So I went into the router config, reviewed port forwarding for port 80 and adjusted the IP there, but it still didn't work. Unsure whether it was a router problem or rather something in the server's config, I set up a "demilitarized zone" in the router and put the server into it. (My understanding is that this would put the server straight onto the web...) But the result of the HTTP requests is still the same :(

  • Set up WLAN in 3-level house

    - by Balint Erdi
    I'm having a hard time setting up the network in our house. It has three levels (basement, ground floor, first level). The WLAN is set up by an ASUS RT-N12 router which provides perfect coverage for the ground floor and the basement. However, I set up my "home office" in the basement, where the signal barely arrived. So I purchased a TP-Link TL-WA901ND (300 Mbps) access point, which I set up in the other corner of the ground floor to extend the ASUS router's range, using the AP's Repeater mode. The distance between my computer and the TP-Link AP is 6-7 meters. There is a staircase going down from the ground floor to the basement, so there are no solid walls between the computer and the AP. This setup mostly works (I am writing this from the basement), but it is not reliable (the signal strength sometimes goes down to ~40% of the max), so I wonder if I am doing it correctly or if there is a better way. (Screenshots of the router's and the AP's dashboard screens were attached but are not reproduced here.) Any comments on what I am doing wrong or hints for improvement are appreciated. Thank you.

    UPDATE: I tried one more thing: setting up the TP-Link AP in Access Point mode. That way, I can make it use a different SSID. I enabled WDS/Bridge so that it extends the range of the ASUS router (see screenshot). That does not work either: if I connect to the network set up by the TP-Link device (PELSTER-2), I can't reach the external network (the Internet). It seems the problem always comes back to this: the TP-Link does not have access to the external network, whatever its mode of operation.

  • Determining a realistic measure of requests per second for a web server

    - by Don
    I'm setting up an nginx stack and optimizing the configuration before going live. Running ab to stress test the machine, I was disappointed to see things topping out at 150 requests per second, with a significant number of requests taking over 1 second to return. Oddly, the machine itself wasn't even breathing hard. I finally thought to ping the box and saw ping times around 100-125 ms (the machine, to my surprise, is across the country). So it seems like network latency is dominating my testing. Running the same tests from a machine on the same network as the server (ping times < 1 ms), I see 5000 requests per second, which is more in line with what I expected from the machine. But this got me thinking: how do I determine and report a "realistic" measure of requests per second for a web server? You always see claims about performance, but shouldn't network latency be taken into consideration? Sure, I can serve 5000 requests per second to a machine next to the server, but not to a machine across the country. If I have a lot of slow connections, they will eventually impact my server's performance, right? Or am I thinking about this all wrong? Forgive me if this is network engineering 101 stuff; I'm a developer by trade.

    Update: Edited for clarity.
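
    The arithmetic behind this is Little's Law: throughput ≈ concurrency / latency. A single connection with a ~120 ms round trip can never exceed about 8 requests per second no matter how fast the server is, so a low-concurrency test over a slow link mostly measures the wire. Raising the number of requests in flight restores a server-bound measurement; a hedged sketch (URL is a placeholder):

        # ab defaults to one request at a time; with 200 in flight, a 120ms
        # RTT alone caps throughput at roughly 200/0.12 = ~1666 req/s, so
        # the server becomes the bottleneck again
        ab -n 20000 -c 200 http://example.com/

    A "realistic" published figure usually states both numbers: requests per second at a given concurrency, plus the latency distribution.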

  • Format as NTFS without Journal

    - by palswim
    I have a flash drive that I'd like to format for use in Windows. I would like support for symbolic links, so I can't use FAT/FAT32/exFAT. I would prefer to use the ext4 filesystem with journaling disabled, via the Ext2Fsd filesystem driver, but have (so far) found that I can't make soft links across filesystems that Windows will read, that Ext2Fsd has an annoying bug where it always mounts partitions as read-only and has problems resuming from sleep, and that some programs have problems writing to the partition even after manually configuring Ext2Fsd to allow writes. So I would like to use NTFS for the flash drive, but disable the journaling feature (which causes extra writes), if possible. How can I do this?
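
    For what it's worth (general NTFS behaviour, not specific to any one Windows version): the NTFS metadata journal ($LogFile) is integral to the filesystem and cannot be switched off; the only journal a user can remove is the optional USN change journal, a separate, higher-level change log. A sketch, assuming drive letter F: for the flash drive:

        :: Check whether a USN change journal exists on the volume
        fsutil usn queryjournal F:

        :: Remove it (the NTFS $LogFile journaling still remains)
        fsutil usn deletejournal /d F: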

  • Finding cause of TCP retransmission within a LAN

    - by Surreal
    Hello, denizens of Server Fault. I have an irritating problem with a LAN of about 100 computers, 2 Windows domain servers, and 12 VoIP phones. Since their installation around a year ago, every week or so we notice a VoIP phone resetting itself, occasionally in the middle of a call. Simultaneously there are often signs of temporary loss of connection on computers: freezes in Explorer while accessing network shares, errors in our administration software due to loss of connection to the database server.

    I have been doing some Wireshark monitoring on the connection between the VoIP PBX and the rest of the network. Wireshark picks up a clump of retransmitted TCP packets at the times when we record phone restarts. The Wireshark log shows about 2 clusters of retransmissions a day, ranging from 5 packets to hundreds. Those in each cluster are mainly between the PBX and some set of the VoIP phones, but not always the same set. Often retransmissions at the same time are to phones connected to the same switch, but sometimes retransmissions occur together to phones at opposite ends of the network. There are usually some coincident retransmissions in passing TCP traffic, for example between client machines and the file servers.

    The spikes in retransmissions and phone resets do not correlate well with when the network is heavily loaded. They seem to occur slightly more during the day, but most happen in the evening, when traffic should be decreasing. They occur reasonably often late at night, when most computers are turned off and traffic should be lowest.

    Do you have any ideas that might help diagnose the cause of problems like this? One thing I have not yet tried, but should have, is updating the firmware of all the switches.
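
    To narrow down where the loss happens, Wireshark's analysis flags can be tabulated from the command line, which makes it practical to leave a capture running and see which hosts share each burst. A hedged tshark sketch (capture filename is illustrative; older tshark versions take -R instead of -Y):

        # Count retransmissions per src/dst pair in a saved capture
        tshark -r pbx-capture.pcap -Y tcp.analysis.retransmission \
               -T fields -e ip.src -e ip.dst | sort | uniq -c | sort -rn | head

        # Duplicate ACKs suggest loss in transit rather than a stalled endpoint
        tshark -r pbx-capture.pcap -Y tcp.analysis.duplicate_ack | wc -l

    Bursts hitting phones on different switches at once tend to implicate the inter-switch uplinks (or a duplex mismatch), which the switches' own port error counters would confirm.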

  • Skipping hardlinks when using TSM Backup

    - by Lars Haugseth
    We need to back up a filesystem with lots of hardlinks. Since there are several hardlinks for each "true" file, we would like to skip all the hardlinks when backing up the filesystem, to avoid n exact copies of each file. The backup is done using Tivoli Storage Manager Backup, and we've been unable to get it to treat hardlinks as anything other than separate files to be backed up alongside each other. In case it's relevant for possible solutions, I'd like to note that it's possible to tell a hardlink from a proper file by the filename:

        foobarbaz-123.ext     # file
        foobarbaz-123-1.ext   # hardlink
        foobarbaz-123-2.ext   # hardlink
        barbazfoo-456.ext     # file
        barbazfoo-456-1.ext   # hardlink
        barbazfoo-456-2.ext   # hardlink
        barbazfoo-456-3.ext   # hardlink

    That is, all hardlinks have two hyphens in the filename, whereas proper files have just one. The server is running Ubuntu Linux, and the files are situated on a GFS volume on our SAN.
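
    Given that naming convention, TSM's include-exclude processing can do the filtering, since a pattern containing two literal hyphens only matches the hardlink names. A hedged sketch for the client option file (the path is a placeholder, and note the caveat: any "true" file whose name happens to contain two hyphens would be skipped too):

        * Skip hardlinks: any name with at least two hyphens, at any depth
        exclude /san/gfsvol/.../*-*-*.ext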

  • Folder Redirection/Offline Files on Win 7 | Folders are empty when not connected to the domain

    - by Matt
    I've been struggling with this issue for days and cannot seem to find anyone else with a similar issue. I will note first that I have tried using both roaming profiles and the group policy setting to force local profiles... now onto the problem.

    What I am trying to do is have my teachers' accounts log onto their laptops using their domain credentials. Once logged in, their Desktop and Documents folders are redirected to a network share, //server/redirects/documents/. This works fine when the computer is connected to the domain network: Offline File Sync works great and caches the files locally. However, this all breaks down when the user logs in while the computer is no longer connected to the domain network. When the user logs in, the Desktop and Documents folders are empty. What I find very odd is that if I manually go to the offline files folder, all of the files are there; the group policy folder redirection simply does not resolve to the offline folder. Is this by design? (It does not work like this on Vista: I have the exact same group policy settings set on Vista machines and it works flawlessly.)

    Additional info: When I look at the event log, there are no folder redirection events at all when the user logs in while not connected to the network. In addition, a new profile is created in c:/users/username.domain.00x; every log-in creates an additional profile. There is an event stating that registry files were still in use. Any help would be appreciated.

  • Linux - Help, I'm running out of inodes!

    - by Rory McCann
    I have a filesystem that has lots of small files. Currently about 80% of inodes are used (I checked with df -i), but only 60% of disk space is used. How can I 'increase' the number of inodes? If it were just disk space, I know that I could just increase the size of the disk (this disk is on LVM). If I increase the size of the disk, will that give me more inodes? I'm willing to grow the filesystem this disk is on, if that would help.
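
    On ext2/3/4 the inode count is fixed per block group when the filesystem is created, so there are only two levers: grow the filesystem (each added block group brings its own inodes along), or re-create it with a denser inode ratio (mkfs's -i bytes-per-inode option). A hedged sketch of the LVM grow path (volume names are placeholders):

        # Grow the LV, then the filesystem; every block group that resize2fs
        # adds carries a fresh slice of inodes with it
        sudo lvextend -L +10G /dev/vg0/smallfiles
        sudo resize2fs /dev/vg0/smallfiles

        # Confirm the new headroom
        df -i /mountpoint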

  • Virtualization and best hardware sharing scenario for me

    - by azera
    Hello. Following this thread on Super User, I now want to start installing all my VMs on the hardware. As a reminder, I have a (powerful enough) server on which I want to install 3 OSes: a Debian (general dev testbed purposes), an IPCop (network control/firewall) and a FreeNAS (local network file sharing). I'm wondering which scenario would be the best for me, and whether I will be able to share the hardware to do what I want. Either:

    a) install a hypervisor like the free VMware ESXi and all three VMs in it, or
    b) install Debian, with the other two running inside it under VirtualBox.

    My needs are:
    - the IPCop should handle all network traffic to the internet, meaning all traffic from my main computer but also all traffic from the other two VMs;
    - the FreeNAS shares should be accessible from the other two VMs and from my main computer too;
    - I don't really care about the Debian access; I only need to reach it from my main computer, not from the other VMs.

    Will I need to install additional network cards for each VM, or can they all share the same one happily? (Right now I have two: one linking the server to my router [which only IPCop is going to use] and one linking it to my switch [which I would like all three to use].) As for hard drives, I was going to use 1 hard drive cut into 3 partitions to install all three OSes, then add the FreeNAS drives to that; is that correct? Thanks a lot to anyone who can help me; this is kind of a vast area and I'm not sure which way to go at all.
