Search Results

Search found 13682 results on 548 pages for 'move constructor'.


  • How do I tell mdadm to start using a missing disk in my RAID5 array again?

    - by Jon Cage
    I have a 3-disk RAID array running in my Ubuntu server. This has been running flawlessly for over a year, but I was recently forced to strip, move and rebuild the machine. When I had it all back together and ran up Ubuntu, I had some problems with disks not being detected. A couple of reboots later and I'd solved that issue. The problem now is that the 3-disk array is showing up as degraded every time I boot up. For some reason it seems that Ubuntu has made a new array and added the missing disk to it. I've tried stopping the new 1-disk array and adding the missing disk, but I'm struggling. On startup I get this:

      root@uberserver:~# cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md_d1 : inactive sdf1[2](S)
            1953511936 blocks
      md0 : active raid5 sdg1[2] sdc1[3] sdb1[1] sdh1[0]
            2930279808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

    I have two RAID arrays, and the one that normally pops up as md1 isn't appearing. I read somewhere that calling mdadm --assemble --scan would re-assemble the missing array, so I've tried first stopping the existing array that Ubuntu started:

      root@uberserver:~# mdadm --stop /dev/md_d1
      mdadm: stopped /dev/md_d1

    ...and then tried to tell Ubuntu to pick the disks up again:

      root@uberserver:~# mdadm --assemble --scan
      mdadm: /dev/md/1 has been started with 2 drives (out of 3).

    So that's started md1 again, but it's not picking up the disk from md_d1:

      root@uberserver:~# cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md1 : active raid5 sde1[1] sdf1[2]
            3907023872 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]
      md_d1 : inactive sdd1[0](S)
            1953511936 blocks
      md0 : active raid5 sdg1[2] sdc1[3] sdb1[1] sdh1[0]
            2930279808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

    What's going wrong here? Why is Ubuntu trying to pick up sdd into a different array? How do I get that missing disk back home again? [Edit] - After adding md1 to mdadm.conf it now tries to mount the array on startup, but it's still missing the disk. If I tell it to try and assemble automatically, I get the impression it knows it needs sdd but can't use it:

      root@uberserver:~# mdadm --assemble --scan
      /dev/md1: File exists
      mdadm: /dev/md/1 already active, cannot restart it!
      mdadm: /dev/md/1 needed for /dev/sdd1...

    What am I missing?
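    Purely as a sketch of the kind of recovery sequence that is often tried here - assuming sdd1 really is the stray third member of md1 and its data is intact (device names are taken from the output above, so verify them first):

      mdadm --stop /dev/md_d1                          # free the member from the stray 1-disk array
      mdadm --examine /dev/sdd1                        # confirm which array/UUID the superblock says it belongs to
      mdadm /dev/md1 --re-add /dev/sdd1                # put it back into the degraded array (try --add if --re-add refuses)
      mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # record the arrays so the next boot assembles them the same way
      update-initramfs -u

    Watching cat /proc/mdstat during the resync shows whether the re-add actually took.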


  • routing through multiple subinterfaces in debian

    - by Kstro21
    My question is as simple as the title. I have a Debian 6 box with 2 NICs and 3 different subnets on a single interface, just like this:

      auto eth0
      iface eth0 inet static
          address 192.168.106.254
          netmask 255.255.255.0
      auto eth0:0
      iface eth0:0 inet static
          address 172.19.221.81
          netmask 255.255.255.248
      auto eth0:1
      iface eth0:1 inet static
          address 192.168.254.1
          netmask 255.255.255.248
      auto eth1
      iface eth1 inet static
          address 172.19.216.3
          netmask 255.255.255.0
          gateway 172.19.216.13

    eth0 is connected to a switch with 3 different VLANs, eth1 is connected to a router. There are no iptables DROP rules, so all traffic is allowed. Now, passing traffic through eth0 is OK, and passing traffic through eth0:0 is OK, but passing traffic through eth0:1 is not working. I can ping the IP address of that subinterface from a PC that uses it as its default gateway, but I can't reach the servers in the subnet of the eth1 interface - the traffic is not passing, even when I set iptables to log all the traffic in the FORWARD chain and I can see the traffic there; it is still not really getting through. The funny thing is that I can do it the other way around, i.e. passing from eth1 to eth0:1 works - RDP, telnet, ping, etc. By doing some work with iptables I managed to pass some traffic from eth0:1 to eth1; the rules look like this:

      iptables -t nat -A PREROUTING -d 192.168.254.1/32 -p tcp -m multiport --dports 25,110,5269 -j DNAT --to-destination 172.19.216.1
      iptables -t nat -A PREROUTING -d 192.168.254.1/32 -p udp -m udp --dport 53 -j DNAT --to-destination 172.19.216.9
      iptables -t nat -A PREROUTING -d 192.168.254.1/32 -p tcp -m tcp --dport 21 -j DNAT --to-destination 172.19.216.11
      iptables -t nat -A POSTROUTING -s 172.19.216.0/24 -d 172.19.221.80/29 -j SNAT --to-source 172.19.221.81
      iptables -t nat -A POSTROUTING -s 172.19.216.0/24 -d 192.168.254.0/29 -j SNAT --to-source 192.168.254.1
      iptables -t nat -A POSTROUTING -s 172.19.216.0/24 -o eth0 -j SNAT --to-source 192.168.106.254

    Doing this, it works, but it is really a headache to have to map each port to a server - imagine if I move the service to another server. So now I have doubts: can Debian route through multiple subinterfaces? Is there a limit to this? If not, what am I doing wrong, given that I have the same setup with other subnets and it works OK? Without the iptables rules in the nat table it doesn't work. Thanks, and I hope for good comments/answers.
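    The symptom described - forwarded packets visible in the FORWARD chain, yet connections only working once per-port DNAT/SNAT is added - is the sort of thing usually checked against IP forwarding and reverse-path filtering first. A diagnostic sketch only, not a confirmed fix for this particular setup:

      sysctl net.ipv4.ip_forward                        # must be 1 for the box to route at all
      sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth0.rp_filter
      sysctl -w net.ipv4.conf.all.rp_filter=0           # temporarily relax strict reverse-path filtering
      sysctl -w net.ipv4.conf.eth0.rp_filter=0          # (persist in /etc/sysctl.conf only if it turns out to matter)
      tcpdump -ni eth1 net 192.168.254.0/29             # do replies toward the eth0:1 subnet ever leave/return on eth1?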


  • $PATH is driving me nuts

    - by Chris4d
    OK, apologies if this is something dumb, but I'm running out of ideas. Goal: prepend /usr/local/bin to $PATH. Problem: $PATH won't do what I want or expect. How I got here: I want to start learning to program, so I'm getting comfortable messing around under the hood, but I don't have a lot of experience. I installed the fish shell (because it's friendly!) using Homebrew and set it as my default shell (under System Prefs > Users & Groups > Advanced). At some point, I ran brew doctor to see if my installs were all kosher, and it suggested I move /usr/local/bin to the front of $PATH so that I could use my installation of git rather than the system copy. Fine - but between path_helper and fish, something was happening to $PATH that was out of my control, and I could never get the paths arranged in the right way. Environment: OS X 10.8.2, upgraded from 10.7ish, with Xcode and devtools installed, plus X11, Homebrew, and fish. More info: I've set my user's default shell back to bash, and tried a variety of shells through Terminal.app - bash, fish, sh. I moved /usr/local/bin to the top of /etc/paths but it didn't change anything. I looked through the various config.fish files and commented out stuff that might mess with $PATH; it didn't help. I have the following files in /etc/paths.d/:

      ./10-homebrew    containing /usr/local/bin
      ./20-fish        containing /usr/local/Cellar/fish/1.23.1/bin
      ./40-XQuartz     containing /opt/X11/bin

    I added set -x to my profile, and when I start Terminal.app I get:

      Last login: Mon Oct 1 13:31:06 on ttys000
      + '[' -x /usr/libexec/path_helper ']'
      + eval '/usr/libexec/path_helper -s'
      ++ /usr/libexec/path_helper -s
      PATH="/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/Cellar/fish/1.23.1/bin:/opt/X11/bin"; export PATH;
      + '[' /bin/bash '!=' no ']'
      + '[' -r /etc/bashrc ']'
      + . /etc/bashrc
      ++ '[' -z '\s-\v\$ ' ']'
      ++ PS1='\h:\W \u\$ '
      ++ shopt -s checkwinsize
      ++ '[' Apple_Terminal == Apple_Terminal ']'
      ++ '[' -z '' ']'
      ++ PROMPT_COMMAND='update_terminal_cwd; '
      ++ update_terminal_cwd
      ++ local 'SEARCH= '
      ++ local REPLACE=%20
      ++ local PWD_URL=file://Chriss-iMac.local/Users/c4
      ++ printf '\e]7;%s\a' file://Chriss-iMac.local/Users/c4
      Chriss-iMac:~ c4$

    So it looks like path_helper runs, but then running echo $PATH nets me /usr/bin:/bin:/usr/sbin:/sbin. So, it looks like path_helper isn't even doing what it's supposed to anymore? I'm sure there is some well-defined behavior here that I don't understand, or I borked something while trying to fix it. Please help!
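    For what it's worth, a minimal way to force the desired ordering regardless of what path_helper produces is to prepend the directory again at the very end of the per-user startup file; the file names below are the usual defaults and are assumptions about this particular setup:

      # bash: append to ~/.bash_profile (read after /etc/profile, which is where path_helper runs)
      export PATH="/usr/local/bin:$PATH"

      # fish: the equivalent line in ~/.config/fish/config.fish would be
      #   set -gx PATH /usr/local/bin $PATH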


  • How to set up a file server in a restricted corporate environment

    - by Emilio M Bumachar
    I work in a big corporation, and the disk space my team gets on the corporate file server is so low that I am considering turning my work PC into a file server. I ask this community for links to tutorials, software suggestions, and advice in general about how to set it up. My machine is an Intel Core2Duo E7500 @ 3GHz with 3 GB of RAM, running Windows XP Service Pack 3. Upgrading, formatting or installing another OS is out of the question. But I do have Administrator privileges on the PC, and I can install programs (at least for now). A lot of security software I don't even know about is and must remain installed. But I only need communication within the corporate network, which is not restricted. People have usernames (logins) on the corporate network, and I need to use them to restrict access. Simply put, I have a list of logins of team members, and only people in the list should access the files. I have about 150 GB of free disk space. I'm thinking of allocating 100 GB to the team's shared files. I plan monthly backups on machines of co-workers with the same configuration, but automation of backups is a nice, unnecessary feature: it's totally acceptable for me to manually copy the contents to a different machine once a month. Uptime is important, as everyone would use these files in their daily work. I have experience as a Python and C programmer, but no experience whatsoever as a sysadmin, and almost nothing of my programming experience is network programming. I'm a complete beginner in this. Thanks in advance for any help.

    EDIT: I honestly appreciate all the warnings, I really do, but what I plan to make available is mostly stuff that is now solely on DVDs just for space reasons. It's 'daily work' to read them, but 'daily work write' files will remain on the corporate server. As for the importance of uptime, I think I overstated it: a few outages are OK, it's already an improvement over getting the DVDs. As for policy, my manager is kind of on my side, and I will confirm that before making my move. As for getting more space through the proper channels, well, that was Plan A, and it's still on the table... but I don't have much hope. I'm not as "core business" as I'd like.


  • SQL Clustering on Hyper-V - is a cluster within a cluster a benefit?

    - by Chris W
    This is a re-hash of a question I asked a while back - after a consultant has come in firing ideas in to other teams in the department the whole issue has been raised again hence I'm looking for more detailed answers. We're intending to set-up a multi-instance SQL Cluster across a number of physical blades which will run a variety of different systems across each SQL instance. In general use there will be one virtual SQL instance running on each VM host. Again, in general operation each VM host will run on a dedicated underlying blade. The set-up should give us lots of flexibility for maintenance of any individual VM or underlying blade with all the SQL instances able to fail over as required. My original plan had been to do the following: Install 2008 R2 on each blade Add Hyper V to each blade Install a 2008 R2 VM to each blade Within the VMs - create a failover cluster and then install SQL Server clustering. The consultant has suggested that we instead do the following: Install 2008 R2 on each blade Add Hyper V to each blade Install a 2008 R2 VM to each blade Create a cluster on the HOST machines which will host all the VMs. Within the VMs - create a failover cluster and then install SQL Server clustering. The big difference is the addition of step 4 whereby we cluster all of the guest VMs as well. The argument is that it improves maintenance further since we have no ties at all between the SQL cluster and physical hardware. We can in theory live migrate the guest VMs around the hosts without affecting the SQL cluster at all so we for routine maintenance physical blades we move the SQL cluster around without interruption and without needing to failover. It sounds like a nice idea but I've not come across anything on the internet where people say they've done this and it works OK. Can I actually do the live migrations of the guests without the SQL Cluster hosted within them getting upset? Does anyone have any experience of this set up, good or bad? Are there some pros and cons that I've not considered? I appreciate that mirroring is also a valuable option to consider - in this case we're favouring clustering since it will do the whole of each instance and we have a good number of databases. Some DBs are for lumbering 3rd party systems that may not even work kindly with mirroring (and my understanding of clustering is that fail overs are completely transparent to the clients). Thanks.


  • Computer freezes +/- 30 seconds, suspicion on SSD

    - by Robert vE
    My computer freezes for about 30 seconds, this happens occasionally. When it happens I can still move the mouse, sometimes even alternate between tabs in google chrome. If I try to open windows explorer nothing happens. Also chrome rapports "waiting for cache". It also happens in starcraft II, during which the sounds loops. I have made a Trace as this topic describes: How do I troubleshoot a Windows 7 freeze or slowness? Trace: https://docs.google.com/open?id=0B_VkKdh535p6NklhSDdBLURUMnc I have looked at it, but I couldn't figure it out. My system specs are: AMD Athlon X4 651 Asus Ati HD6670 ADATA SSD sp900 Asus f1a55 mainboard 4 GB crucial 1333 ram 500 watt atx ps I'm running Windows 7, fully updated. Any help is much appreciated. Update: I tried something before your reply that may have helped the problem. I don't know for sure if it has, it's too soon to tell. A bit of history first. I had problems installing win7 on my ssd from the start. In IDE mode it worked, but I had the same problems as above. AHCI was a total fail, with it on before install as well as turning it on after install (including tweaking register). I didn't bother installing the AMD chipset/AHCI as it was reported to have no TRIM function and thus make problems worse. Eventually I did install the AMD SMbus driver as the stability issues were driving me crazy. It worked, no more issues, until I installed some extra drivers and software. Audio/LAN/ASUS suite, I don’t see the relation, but somehow it screwed up my system again. As a last effort I posted here on this site. After which the thought occurred to me turn on AHCI again as by now I had all necessary drivers installed anyway. (plus all windows updates downloaded/installed in the meantime) I did and stability didn’t seem great the first few reboots, but eventually everything seemed to work great. I tried to play starcraft II – an almost guaranteed freeze before – and I had no problems. I’m basically crossing my fingers and hope the problem is gone for good. I still think it has something to do with my SSD. In my research into the problem I noticed a lot of these issues with sandforce 2281 firmware, the exact same firmware I have. People reported the same problem that I had, freezes. Additionally they reported that during a freeze the hdd light stayed on, I noticed after I read this that this happened with my computer as well. None of this is conclusive evidence that my SSD is really to fault, but it is suspicious. And why turning on AHCI would fix it I don’t know. Thank you Tom for taking a look, if the problem returns I will certainly do what you advised.


  • Moving from VPS to Cloud

    - by GRIGORE-TURBODISEL
    ...and I have a few questions. I'm basically working on a MySQL+PHP based webapp. Since I don't have on-demand scaling with VPS, I'm planning to move from VPS to Cloud when I hit the 1000 subscribers barrier. I'm looking at Windows Azure but I'm ok with other suggestions. So here are my questions: Will it really cost me a kidney? Every subscriber needs to download around 4-5MB of static resources each day. Bandwidth is free on the VPS but here I see costs can easily get to $800.00/mo; this makes me very insecure about the whole thing, I mean VPS is just $2,000/yr. Do I need another VM or is PHP included in the Web Sites? I have basic sysadmin skills, I think I can handle setting up a PHP install, but will I have to do this? If yes, what other service do I need to setup manually? What about Memcached, MySQL, etc? What security protections does it include? For example I have some basic protection included, like directory traversals and executable files upload; I also have CloudFlare on my other websites for DDoS protection; will I need to do the same thing here too, can it even be installed, can I edit my DNS records, etc? How are e-mails, subdomains, add-on domains, parked domains, etc. handled? I haven't seen any references to e-mail boxes. On the VPS I simply add them from cPanel ([email protected] / whatever.mysite.com / ...); do I have a similar management interface here? Do I get SSH access? Or at least FTP, remote MySQL access and maybe some incremental back-ups or something? Can I see my quotas and advanced traffic info? I must mention that I really like the idea of the whole "cloud" concept, the added reliability and everything but I really need maybe a parallel to regular hosting or something so I know what to expect.


  • Fedora 11 System - Failed Hard Drive Removed, and Boot gets GRUB Hard Disk Error

    - by Mindful
    Greetings, I have a machine with a 120GB ATA drive that has what I thought to be non-essential data on it. I also have a 320GB SATA hard drive with the OS/applications/files (good data I want to keep). My 120GB ATA drive is failing, I believe, as my computer kept slowing to a halt. However, when I remove the drive in the BIOS my computer will not start; it says "GRUB Hard Disk Error". I know that my Fedora system has an LVM setup. I am looking to just remove the 120GB drive from "the mix" and just have one hard drive. How do I recover? Thank you. I have access to a Linux Live CD right now and can make any changes. However, it won't boot into my OS - it fails. UPDATE: here's my grub.conf:

      # grub.conf generated by anaconda
      #
      # Note that you do not have to rerun grub after making changes to this file
      # NOTICE:  You have a /boot partition.  This means that
      #          all kernel and initrd paths are relative to /boot/, eg.
      #          root (hd1,0)
      #          kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
      #          initrd /initrd-version.img
      #boot=/dev/sda1
      default=0
      timeout=5
      splashimage=(hd1,0)/grub/splash.xpm.gz
      hiddenmenu
      title Fedora (2.6.30.10-105.2.23.fc11.i686.PAE)
              root (hd1,0)
              kernel /vmlinuz-2.6.30.10-105.2.23.fc11.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet
              initrd /initrd-2.6.30.10-105.2.23.fc11.i686.PAE.img
      title Fedora (2.6.30.9-102.fc11.i686.PAE)
              root (hd1,0)
              kernel /vmlinuz-2.6.30.9-102.fc11.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet
              initrd /initrd-2.6.30.9-102.fc11.i686.PAE.img
      title Fedora (2.6.27.24-170.2.68.fc10.i686.PAE)
              root (hd1,0)
              kernel /vmlinuz-2.6.27.24-170.2.68.fc10.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet
              initrd /initrd-2.6.27.24-170.2.68.fc10.i686.PAE.img
      title Fedora (2.6.27.24-170.2.68.fc10.i686)
              root (hd1,0)
              kernel /vmlinuz-2.6.27.24-170.2.68.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
              initrd /initrd-2.6.27.24-170.2.68.fc10.i686.img
      title Fedora (2.6.27.21-170.2.56.fc10.i686)
              root (hd1,0)
              kernel /vmlinuz-2.6.27.21-170.2.56.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
              initrd /initrd-2.6.27.21-170.2.56.fc10.i686.img
      title Fedora (2.6.27.19-170.2.35.fc10.i686)
              root (hd1,0)
              kernel /vmlinuz-2.6.27.19-170.2.35.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
              initrd /initrd-2.6.27.19-170.2.35.fc10.i686.img
      title Upgrade to Fedora 10 (Cambridge)
              kernel /upgrade/vmlinuz preupgrade repo=hd::/var/cache/yum/preupgrade stage2=http://chi-10g-1-mirror.fastsoft.net/pub/linux/fedora/linux/releases/10/Fedora/i386/os/images/install.img ks=hd:UUID=f11769ba-29bc-46de-8c40-a949720a438e:/upgrade/ks.cfg
              initrd /upgrade/initrd.img
      title Win
              rootnoverify (hd0,0)
              chainloader +1
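    Since grub.conf (and presumably the MBR boot code) still refer to devices as they were numbered with the 120GB drive installed, one commonly tried repair from the Live CD is to let GRUB legacy find /boot on the remaining disk and reinstall itself there. The device names below are only guesses; the find command reports the right one:

      sudo grub                      # GRUB legacy shell from the Live CD
      grub> find /grub/stage1        # prints where /boot now lives, e.g. (hd0,0)
      grub> root (hd0,0)             # use whatever find reported
      grub> setup (hd0)              # reinstall the boot code in that disk's MBR
      grub> quit

    After that, the root (hd1,0) and splashimage (hd1,0) lines in grub.conf would need to be changed to the same (hdX,0) that find reported.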


  • How to disable or tune filesystem cache sharing for OpenVZ?

    - by gertvdijk
    For OpenVZ, an example of container-based virtualization, it seems that the host and all guests share the filesystem cache. This sounds paradoxical when talking about virtualization, but it is actually a feature of OpenVZ, and it makes sense too: because only one kernel is running, it's possible to benefit from sharing the same pages of filesystem cache in memory. And while it sounds beneficial, I think the setup here actually suffers in performance from it. Here's why I think so: my machines aren't actually sharing any files on disk, so I can't benefit from this feature in OpenVZ. Several OpenVZ machines are running MySQL with MyISAM tables. MyISAM relies on the system's filesystem cache for caching of data files, unlike InnoDB's buffer pool. Also, some virtual machines are known to do heavy and large I/O operations on the same filesystem in the host. For example, when running cat *.MYD > /dev/null on some large database in one machine, I saw the filesystem cache shrinking in another, monitored by htop. This essentially flushes all the useful filesystem cache in the guests (FIFO) and so it flushes the MySQL caches in the guests. Now users are complaining that MySQL is very slow, and it is: some simple SELECT queries take several seconds at times when disk I/O is heavily used by other machines. So, simply put: is there a way to avoid the filesystem cache being wiped out by other virtual machines in container-based virtualization? Some thoughts:

    - Choosing the algorithm for flushing the filesystem cache in the kernel. (possible? how?)
    - Reserving a certain amount of pages for a single VM. (there seems to be no option for filesystem-cache pages, from reading man vzctl)
    - Will running MySQL on another filesystem get me anywhere?

    If not, I think my alternatives are:

    - Use KVM for the VMs running MySQL with MyISAM. KVM actually assigns memory to the VM and does not allow swapping out caches unless a balloon driver is used.
    - Move to InnoDB and tune the buffer pools, dirty pages, etc. This is now considered 'nice to have' for the long term, as not everyone responsible for administration of the system understands InnoDB.
    - More suggestions welcome.

    System software: Proxmox (now 1.9, could be upgraded to 2.x). One big LV assigned for the VMs.
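    A small way to make that observation repeatable, rather than eyeballing htop, is to watch the host's page cache while one container streams a large table through it; the paths below are illustrative:

      # on the hardware node: sample the page cache every couple of seconds
      while true; do grep -E '^(Cached|MemFree)' /proc/meminfo; sleep 2; done

      # inside one container: stream a large MyISAM data file through the cache
      cat /var/lib/mysql/somedb/*.MYD > /dev/null

    If Cached on the host collapses while the other containers' MySQL instances slow down, that at least confirms the shared-cache theory before any architectural change.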


  • Recommendations or advice for shared computer control

    - by Telemachus
    Basic scenario: we are a school (overwhelmingly Mac, some Windows machines via BootCamp), and we are considering using DeepFreeze to guard the state of our shared machines. We have roughly 250 machines that are either shared laptops (which move around quite a bit) or common desktops in public spaces. Obviously, we spend a lot of time maintaining the machines and trying to reverse the inevitable drift as people make changes to the computers. We would like to control the integrity of the build we initially put onto the machines without handcuffing users and especially without using Mac's Parental Control software. (We've had nothing but bad experiences with it.) We've been testing DeepFreeze, and so far it's very impressive. But I'm curious to hear if people who have used DeepFreeze or any similar software have any advice or tips. To get things started, I will post my own pros and cons. Pros: The state of the machine is frozen in our chosen state. All changes made to the machine after that disappear upon restart. (This frozen state really appears to cover everything. I have yet to do something to a test machine that isn't instantly healed.) Tons of trivial but time-consuming maintenance is gone in an instant. Also, lots of not-so-trivial breakage should be avoided. There are good options, however, that allow you to create storage spaces either globally or per user. (Otherwise, stored files disappear upon reboot. For some machines, this is a good option itself. Simply warn people: save externally or else; this machine is a kiosk, not your storage space.) Cons: Anytime we actually need to make a change (upgrade basic software, add a printer or an airport permanently, add new software), the process is a bit more complex. Reboot into a special mode (thaw state), make changes, reboot back into frozen mode. If (when?) we forget this, we will end up making changes that disappear after the next reboot. Users will forget to save files correctly (in the right place or externally), and we will have loud, unpleasant conversations explaining that we can't recover the document they worked on all afternoon yesterday. The machine rebooted. The file is gone. These are my initial thoughts, but I would love to hear from other people who have experience with DeepFreeze or any similar software. What should we be careful about? Do the pros outweigh the cons? What gains or problems am I not seeing? Thanks.


  • Problem with several USB devices on Windows XP. How to diagnose USB device?

    - by Lukasz Baran
    Hello all, recently I've bought some electronic devices (e.g. a Sony ebook reader PRS-600 and an HP iPAQ PDA) which are connected to the PC using USB ports. These devices have their own batteries and they are recharged over USB. However, I am experiencing a problem with the ebook reader, and a similar thing happens with my PDA (but in that case it's rarer). Sometimes I am unable to make it start recharging when it's connected to a USB port. The LED on my reader lights up and the icon in the system tray appears; they both indicate that the device has been connected. However, the device should start recharging and it should appear on Windows XP as an 'explorable' device (just like a pendrive, it offers some HDDs). Neither of these happens :( This problem appears on two PCs - a desktop and a laptop. Both have Windows XP SP3 installed. What's interesting and surprising to me is the fact that these devices sometimes work without any problems: I just plug them in and they work properly. Besides that, there are some computers (I have tested a variety of possible conditions) on which the problem does not appear! I am confused and, on the other hand, determined to find out the real cause of such strange behavior. Maybe I'm wrong, but I always assumed that I'm using deterministic machines, and the described situation makes me feel like I was dreaming :) And that's why I'm asking you here. Are there any tools that could help me diagnose USB ports? Or maybe anyone knows the solution? I have a lot of experience with low-level programming (mostly from the times of MS-DOS applications), but I'm not an expert when it comes to WinXP technology. If this were a problem with a network connection, I would know what to do - I know how to track network traffic - but with USB ports I feel powerless. I have a feeling that the problem may lie in the way WinXP handles USB devices. Personally, I hate the auto-discovery and P'n'P features available in Windows. I know that some of you may suggest moving to Linux or another *nix system. Unfortunately, I have to use WinXP in my professional life, so that's rather impossible. Besides that, I don't consider XP a total disaster. Any help from you will be appreciated!


  • Frequent and weird wifi disconnections

    - by Sidou
    How would you explain, troubleshoot (and solve) the following problem? Wifi ADSL modem router D-link 2640R installed in living room at about 1.8m height. Working fine, synchronising and getting/serving stable internet connection. First situation: -Laptop 01 in other end of the house, let's say in room01 southern to the living room, distant by about 15m. Getting stable signal of good to very good quality. No disconnection. -Laptop 02 in room02 opposite to room01 (5m West) which makes it almost at the same distance and direction from the router located 15m North. Getting stable signal of good to very good quality. No disconnection. Second situation: -Laptop 01 moved to room03 Northern to the living room (actually just 3m behind the wall where the router lies). Getting stable signal of excellent quality. No disconnection. -Laptop 02 still in room02 but now experiences frequent disconnections (actually almost impossible to get the Internet even though the signal level is still very good. Either no Internet with the wifi icon appearing connected to access point or no connection established at all which happens every 2 minutes and that means virtually no Internet at all as I can just get a timeframe of 1 minute or so to load any website or even get to the router's web based control panel. If Laptop 01 is completely shut down or its wifi adapters shut down or even still working but its wifi MAC address forbidden, then Laptop 02 has no problem at all. If Laptop 02 is moved to a nearer location to the router, in the living room for instance, then no connection problem occurs even if Laptop 01 is also connected. And also if we move back Laptop 01 to its original location (room 01), then no problem as well. I'm completely lost and don't know how to address this issue. I tried to change the Wifi channel and even tried the auto channel scan but that didn't solve it. I know that the problem is probably coming from Laptop 01 being in its new location or some sort of interference as the problem occurs only under the described condition but I have no idea how to solve it! I also scanned the neighborhood for wifi jam using InSSIDer, there are few other access points but they don't seem to affect the situation. Any ideas about the steps to follow or tools to use ?


  • How can I remove old log entries from a log file and archive them somewhere else in Linux?

    - by Mike B
    CentOS 4.x. I apologize in advance if this is not the appropriate place to ask this question; it pertains to a Linux server / IT admin task. I've got a log file on an old CentOS 4.x server and I want to remove log entries older than a certain date and place them in a new file for archive. Here's an example of the log format:

      2012-06-07 22:32:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
      2012-06-07 22:32:03,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
      2012-06-07 22:32:04,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|
      2012-06-07 22:32:10,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
      2012-06-07 22:32:12,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
      2012-06-07 22:32:15,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|
      2012-06-07 22:32:40,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
      2012-06-07 22:32:58,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
      2012-06-07 22:33:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|
      2012-06-07 22:33:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
      2012-06-07 22:33:02,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|

    Essentially, I'm looking for a one-liner that will do the following:

    1. Find any events older than a provided YYYY-MM-DD and remove them from the primary log file.
    2. Take the deleted events from step 1 and put them in a new log file.
    3. (Optional) Compress the new archive log file holding the deleted events.

    I'm aware that there are log rotate tools that do this, but this should just be a one-time task, so I'd prefer not to set that up. Additional notes: if the date part is tricky or too resource-intensive, an alternative would be to just keep the last X number of lines and move the rest. I was originally thinking of something like tail -n 10000 > newfile.txt, but that would mean moving the "good" logs to a new file and then doing a name swap... and then I'd still need to remove the "good" entries from the archive. This particular log file is pretty large (1 GB), so I'd prefer the task to be as resource- and time-efficient as possible. The extra pipes in the log concern me and I'm not sure if I'd need extra protection in the commands to avoid them causing problems.
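    One possible shape for that one-liner, hedged as a sketch: it assumes nothing is appending to the file while it runs, that every line starts with the YYYY-MM-DD stamp (so plain string comparison orders the dates correctly), and that the path and cutoff are illustrative:

      CUTOFF=2012-06-01
      LOG=/path/to/primary.log
      awk -v c="$CUTOFF" '$1 <  c' "$LOG" > "$LOG.archive" &&   # old entries out to the archive
      awk -v c="$CUTOFF" '$1 >= c' "$LOG" > "$LOG.tmp"     &&   # keep only the recent entries
      mv "$LOG.tmp" "$LOG" && gzip "$LOG.archive"               # swap in place, then the optional compression step

    The pipes inside the log lines are harmless here, since awk only ever looks at the first whitespace-separated field.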


  • Looking for personal scheduling software / todo list with rather particular requirements

    - by Cthulhu
    I've been scouring the web for a couple of (my boss') hours, looking for a piece of software that can organize my tasks in two ways. First, I have a list of bullet points / todo items I can do at any given time. Think of stuff like solve issue X, ask X about Y, write documentation about Z, etcetera. Second, I have a number of running projects I'd like to organize better, as in schedule for a certain part of a day of the week. Ideally (I think), my day would be organized as 50% spent on projects and 50% on the other small things. Now, I don't like most calendar applications (such as Outlook & friends), their UI is too 'official', not really easy to move stuff around (in my experience). I don't like most todo lists either, too static and things. I like new, fast and hip software. I've looked at GTD versions of Tiddlywiki, and I like mGSD for one particular feature. You can make lists of tasks and basically give them one of three statusses - Now (nothing required, you can do it right away), Waiting (you need someone or something before you can work on this), or the most gratifying of all, Done. I like that feature because it's a simple todo list, but indicates more accurately the things you can do right now and the things you depend on someone else for to do. Anyways, that's just a small aspect of that program - most of the other things in there I can't find a particularly good use for. If there's something like that (maybe something that works even snappier, cleaner UI), combined with an easy to use bit of scheduling software (optionally separated into two applications, but preferrably not), I think I'd like that. (Besides something like that, I also use several instances of Trac to monitor tasks and bugs and things for the various clients and projects I have to serve, and TaskCoach to monitor the amount of time I spend on each task / each client. An easy / low-maintenance time tracking software would be neat too) Of course, the software has to be free to use. I don't like shareware, trials, limited software and the like. I could develop my own too, but I'm lazy like that and there's a dozen other projects I'd like to do in my free time (neither of which I actually do). Edit: I like David Seah's printable CEO stuff, if something like that (with some video game / instant achievement / gratification) exists in software, it'd be awesome.


  • Lots of dropped packages when tcpdumping on busy interface

    - by Frands Hansen
    My challenge: I need to do tcpdumping of a lot of data - actually from 2 interfaces left in promiscuous mode that are able to see a lot of traffic. To sum it up:

    - Log all traffic in promiscuous mode from 2 interfaces
    - Those interfaces are not assigned an IP address
    - pcap files must be rotated per ~1G
    - When 10 TB of files are stored, start truncating the oldest

    What I currently do: right now I use tcpdump like this:

      tcpdump -n -C 1000 -z /data/compress.sh -i any -w /data/livedump/capture.pcap $FILTER

    The $FILTER contains src/dst filters so that I can use -i any. The reason for this is that I have two interfaces and I would like to run the dump in a single thread rather than two. compress.sh takes care of assigning tar to another CPU core, compressing the data, giving it a reasonable filename and moving it to an archive location. I cannot specify two interfaces, thus I have chosen to use filters and dump from any interface. Right now I do not do any housekeeping, but I plan on monitoring disk space, and when I have 100G left I will start wiping the oldest files - this should be fine. And now, my problem: I see dropped packets. This is from a dump that has been running for a few hours and collected roughly 250 gigs of pcap files:

      430083369 packets captured
      430115470 packets received by filter
      32057 packets dropped by kernel   <-- This is my concern

    How can I avoid so many packets being dropped? These things I already tried or looked at: I changed the values of /proc/sys/net/core/rmem_max and /proc/sys/net/core/rmem_default, which did indeed help - actually it took care of just around half of the dropped packets. I have also looked at gulp - the problem with gulp is that it does not support multiple interfaces in one process and it gets angry if the interface does not have an IP address. Unfortunately, that is a deal breaker in my case. The next problem is that when the traffic flows through a pipe, I cannot get the automatic rotation going. Getting one huge 10 TB file is not very efficient and I don't have a machine with 10TB+ of RAM that I can run Wireshark on, so that's out. Do you have any suggestions? Maybe even a better way of doing my traffic dump altogether.
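    Two knobs that are often reached for before anything more exotic, on reasonably recent tcpdump/libpcap builds: the in-kernel capture buffer (-B, in KiB) and, if full payloads are not required, a smaller snap length (-s). This is just a variation on the command above, not a guaranteed fix:

      tcpdump -n -B 65536 -C 1000 -z /data/compress.sh -i any -w /data/livedump/capture.pcap $FILTER
      # add e.g. -s 256 to capture only headers if the analysis does not need full payloads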


  • 20GB+ worth of emails in my /home - what is a better solution for that?

    - by Skinkie
    My email storage requirements are outgrowing anything reasonable with respect to local mail storage. As we speak, 99% of my home partition is filled with personal mail in Thunderbird's mail dirs. Needless to say, this is just painful and badly searchable, and while history has proven to me that backups work, Thunderbird is capable of losing a lot of mail very easily. Currently I have a remote IMAPS server (Dovecot) running for my daily mail, accessible from anywhere, which from my own practice works efficiently up to about 1000 emails. Beyond that, some archive directories have to be used to move mail around. I have been looking into DBMail, but I wonder if I make my case worse or better with such a solution. None of the supported databases employ string deduplication or string compression out of the box, so is this going to help me with 20GB+ of mail? What about falling back to a plain old IMAP server? A filesystem like ZFS would support stuff like gzip compression transparently, which could help. Could someone share their thoughts? The 20GB mostly consists of mailing lists and normal mail, not things like attachments. To add some clarifications: as we speak, my mail is not server-side indexed at all - only my new mail arrives at a remote IMAP server. It is all local storage from former POP3 accounts, locally mirrored Gmail and IMAP accounts. In my perspective it is not Thunderbird that sucks, it's its file format that sucks. Regarding the 1000 mails: on the road I am using Alpine and MobileMail, quite happy with both of them, but some management is required to actually manage the mail. Sieve helps a lot with that, but browsing through 10,000 e-mails is not fun, especially not on a mobile client. I am quite happy with Dovecot and have never had any issues with it; I just wonder if this is the way to go, or if there are other, better solutions. What my question comes down to: what is the best-practice solution that allows 20GB+ of mail and is remotely accessible on demand, easy to back up and archive-worthy? It doesn't need to be available 24x7. The final approach I took was installing a local IMAP server (Dovecot), configuring it as my archive, using the following guide: http://en.gentoo-wiki.com/wiki/Dovecot/InstallThunderbird


  • Rails /tmp/cache/assets permissions issue using Debian virtual machine hosted on OS X Lion

    - by Jim
    I am running Parallels Desktop 7 on OS X Lion. I have a VM with Debian installed, and inside that VM I setup a Rails development environment. I am using Parallels Tools to share out my OS X home directory to the VM - the goal here is to run the Rails server on the VM, but host the files on OS X (so they are automatically backed up, and so I can use tools like Textmate to develop with). Everything seems to work with the shared directory - my Debian user can read, write, and execute files. However, when I cloned a recent Rails project from Git, I got an error message when it tried to compile the CSS assets. My symptoms are exactly the same as in the question: http://stackoverflow.com/questions/7556774/rails-sprocket-error-compiling-css-assest-chown-issue I believe this is permissions-based, but it is really weird. My entire Rails project directory has permissions set to 777 and my Debian user owns it. If I navigate into /tmp/cache/assets, those permissions are the same. However, the three-character directories Rails is creating (DCE, DA1, D05, etc...) are being created without write permissions! If I refresh the Rails page a few times, about 4 or 5 (with Rails creating new three-character directories every time), eventually it will create one of the directories with the proper 777 permissions and everything will work! This will persist until I make a change to the CSS files and it has to recompile. Does anyone have any idea what might be going on here? I can't fathom why it is creating temp directories with incorrect permissions, or why after a few refreshes the good permissions kick in and it works... It definitely seems to be an issue with the share, since if I move the project into a different directory on the VM, it seems to work fine. On the OS X side, I've given the shared folder 777 permissions as well, but no dice...any ideas? Update I've found that the number of times I need to refresh before it works is not random - it has to do with how many assets are being compiled. For example, if I edit one of my CSS files, and there are four CSS files in the app/assets/stylesheets directory, I have to refresh four times before the app will finally work without the operation not permitted error...
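    One workaround that is sometimes used with VM shared folders is to keep Rails' tmp/cache on the guest's local disk and symlink it into the shared project, so asset compilation never touches the Parallels mount at all. Purely a sketch, assuming nothing valuable already lives in tmp/cache and with an illustrative local path:

      # run inside the Debian VM, from the project root on the shared folder
      mkdir -p ~/rails-tmp/myapp/cache
      rm -rf tmp/cache
      ln -s ~/rails-tmp/myapp/cache tmp/cache

    If the permission errors stop once tmp/cache is local, that at least narrows the problem down to how the Parallels share handles the chown/chmod calls Sprockets makes.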


  • Installing EclipseFP on Mac OS X

    - by Dom Kennedy
    I am trying to install EclipseFP. I'm running OS X Mavericks. I've tried following both the official installation instructions and the advice in this answer on SU, but I'm still having the same problem. I can get the plugin itself installed painlessly using Help -> Install New Software..., Bbut when I restart and switch to the Haskell perspective, things start to go wrong. The installation instructions tells me that I should receive a prompt to install BuildWrapper and Scion Browser. I do not receive this prompt. Furthermore, if I create a new Haskell project, my code has no syntax highlighting, and the Hoogle search feature does not appear to do anything. It's clear that the plugin is not set up correctly yet. I've tried running cabal update in Terminal, but this does not change anything. After several attempts going round in circles with this on Eclipse Juno, I uninstalled Eclispe and the Haskell Platform and performed a clean install of Eclipse Luna and the latest Haskell Platform. However, the problems are persisting. I've tried going into Preferences to see if I could sort any of this out manually. I should initially point out that my GHC installation seems to be correctly references under Preferences -> Haskell Implementations Under Haskell -> Helper executables, there are areas for configuring the options of both BuildWrapper and Scion Browser. At present, both are blank. I tried clicking the Install from Hackage... button beside each of them with no success; I receive an error message saying Expected executable <workspace>/.metadata/.plugins/net.sf.eclipsefp.haskell.ui/sandbox/.cabal-sandbox/bin/buildwrapper not found!` (replace buildwrapper for scion-browser and the message is the same) The Eclipse console displays the following exception after doing the above with BuildWrapper: src/Language/Haskell/BuildWrapper/GHCStorage.hs:313:32: Not in scope: data constructor ‘MatchGroup’ cabal.real: Error: some packages failed to install: buildwrapper-0.7.4 failed during the building phase. The exception was: ExitFailure 1 and after doing it for Scion-Browser: zip-archive-0.2.3.4 (reinstall) changes: text-1.1.0.0 -> 0.11.3.1 pandoc-1.12.3.3 (latest: 1.13) -http-conduit (new version) Graphalyze-0.14.1.0 (reinstall) changes: pandoc-1.12.4.2 -> 1.12.3.3, text-1.1.0.0 -> 0.11.3.1 cabal.real: The following packages are likely to be broken by the reinstalls: pandoc-1.12.4.2 unordered-containers-0.2.4.0 aeson-0.7.0.4 scientific-0.2.0.2 case-insensitive-1.1.0.3 HTTP-4000.2.10 Use --force-reinstalls if you want to install anyway. After receiving similar results as the above on previous attempts, I've tried using force-reinstalls and ended up at more dead ends. I am at a loss as to what is wrong and how to solve this. I should point out that my GHC installation appears to be correctly configured under Preferences -> Haskell -> Haskell Implementations. Apologies if any of this information is irrelevant, I'm just not really sure what is important and what isn't at this point. Any help anyone could provide me with would be greatly appreciated.


  • New-ManagedContentSettings - not working properly under Exchange 2010

    - by mfinni
    I have a client that is divesting a business unit into a new AD forest, Exchange org, etc. We're using Quest tools to migrate users and mailboxes. However, I have to build the new infrastructure to match the old one. In the old one, we're using Managed Folder Mailbox Policies to limit (or allow) retention. They started with Exchange 2007 and never upgraded to Retention Policies; oh well. So, in the old environment, when you use a 2007 server to define a new Managed Content Setting, you can pick "Email" from the dropdown for MessageClass. This is a display name; the actual MessageClass values are thus:

      MessageClass : IPM.Note;IPM.Note.AS/400 Move Notification Form v1.0;IPM.Note.Delayed;IPM.Note.Exchange.ActiveSync.Report;IPM.Note.JournalReport.Msg;IPM.Note.JournalReport.Tnef;IPM.Note.Microsoft.Missed.Voice;IPM.Note.Rules.OofTemplate.Microsoft;IPM.Note.Rules.ReplyTemplate.Microsoft;IPM.Note.Secure.Sign;IPM.Note.SMIME;IPM.Note.SMIME.MultipartSigned;IPM.Note.StorageQuotaWarning;IPM.Note.StorageQuotaWarning.Warning;IPM.Notification.Meeting.Forward;IPM.Outlook.Recall;IPM.Recall.Report.Success;IPM.Schedule.Meeting.*;REPORT.IPM.Note.NDR

    If I take that and try to mangle it into a new cmdlet for Ex2010 in my new environment, here's what I get:

      New-ManagedContentSettings -Name "Delete Messages older then 90 days" -FolderName "Entire Mailbox" -RetentionEnabled $True -AgeLimitForRetention 90 -TriggerForRetention WhenDelivered -RetentionAction DeleteAndAllowRecovery -MessageClass "IPM.Note","IPM.Note.AS/400MoveNotificationFormv1.0","IPM.Note.Delayed","IPM.Note.Exchange.ActiveSync.Report","IPM.Note.JournalReport.Msg","IPM.Note.JournalReport.Tnef","IPM.Note.Microsoft.Missed.Voice","IPM.Note.Rules.OofTemplate.Microsoft","IPM.Note.Rules.ReplyTemplate.Microsoft","IPM.Note.Secure.Sign","IPM.Note.SMIME","IPM.Note.SMIME.MultipartSigned","IPM.Note.StorageQuotaWarning","IPM.Note.StorageQuotaWarning.Warning","IPM.Notification.Meeting.Forward","IPM.Outlook.Recall","IPM.Recall.Report.Success","IPM.Schedule.Meeting.*","REPORT.IPM.Note.NDR" -whatif

      Invoke-Command : Cannot bind parameter 'MessageClass' to the target. Exception setting "MessageClass": "The length of the property is too long. The maximum length is 255 and the length of the value provided is 518."
      At C:\Users\MFinnigan.sa\AppData\Roaming\Microsoft\Exchange\RemotePowerShell\pfexcas02.fve.ad.5ssl.com\pfexcas02.fve.ad.5ssl.com.psm1:28204 char:29
      +         $scriptCmd = { & <<<< $script:InvokeCommand `
          + CategoryInfo          : WriteError: (:) [New-ManagedContentSettings], ParameterBindingException
          + FullyQualifiedErrorId : ParameterBindingFailed,Microsoft.Exchange.Management.SystemConfigurationTasks.NewManagedContentSettings

    So, the config object can store all that mess, but I can't fit it in through the cmdlet to create the object. Lovely. Any ideas?


  • APC PHP cache size does not exceed 32MB, even though settings allow for more

    - by hardy101
    I am setting up APC (v 3.1.9) on a high-traffic WordPress installation on CentOS 6.0 64 bit. I have figured out many of the quirks with APC, but something is still not quite right. No matter what settings I change, APC never actually caches more than 32MB. I'm trying to bump it up to 256 MB. 32MB is a default amount for apc.shm_size, so I am wondering if it's stuck there somehow. I have run the following

      echo '2147483648' > /proc/sys/kernel/shmmax

    to increase my system's shared memory to 2G (half of my 4G box). Then I ran ipcs -lm, which returns:

      ------ Shared Memory Limits --------
      max number of segments = 4096
      max seg size (kbytes) = 2097152
      max total shared memory (kbytes) = 8388608
      min seg size (bytes) = 1

    I also made a change in /etc/sysctl.conf then ran sysctl -p to make the settings stick on the server. Rebooted, too, for good measure. In my APC settings, I have mmap enabled (which happens by default in recent versions of APC). php.ini looks like:

      apc.stat=0
      apc.shm_size="256M"
      apc.max_file_size="10M"
      apc.mmap_file_mask="/tmp/apc.XXXXXX"
      apc.ttl="7200"

    I am aware that mmap mode will ignore references to apc.shm_segments, so I have left it at the default of 1. phpinfo() indicates the following about APC:

      Version                 3.1.9
      APC Debugging           Disabled
      MMAP Support            Enabled
      MMAP File Mask          /tmp/apc.bPS7rB
      Locking type            pthread mutex Locks
      Serialization Support   php
      Revision                $Revision: 308812 $
      Build Date              Oct 11 2011 22:55:02

      Directive                     Local Value
      apc.cache_by_default          On
      apc.canonicalize              O
      apc.coredump_unmap            Off
      apc.enable_cli                Off
      apc.enabled                   On On
      apc.file_md5                  Off
      apc.file_update_protection    2
      apc.filters                   no value
      apc.gc_ttl                    3600
      apc.include_once_override     Off
      apc.lazy_classes              Off
      apc.lazy_functions            Off
      apc.max_file_size             10M
      apc.mmap_file_mask            /tmp/apc.bPS7rB
      apc.num_files_hint            1000
      apc.preload_path              no value
      apc.report_autofilter         Off
      apc.rfc1867                   Off
      apc.rfc1867_freq              0
      apc.rfc1867_name              APC_UPLOAD_PROGRESS
      apc.rfc1867_prefix            upload_
      apc.rfc1867_ttl               3600
      apc.serializer                default
      apc.shm_segments              1
      apc.shm_size                  256M
      apc.slam_defense              On
      apc.stat                      Off
      apc.stat_ctime                Off
      apc.ttl                       7200
      apc.use_request_time          On
      apc.user_entries_hint         4096
      apc.user_ttl                  0
      apc.write_lock                On

    apc.php reveals the following graph, no matter how long the server runs (cache size fluctuates and hovers at just under 32MB; see image http://i.stack.imgur.com/2bwMa.png). You can see that the cache is trying to allocate 256MB, but the brown piece of the pie keeps getting recycled at 32MB. This is confirmed as refreshing the apc.php page shows cached file counts that move up and down (implying that the cache is not holding onto all of its files). Does anyone have an idea of how to get APC to use more than 32 MB for its cache size? **Note that the identical behavior occurs for eaccelerator, xcache, and APC. I read here: http://www.litespeedtech.com/support/forum/archive/index.php/t-5072.html that suEXEC could cause this problem.
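    The suEXEC note at the end is worth checking first: if PHP runs as CGI/FastCGI rather than mod_php, each worker process holds its own APC cache, so apc.php may be showing a short-lived per-process cache rather than one shared 256MB segment (and since mmap mode is in use, the SysV shmmax/ipcs limits should not even be in play). A purely diagnostic sketch for a stock CentOS/Apache box - command names are the usual ones and may differ:

      httpd -M 2>/dev/null | grep -i php                   # is mod_php loaded into Apache at all?
      ps aux | grep -E 'php-cgi|php-fpm' | grep -v grep    # or are there per-user CGI/FPM workers instead?

    If PHP turns out to run as CGI under suEXEC, the 32MB ceiling would apply per worker process rather than to one long-lived shared cache.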


  • Sharing files between multiple sites using only desktop software

    - by perlyking
    Our organisation has three sites; a head office, where the master copies of company files are stored, plus two branch offices using only workstations and a NAS or two. Currently we're talking about <10GB. At the main office, we have no admin access to the file server, as this is entirely controlled by the larger institution where we are located. For the same reason, we have no VPN remote access to this network. Instead, we simply have access to a network share using over a Novell LAN. Question: how can we share files between offices in way that minimises latency, i.e. that gives us a mirror of the main network share at each site? (There is little likelihood of concurrent editing, and we can live with the odd file conflict now and again). Up to now branch office staff have had to use GotoMyPC-type solutions to remotely access files held at the main office. Or email. I was hoping to use Google Drive on a dedicated workstation at each office to sync the contents of the network share (head office) or NAS (branch offices) via the cloud, but at my last attempt (29 Jun '12), the Google Drive installer would not allow me to designate the remote network share as the "target" folder. (I chose Google Drive over Drobbox et al. as we already use GMail for corporate mail) The next idea was to use a designated workstation at head office to mirror the network share to a local drive, then use Google Drive to push that to the cloud. This seems a step too far. Nor do I have any good ideas about how to achieve this network/local mirroring, as we can't, for example, install the rsync daemon on the server. I do not want to use Google Drive locally on each workstation as this will inconvenience users, and more importantly, move files off the backed-up, well-maintained (UPS, RAID etc) network share at head office. Our budget is only in the £100's. Should we perhaps just ditch the head office server and use something like JungleDisk? At least this presents the user with what appears to be a mapped drive.
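    If each branch office has a NAS (or any always-on box) that can run rsync and can mount the head-office share, a scheduled one-way mirror is about the cheapest moving part; this is only a sketch under those assumptions, with illustrative paths, and it does nothing for concurrent edits or conflict handling:

      # on a branch-office machine that has the head-office share mounted at /mnt/headoffice
      rsync -av --delete /mnt/headoffice/TeamFiles/ /volume1/TeamFiles/

      # run from cron, e.g. hourly:
      # 0 * * * * rsync -av --delete /mnt/headoffice/TeamFiles/ /volume1/TeamFiles/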


  • How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment

    - by user69508
    (Please forgive if this is posted in an incorrect forum. We didn’t know exactly where to post it.) We have an ASP.NET Web API single page application - a browser-based app running in IIS to serve up HTML5/CSS3/JavaScript, which talks to the ASP.NET Web API endpoint only to access a database and transfer JSON data. Everything is working great in our development environment - that is, we have one Visual Studio solution with an ASP.NET Web API project and two class library projects for data access. While development and testing on development boxes, using IIS Express to a localhost:port to run the site and access the Web API, everything is fine. Now we need to move it to a production environment (and we’re having problems - or just not understanding what needs to be done). The production environment is all internal (nothing will be exposed on the public Internet). There are two domains. One domain, the corporate domain, is where all users login normally. The other domain, the process domain, contains the SQL Server instance that our app and Web API will need to access. The IT staff wants to put a DMZ between the two domains to house the IIS app and shield the users on the corporate domain from having access into the process domain directly. So, I guess what they want is: corp domain (end users) <– firewall (open port 80) <– DMZ (web server running IIS) <– firewall (open port 80 or 1433????) <– process domain (IIS for Web API and SQL Server) We’re developers and don’t really understand all the networking aspects, so we’re wondering how to deploy our browser/Web API application in this scenario. Do we need to break up our application so that all the client code (HTML5/CSS3/JavaScript/images/etc.) is on the IIS server in the DMZ, while the Web API gets installed on the server in the process domain? Or, does the entire app (client code and Web API) stay together on the IIS server in the DMZ, which then somehow accesses the SQL Server instance to get data? From the IIS server and app in the DMZ, would you simply access the Web API on the server in the process domain by going to "http://server/appname/api/getitmes"? In the second firewall between the DMZ and the process domain, would you have to open port 1433 or just port 80 since the Web API is a HTTP endpoint? Or, is there some better way of deployment (i.e., how ASP.NET Web API single page applications written all in HTML5 and JavaScript supposed to be deployed to production environments?)? I’m sure there are other questions, but we’ll start with these. Thanks!!! (Note: the servers are Win2k8 R2, SQL Server 2k8 R2, and IIS 7.5.)


  • Why does Excel now give me already existing name range error on Copy Sheet?

    - by WilliamKF
    I've been working on a Microsoft Excel 2007 spreadsheet for several days. I'm working from a master template like sheet and copying it to a new sheet repeatedly. Up until today, this was happening with no issues. However, in the middle of today this suddenly changed and I do not know why. Now, whenever I try to copy a worksheet I get about ten dialogs, each one with a different name range object (shown below as 'XXXX') and I click yes for each one: A formula or sheet you want to move or copy contains the name 'XXXX', which already exists on the destination worksheet. Do you want to use this version of the name? To use the name as defined in destination sheet, click Yes. To rename the range referred to in the formula or worksheet, click No, and enter a new name in the Name Conflict dialog box. The name range objects refer to cells in the sheet. For example, E6 is called name range PRE on multiple sheets (and has been all along) and some of the formulas refer to PRE instead of $E$6. One of the 'XXXX' above is this PRE. These name ranges should only be resolved within the sheet within which they appear. This was not an issue before despite the same name range existing on multiple sheets before. I want to keep my name ranges. What could have changed in my spreadsheet to cause this change in behavior? I've gone back to prior sheets created this way and now they give the message too when copied. I tried a different computer and a different user and the same behavior is seen everywhere. I can only conclude something in the spreadsheet has changed. What could this be and how can I get back the old behavior whereby I can copy sheets with name ranges and not get any errors? Looking in the Name Manager I see that the name ranges being complained about show twice, once as scope Template and again as scope Workbook. If I delete the scope Template ones the error goes away on copy however, I get a bunch of #REF errors. If I delete the scope Workbook ones, all seems okay and the errors on copy go away too, so perhaps this is the answer, but I'm nervous about what effect this deletion will have and wonder how the Workbook ones came into existence in the first place. Will it be safe to just delete the Workbook name manager scoped entries and how might these have come into existence without my knowing it to begin with?


  • Cloud storage provider lost my data. How to back up next time?

    - by tomcam
    What do you do when cloud storage fails you? First, some background. A popular cloud storage provider (rhymes with Booger Link) damaged a bunch of my data. Getting it back was an uphill battle with all the usual accusations that it was my fault, etc. Finally I got the data back. Yes, I can back this up with evidence. Idiotically, I stayed with them, so I totally get that the rest of this is on me. The problem had been with a shared folder that works with all 12 computers my business and family use with the service. We'll call that folder the Tragic Briefcase. It is a sort of global folder that's publicly visible to all computers on the service. It's our main repository. Today I decided to deal with some residual effects of the Crash of '11. Part of the damage they did was that in just one of my computers (my primary, of course) all the documents in the Tragic Briefcase were duplicated in the Windows My Documents folder. I finally started deleting them. But guess what. Though they appeared to be duplicated in the file system, removing them from My Documents on the primary PC caused them to disappear from the Tragic Briefcase too. They efficiently disappeared from all the other computers' Tragic Briefcases as well. So now, 21 gigs of files are gone, and of course I don't know which ones. I want to avoid this in the future. Apart from using a different storage provider, the bigger picture is this: how do I back up my cloud data? A complete backup every week or so from web to local storage would cause me to exceed my ISP's bandwidth. Do I need to back up each of my 12 PCs locally? I do use Backupify for my primary Google Docs, but I have been storing taxes, confidential documents, Photoshop source, video source files, and so on using the web service. So it's a lot of data, but I need to keep it safe. Backup locally would also mean 2 backup drives or some kind of RAID per PC, right, because you can't trust a single point of failure? Assuming I move to DropBox or something of its ilk, what is the best way to make sure that if the next cloud storage provider messes up I can restore?


  • Can't recover hard drive

    - by BreezyChick89
    My drive got corrupt after a thunderstorm. It used to be 1 partition of 2.5tb but now it shows 2 partitions. It's weird because 300gig free space is about how much it had before corrupting, but it was part of the first partition. I tried:

      $ sudo resize2fs -f /dev/sdb1
      Resizing the filesystem on /dev/sdb1 to 536870911 (4k) blocks.
      resize2fs: Can't read an block bitmap while trying to resize /dev/sdb1
      Please run 'e2fsck -fy /dev/sdb1' to fix the filesystem after the aborted resize operation.

      sudo e2fsck -f /dev/sdb1
      e2fsck 1.42 (29-Nov-2011)
      The filesystem size (according to the superblock) is 610471680 blocks
      The physical size of the device is 536870911 blocks
      Either the superblock or the partition table is likely to be corrupt!
      Abort? n
      ....
      Error reading block 537395215 (Invalid argument) while reading inode and block bitmaps. Ignore error<y>? yes
      Force rewrite<y>? yes
      Error writing block 537395215 (Invalid argument) while reading inode and block bitmaps. Ignore error<y>? yes
      ...

    A lot of these. I can't use e2fsck -y because the first question aborts if I say "y". If I put a weight on the 'y' key it fails because none of the errors were really fixed. I asked this question before and tried using gparted, but gparted fails because the first thing it does is:

      e2fsck -f -y -v /dev/sdb1

    giving the same error. The disk status says healthy. There are no bad blocks. This is very frustrating because I can see the data in testdisk and it looks like it's all there. I already bought another 2.5tb drive and made a clone using dd. The next step, if I can't fix this, is to wipe that drive and just move the data with testdisk, but it seems certain folders will copy infinitely until the drive is full because of symlinks or errors, so it's also a difficult option.

      sudo fdisk -l

      Disk /dev/sdb: 2500.5 GB, 2500495958016 bytes
      255 heads, 63 sectors/track, 304001 cylinders, total 4883781168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x0005da5e

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdb1   *        2048  4294969342  2147483647+  83  Linux

      sudo badblocks -b 4096 -n -o badfile /dev/sdb 610471680 536870911

    badfile is empty. I also tried changing the superblock with "fsck -b" but all of them are the same.
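    Given that the superblock claims 610471680 4k blocks (roughly 2.3 TiB, i.e. the whole disk) while the partition stops at 536870911 blocks (the 2 TiB mark), one sanity check before any further repair attempts - ideally run against the dd clone rather than the original - is to compare what the disk, the partition table and the superblock each think the size is; an msdos/MBR table cannot describe a partition beyond 2 TiB:

      sudo blockdev --getsz /dev/sdb                       # whole-disk size in 512-byte sectors
      sudo parted /dev/sdb unit s print                    # partition table type plus start/end sectors of sdb1
      sudo dumpe2fs -h /dev/sdb1 | grep -i 'block count'   # what the filesystem superblock believes it spans

    If the partition really is clipped at the 2 TiB boundary while the filesystem extends past it, that points at the partition table rather than the filesystem as the thing to repair.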

