Search Results

Search found 14734 results on 590 pages for 'clear cache'.


  • Can't remove route from routing table

    - by anon
    (I am on Windows Server 2003.) I see a couple of unusual entries in my routing table that I would like to remove:

        Network Destination    Netmask          Gateway      Interface    Metric
        XXX.27.44.1            255.255.255.255  127.0.0.1    127.0.0.1    20
        XXX.27.255.255         255.255.255.255  XXX.27.44.1  XXX.27.44.1  20

    All the "XXX"s are the same octet. I would strongly prefer NOT to clear the routing table, since this is a production server. Here is what I've tried:

        route delete XXX.27.44.1
        The route specified was not found.

        route delete XXX.27.44.1 mask 255.255.255.255 127.0.0.1 metric 20
        The route specified was not found.

        route delete XXX.27.255.255
        The route specified was not found.

        route delete XXX.27.255.255 mask 255.255.255.255 XXX.27.44.1 metric 20
        The route specified was not found.

    I also tried adding the routes in the hope that I could then delete them:

        route add XXX.27.44.1 mask 255.255.255.255 127.0.0.1 metric 20
        The route addition failed: The parameter is incorrect.

        route add XXX.27.255.255 mask 255.255.255.255 XXX.27.44.1 metric 20
        The route addition failed: The parameter is incorrect.

    Bonus question: What do these entries do, and how did they get there?
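
    Two hedged notes. First, on the bonus question: a /32 route to the machine's own address via 127.0.0.1, together with the matching subnet broadcast route, is exactly the pair Windows adds automatically whenever an address is bound to an interface, which may also be why route add refuses to recreate them by hand. Second, route.exe accepts an IF parameter, and interface-bound host routes sometimes only match for deletion when that index is given explicitly. A sketch, with the index 0x10003 as a placeholder to be read from route print:

        route print
        route delete XXX.27.44.1 mask 255.255.255.255 127.0.0.1 if 0x10003
        route delete XXX.27.255.255 mask 255.255.255.255 XXX.27.44.1 if 0x10003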

    Read the article

  • Why do I get a DegradedArray event with mdadm

    - by azera
    Hello. Just so we're clear on what's happening: I bought 4 new SATA II drives, with the intent of using them in a RAID 5. All drives are fully recognised by both my BIOS and my Linux box (Gentoo). I created a RAID 5 array and fiddled a bit with it to understand how it works, how to monitor it, etc. At some point this triggered a DegradedArray event, even though the array is brand new. I tried stopping the array and recreating a new array with the same drives, but the new array starts degraded too. Here is what I used to create it:

        mdadm --create -l5 -n4 /dev/md/md0-r5 /dev/sdb /dev/sdd /dev/sde /dev/sdf

    Here is the output from my /proc/mdstat:

        Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md127 : active raid5 sdf[4] sde[2] sdd[1] sdb[0]
              4395415488 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
              [>....................]  recovery = 2.8% (41689732/1465138496) finish=890.3min speed=26645K/sec
        unused devices: <none>

    and from mdadm --detail --scan:

        ARRAY /dev/md/md0-r5 metadata=0.90 spares=1 UUID=453e2833:81f22a74:64188b84:66721085

    As such I have a couple of questions:

    - Does a RAID 5 array always start in degraded mode at first?
    - Why does sdf have the number 4 in brackets instead of 3, why does mdadm see a spare disk, and why is the 4th drive marked with _ instead of U? (Bad configuration?)
    - How can I recreate the array from scratch? Do I have to format each drive on its own before recreating it?

    Thanks for any help; I'm not sure what I should do at the moment.
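
    A detail that explains most of these symptoms: mdadm builds a new RAID 5 by creating it degraded, with the last member acting as a spare while parity is synced onto it, so the [UUU_] state, the spares=1 line, and sdf's [4] slot number are all expected until the recovery shown in /proc/mdstat completes. A minimal sketch for watching it finish, and for wiping old metadata if the array really must be recreated (device names assumed from the question; no per-drive formatting is needed):

        # watch the initial sync; the array should flip to [UUUU] on its own
        watch cat /proc/mdstat
        mdadm --detail /dev/md127

        # only if recreating from scratch: stop the array and clear old superblocks
        mdadm --stop /dev/md127
        mdadm --zero-superblock /dev/sdb /dev/sdd /dev/sde /dev/sdf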

    Read the article

  • Securing bash scripts

    - by minnur
    Hi there. Does anybody know the best way to secure bash scripts? I have a script which creates a database and source-code backup and FTPs it to another server, and the login/password for the destination FTP are in plain text. I need to somehow encrypt them, or hide them in case the website is hacked. Or should I write a C program that creates the bash file, runs it, and then deletes it? Thanks.

    Thanks for the answers, and I am sorry I wasn't clear enough. I would like to clarify my question with the following points. We are storing the data in Rackspace Cloud Files. We can't pull, as Cloud Files doesn't allow you to run a script. We can write the script to run on server A and pull FTP and MySQL data from servers B, C, D, etc., and we want to protect the passwords on A for the situation where A is hacked. Can we compile our script file to hide them? Thanks.
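
    On the narrower question of hiding FTP credentials: compiling a wrapper only obscures them, since strings in the binary can still leak. A more conventional sketch is to keep the secrets out of the script entirely, in a root-only file that the FTP client reads itself; the host name, account, and paths below are assumptions:

        # keep credentials out of the script, in a file only root can read
        printf 'machine backup.example.com login backupuser password s3cret\n' > /root/.netrc
        chmod 600 /root/.netrc

        # the backup script itself then contains no secrets
        curl --netrc --upload-file /backups/site.tar.gz ftp://backup.example.com/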

    Read the article

  • Cannot access drive in Windows 7 after scandisk lockup, but can in safe mode....

    - by Matt Thompson
    I ran scandisk on my external USB drive due to the inability to delete a few files. Windows asked me if I wanted to unmount the drive before the scan, warning me that it would be unusable until the scan was finished, and I said yes. During the scan my machine locked up, and I was forced to reboot it. When it came up I was unable to access the drive, getting the error "L: is not accessible, access is denied". Computer Management sees the drive, and shows the proper amount of disk space filled. I booted into safe mode and can access the drive with no problems, though I noticed that in Explorer all the folders have locks on them. I booted back into Windows, but still could not access the drive, getting the same error as above. However, if I right-click on the drive, select Properties, go to Customize, and in the folder pictures area select Choose File, a window opens up that shows the root of the directory, with all the folders able to be accessed; again, though, each folder icon has a lock on it. I can even copy files from the drive to another. So the files are not gone, and Windows can obviously access the drive no matter what it thinks, so there has to be a problem with the flag Windows put on the drive when it ran the original scan that failed. I was able to run a scan both in safe mode (with no problems) and in Windows; in Windows I received the "cannot access" error the first time I ran scandisk on it, but if I try again it works fine. Any ideas on how to clear the flag that Windows set, so I can access the drive normally again?
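
    Given that safe mode can read everything, the stuck state looks like ownership/ACLs left behind by the interrupted scan rather than data damage. A hedged sketch of one common repair, run from an elevated command prompt; the drive letter is taken from the question, and /reset rewrites permissions recursively, so copy anything important off first:

        takeown /F L:\ /R /D Y
        icacls L:\ /reset /T /C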

    Read the article

  • ssh connection operation timed out using rsync

    - by Mark Molina
    I use rsync to back up my remote server to my local device, but when I combine it with a cron job my SSH connection times out. Just to be clear: the data is stored on a remote server and I want it stored on my local server, so the backup request must be sent from my local server to the remote server. The command for backing up the data works when I just type it into a terminal, like this:

        rsync -chavzP --stats USERNAME@IP_ADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP

    but when I combine it with a cron job like this:

        10 11 * * * rsync -chavzP --stats USERNAME@IP_ADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP

    the SSH connection times out. When the cron job executes, it sends a mail to the root user with output like this:

        From local.xx.xx.xx Tue Jul 2 11:20:17 2013
        X-Original-To: username
        Delivered-To: [email protected]
        From: [email protected] (Cron Daemon)
        To: [email protected]
        Subject: Cron <username@server> rsync -chavzP --stats USERNAME@IP_ADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <PATH=/usr/bin:/bin>
        X-Cron-Env: <LOGNAME=username>
        X-Cron-Env: <USER=username>
        X-Cron-Env: <HOME=/Users/username>
        Date: Tue, 2 Jul 2013 11:20:17 +0200 (CEST)

        ssh: connect to host IP_ADDRESS port XX: Operation timed out
        rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
        rsync error: unexplained error (code 255) at /SourceCache/rsync/rsync-42/rsync/io.c(452) [receiver=2.6.9]

    So the rsync command works when typed into a terminal, but not when used by a cron job. Can anybody explain this?
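
    The mail headers hint at why this class of failure happens: cron runs with the minimal environment shown there (a bare PATH, no ssh-agent, and nothing that exists only in your login session, such as a VPN connection or agent-loaded keys). A hedged sketch of a crontab line with everything spelled out explicitly; the port number and key path are placeholders for whatever makes the interactive command work:

        10 11 * * * /usr/bin/rsync -chavzP --stats -e "/usr/bin/ssh -p 2222 -i /Users/username/.ssh/id_rsa" USERNAME@IP_ADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP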

    Read the article

  • NVIDIA GeForce GT750M drivers for Ubuntu 13.10 on iMac

    - by Eugene B
    I have just got a new iMac 14,3 with a 21.5" display. It has an Intel i7 processor and an NVIDIA GeForce GT 750M graphics card (Device ID: 0x0fe9). I have installed Ubuntu 13.10 on it (dual boot with rEFIt). Everything works fine apart from graphics: it is clear that graphics are not accelerated, and there are no drivers suggested in the "Additional drivers" section. I have tried to:

    - install NVIDIA drivers from the repository;
    - download NVIDIA drivers and install them manually;
    - follow the instructions from three separate guides (the links are in the original question);
    - install Bumblebee;
    - play around with the xorg.conf file;
    - blacklist nouveau.

    As you may guess, nothing worked. The weirdest thing is that when I boot from a LiveCD, it picks up the screen resolution and all the rest correctly, yet when I install the system to my hard drive it clearly does not have the proper drivers. Does anyone have any suggestion what to do?
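
    For the xorg.conf experiments, the minimal form of the relevant section is sketched below; the BusID is an assumption to be read from lspci, and Driver "nvidia" only loads once the proprietary driver is actually installed:

        # find the card's PCI bus ID first
        lspci | grep -i nvidia

        # minimal /etc/X11/xorg.conf Device section (a sketch; BusID assumed)
        Section "Device"
            Identifier "GT750M"
            Driver     "nvidia"
            BusID      "PCI:1:0:0"
        EndSection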

    Read the article

  • Messed up USB stick doesn't show in blkid

    - by Felix
    I was playing around with a USB stick (booting Arch Linux with QEMU off of it and trying to perform an installation on the same stick at the same time; brave, I know, but I was just messing around). Now, after failing to boot and install at the same time, it seems I have sort of messed up my stick. What I think happened is that I used cfdisk to wipe everything on it and create one big partition, but formatting it then failed, so now there's a big partition with no filesystem. Just to make it clear: I'm not worried about my stick; I know I can recover it at any point. What I find intriguing is that after plugging the stick into my computer (running Ubuntu), there's no terminal way I can see to find out which block device (/dev/sdX) it is associated with. The only way I could determine that was with GParted. blkid, however, shows the following:

        /dev/sda1: UUID="12F695CFF695B387" LABEL="System Reserved" TYPE="ntfs"
        /dev/sda2: UUID="A0BAA6EABAA6BC62" TYPE="ntfs"
        /dev/sdb1: UUID="546aec8b-9ad6-4571-b07a-adba63e25820" TYPE="ext4"
        /dev/sdb2: UUID="2a8b82d8-6c6e-4053-a446-bab970d93d7c" TYPE="swap"
        /dev/sdb3: UUID="7cbede7d-c930-4e59-9d1b-01f2d79bd092" TYPE="ext4"

    No trace of /dev/sdc. My question is: if I didn't have a graphical interface (to use GParted), how would I have known which block device is my stick?
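
    The reason blkid stays silent: it reports only devices carrying a recognisable filesystem (or similar) signature, and a freshly wiped partition has none; the kernel still lists the device itself. A few signature-independent ways to find it from a terminal:

        # block devices the kernel knows about, filesystem or not
        cat /proc/partitions

        # or watch the kernel log right after plugging the stick in
        dmesg | tail

        # fdisk sees the partition table even when blkid sees nothing
        sudo fdisk -l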

    Read the article

  • Bash script doesn't open in terminal on reboot

    - by twigg
    Quick overview: I have created a script that reboots the laptop after X amount of time and X amount of cycles. I have added the script to the start-up applications, and the script does seem to be running in the background, but it never opens a terminal window. Am I missing something? Adding the code (this is saved in a file called countdown.sh):

        #!/bin/bash
        # check if passed.txt exists; if it does, send to soak test
        if [ -f passed.txt ]; then
            echo reboot has passed $nol cycles
            sleep 5
            echo Starting soak tests
            sleep 5
            rm testlog.txt
            rm passed.txt
            phoronix-test-suite run quick-test
            exit 0
        fi

        # check if file testlog.txt exists; if not, create it
        if [ ! -f testlog.txt ]; then
            echo >> testlog.txt
        fi

        # read reboot file to see how many loops have been completed
        exec < testlog.txt
        nol=0
        while read line
        do
            nol=`expr $nol + 1`
        done

        # start the countdown, x is time limit
        let x=10
        while [ $x -gt 0 ]; do
            clear
            figlet "Rebooting in..."
            figlet $x
            let x-=1
            sleep 1
        done

        echo reboot success $nol >> testlog.txt
        shutdown -r now

        # set how many times the script should shut down the laptop
        reboot_count=1

        # if the number of reboots matches nol, then stop the script
        # and create a new text file called passed.txt
        if [ "$nol" == "$reboot_count" ]; then
            echo reboot passed $nol cycles >> passed.txt
        fi
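
    On the question itself: programs launched from Startup Applications get no terminal attached, so a script that draws with clear and figlet runs invisibly. One hedged fix is to make the startup entry launch a terminal emulator that runs the script; the path below is an assumption:

        # command for the Startup Applications entry: gnome-terminal runs the
        # script in a visible window instead of silently in the background
        gnome-terminal -x bash -c "/home/user/countdown.sh"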

    Read the article

  • guest crash on long backup via rsync

    - by ToreTrygg
    I recently upgraded the host to Ubuntu 9.10 with VMware Server 2.0.2; I have two guest machines. One is an SME Server, which crashed several times during a backup session using rsync to another PC (normal activities run regularly). The other guest has been up without problems for 25 days. I found in the log a lot of rows like these:

        Dec 20 05:29:27.445: vcpu-1| VLANCE: Ethernet0 skipped 2560 time(s)
        Dec 20 05:29:27.445: vcpu-1| VLANCE: 66 12 5 8 2 3 3 0 1 0 0 1 0 1 2 0
        Dec 20 05:29:27.445: vcpu-1| VLANCE: 0 0 1 0 1 0 0 0 1 0 0 0 0 1 0 2452
        Dec 20 05:29:27.651: vmx| ide0:0: Command WRITE(10) took 1.947 seconds (ok)
        Dec 20 05:29:37.945: vmx| ide0:0: Command WRITE(10) took 1.033 seconds (ok)

    When the virtual machine crashes, the log reports the following (I paste here only some parts to limit the length of the message):

        Dec 27 01:48:05.686: Worker#2| Caught signal 6 -- tid 700
        Dec 27 01:48:05.686: Worker#2| SIGNAL: eip 0x460422 esp 0xb124c024 ebp 0xb124c03
        Dec 27 01:48:05.712: Worker#2| SymBacktrace12 00000000 eip 0x39d7ee in function clone in object /lib/tls/i686/cmov/libc.so.6 loaded at 0x2d1000
        Dec 27 01:48:05.719: Worker#2| Unexpected signal: 6.
        Dec 27 01:48:05.720: Worker#2| Core dump limit is 0 KB.
        Dec 27 01:48:05.762: Worker#2| Child process 10455 failed to dump core (status 0x6).
        Dec 27 01:48:05.762: Worker#2| SymBacktrace13 00000000 eip 0x39d7ee in function clone in object /lib/tls/i686/cmov/libc.so.6 loaded at 0x2d1000
        Dec 27 01:48:05.779: Worker#2| Msg_Post: Error
        Dec 27 01:48:05.780: Worker#2| http://msg.log.error.unrecoverable VMware Server unrecoverable error: (Worker#2)
        Dec 27 01:48:05.780: Worker#2| Unexpected signal: 6.

    I have no idea how to solve the problem with this installation. I am thinking of downgrading the host to a version more compatible with VMware Server 2; I have read a lot of posts about the difficulty of installation, and I think the compilation problems during install could be related to my problem. Excuse me if the post isn't very clear; it's my first post here. Any help or suggestions will be appreciated. Thanks.

    Read the article

  • Workstations cannot see new MS Server 2008 domain, but can access DHCP. (solved)

    - by Radix
    The XP Pro workstations do not see the new replacement domain upon boot; they only see their cached entry for the old (Server 2003) domain controller. The old_server is not connected to the network. I have DHCP working with the same scope as the old_server. In my before-asking search for a solution I came across the following two articles, and I recall doing things as suggested by them:

        http://www.windowsreference.com/windows-server-2008/how-to-setup-dhcp-server-in-windows-server-2008-step-by-step-guide/
        http://www.windowsreference.com/windows-server-2008/step-by-step-guide-for-windows-server-2008-domain-controller-and-dns-server-setup/

    The only possible issue is that I was under the impression that the domain NetBIOS name needed to match the DC's NetBIOS name. The DC's NetBIOS name is city01, while the domain's FQDN is city.domain.org (I think this is mistaken and should have been just domain.org). But the second link led me to a post which I believe answers my question. I did as it instructed: I opened Local Area Connection Properties, selected TCP/IPv4, and set the sole preferred DNS server to the local host's static IP (10.10.1.1). Search for "Your problems should clear up" for the post I'm referencing:

        http://forums.techarena.in/active-directory/1032797.htm

    Have I misunderstood the instructions? I am hoping to reach the point where I can define users and user groups. Also, does TechNet have a single theoretical overview document I could read? I really don't like treating computers as magic. I will be watching this closely and will quickly answer any questions. If I've left anything out, it is because I did not know it was needed.

    PS: I am loath to ask obviously basic questions, but I am tired and wish to fix this before tomorrow. Also, this is my first server installation, so thank you for your help.
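
    If more workstations need the same change, the DNS setting from that post can also be applied from a command prompt on each XP machine; domain discovery in AD is DNS-driven, which is why pointing clients at the new DC's DNS service is what makes the domain visible. A sketch (the connection name is the XP default and an assumption):

        rem run on each XP workstation: point DNS at the new DC and flush the cache
        netsh interface ip set dns name="Local Area Connection" static 10.10.1.1
        ipconfig /flushdns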

    Read the article

  • 403 Forbidden error on Mac OSX - Apache and nginx

    - by tlianza
    Hi all. There are a million questions like this on Google, but I haven't found a solution to my problem. The default Apache install on my Mac is giving 403 Forbidden errors for everything (default directory, user home directory, virtual server, etc.). After sifting through the config files, I figured I'd give nginx a try. nginx serves files fine from its home directory, but it won't serve files from a subfolder of my user directory. I've configured a simple virtual host, and requesting index.html returns a 403 Forbidden. The error message in nginx's log file is pretty clear: it can't read the file:

        2011/01/04 16:13:54 [error] 96440#0: *11 open() "/Users/me/Documents/workspace/mobile/index.html" failed (13: Permission denied), client: 127.0.0.1, server: local.test.com, request: "GET /index.html HTTP/1.1", host: "local.test.com"

    I've opened up this directory to everyone:

        drwxrwxrwx   6 me  admin   204B Dec 31 20:49 mobile

    And all the files in it:

        $ ls -lah mobile/
        total 24
        drwxrwxrwx   6 me  admin   204B Dec 31 20:49 .
        drwxr-xr-x  71 me  me      2.4K Dec 31 20:41 ..
        -rw-r--r--@  1 me  me      6.0K Jan  2 18:58 .DS_Store
        -rwxrwxrwx   1 me  admin   2.1K Jan  4 14:22 index.html
        drwxrwxrwx   5 me  admin   170B Dec 31 20:45 nbproject
        drwxrwxrwx   5 me  admin   170B Jan  2 18:58 script

    And yet I cannot figure out why the nginx process cannot read index.html. It's running as the "nobody" user, but the permissions are set such that anyone can read them.
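
    The log line points at open() failing, and open() needs the execute (search) bit on every directory in the path, not just the final one; on a Mac, ~/Documents is typically mode 700, which blocks the "nobody" user from traversing it no matter how open the mobile folder is. A sketch of checking, and of the minimal fix (granting traversal only, not listing):

        # every directory on the path needs the execute bit for "nobody"
        ls -led /Users/me /Users/me/Documents /Users/me/Documents/workspace

        # minimal fix sketch: allow traversal through the blocking directories
        chmod o+x /Users/me/Documents /Users/me/Documents/workspace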

    Read the article

  • Kindle (client) for Mac

    - by doug
    So we're clear, I'm talking about the client/software version here (i.e., what you install on your Mac or PC), not the device. The Kindle client was recently released for the Mac. I bought a couple of Kindle-edition books and I'm reading them using this client. Astonishingly, two features I consider more or less essential to any ebook reader are missing in the Kindle client (either that, or I can't find them): (i) text searching; and (ii) highlighting text. First, does anyone know how to access the search feature? I'm aware of the "Go To" button at the top middle of the reader window; the options in that menu when you click the button are "Cover", "Table of Contents", "Beginning" and "Location". "Location" requires that you type in an integer (but it doesn't correspond to the page number; e.g., typing "167" brought me to the table of contents), not a search term. Second, there's a button in the upper right-hand corner of the window, "Show Notes and Marks", yet I can't find any way to highlight text. The only kind of "note" or "mark" I have been able to record is to "bookmark" a page by clicking the "bookmark" button, also at the top of the window.

    Read the article

  • How to get German QWERTY on Windows?

    - by Arturas M
    Well, I'm used to having the world-standard keyboard, which is QWERTY and not QWERTZ... but on Windows I can't find a choice for German input that uses QWERTY rather than QWERTZ. On Linux there was a German input with QWERTY, so it was fun. I believe it should exist on Windows too, because I'm sick of this QWERTZ, always having to correct and search for Z or Y... Yeah, I know: if the Germans made a mistake, it wasn't losing WW2...

    Edit: Maybe I wasn't clear enough. It's not about switching between different language inputs; it's about having all the German keys in the German input, just with Z and Y in the correct places, like all the world's keyboards use and like the US keyboard uses.

    Solution: Unfortunately, by searching everywhere and waiting for answers, I couldn't find what I needed, so I came up with this solution: I used the Microsoft Keyboard Layout Creator and created my own custom layout by modifying the default German layout and switching the places of Z and Y. In case somebody needs it and doesn't want to go the hard way, I decided to upload it; here are the links to download and install the keyboard layout:

        http://www20.zippyshare.com/v/95071447/file.html
        http://rapidshare.com/files/3822150342/German%20QWERTY%202%20by%20Arturas%20M.zip

    It will appear as a choice under German input between languages in your language bar. If you just want to use this German layout, remove all other German layouts and install this one. Good luck and have fun using it!

    Read the article

  • Install a KVM guest with LVM partitioning on an LVM partitioned host using virt-install fails, volume group already exists

    - by Rilik84
    Our setup consists of an Ubuntu 10.04 KVM host server with LVM, on top of which we install Ubuntu 10.04 virtual machines via virt-install and preseed. Each VM has a dedicated logical volume, which we pass to virt-install via the "--disk path=/dev/sysvg/vm-name" parameter. The virtual machines themselves use LVM as well. Virt-install seems to have a few issues with this setup.

    Say we want to install two virtual machines in parallel. The first VM installs correctly, while the second halts during partitioning, complaining with this error: "volume group name already in use". I tried removing the logical volumes and recreating them, and rebooted the KVM host to clear any leftovers, but the issue persists.

    The preseed tells the VMs to create a volume group named "sysvg", which is the same name used by the volume group of the KVM host. Considering this, I did another test: I got rid of the preseed part taking care of the partitioning/LVM creation and started the creation of two VMs in parallel. If, when prompted, I instruct the VMs' installer to do a guided partitioning of the disk with LVM, it still fails, giving this different error on both VMs: "Physical Volume /dev/sda5 is already in volume group sysvg".

    As I wrote, sysvg is the volume group used by the KVM host, so I thought the name clash was the issue; by removing the preseed part I skipped the volume group naming. I can't understand how the virtual machines would be aware of the presence of that volume group, since they're supposed to be isolated from it, especially after I removed and recreated the logical volumes dedicated to them. That detail should not be exposed to them at all! Thanks for the help. I'm clueless.
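
    Two hedged things worth checking, based on these symptoms: LVM metadata written by a previous install attempt survives an lvremove/lvcreate cycle of the host LV (the blocks are not zeroed), and the host's own LVM scan can pick up and activate volume groups it finds inside guest disks, which would make a guest-created "sysvg" collide with the host's. A sketch (the LV path is from the question; the filter line is an assumption to adapt to your device layout):

        # wipe stale LVM/partition metadata from the start of a reused guest LV
        dd if=/dev/zero of=/dev/sysvg/vm-name bs=1M count=10

        # in /etc/lvm/lvm.conf on the host: stop LVM scanning guest volumes, so
        # the host never sees the VGs living inside guest disks (a sketch)
        filter = [ "r|^/dev/sysvg/.*|", "a|.*|" ]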

    Read the article

  • Scroll wheel causes browsers + Windows explorer to go back

    - by KaptajnKold
    I use Windows 7 in a VirtualBox VM on a Mac. Lately, when I'm using any browser (IE9, Chrome, or Firefox) or even Windows Explorer, use of the scroll wheel followed by any movement of the mouse cursor causes the browser to go back. Very annoying. This happens when I use the scroll wheel on a USB-connected mouse (brand/model unknown, since I don't have it in front of me as I write this) or when I use two-finger scrolling on the trackpad when no mouse is connected. When I connect from my VM to a remote Windows box (Windows Server 2008), I experience the same problem. I have tried rebooting the VM, to no avail. I am not sure when the problem started exactly; it may or may not have been after I connected the USB mouse for the first time, but unplugging it and then rebooting didn't help. I have tried to google for a solution, but all I've found are people who accidentally pressed the Shift key while scrolling, which causes the browser to go backward or forward in the browser history. That, however, is not the problem I'm having. To be clear: in my case the browser only goes back when I move the mouse after I've used the scroll wheel. I'm at my wits' end :(

    Read the article

  • How would I setup iMail to forward a user's mail to another service w/o leaving a copy locally?

    - by Scott Mayfield
    I have an iMail 2006 server installation in which I have a particular user that has several aliases that all point to a single user (me, for the record). I've been copying all of my mail to GMail and reading it there, but it annoys me that I have to go back weekly, log into my mail account on iMail, and delete between 6 and 10 thousand copies of messages I've already received, in order to keep my mailbox from filling up (yes, I have it set with no quota, but I consider it bad form to just let the box grow indefinitely).

    I've got the copying set up via an inbound user rule, but I'm wondering how to accomplish a "copy and delete" rule. The manual isn't clear on what happens with multiple matching rules (will they be processed in order, or is it a first-match situation?), and there isn't a means to combine multiple actions into a single rule. If I use the "forward" action, I THINK that it's going to screw up all the sender information once the mail reaches my GMail account and show it as coming from me instead of the original senders (can anyone confirm that this is accurate?).

    An easy answer would be to delete my user account entirely and replace it with an alias that maps to my GMail account, but then I would lose my ability to log into the system for admin duties. So that leads me to creating a second, lesser-known account for admin use; but since it's a real account, sooner or later I'm going to get mail sent to it, and I'll be back to the same situation of having a user account that doesn't get emptied periodically. I imagine I can set the quota to 0 MB to cause all incoming mail to my admin account to bounce, or set up an inbound rule to bounce everything, but this is starting to sound kludgy to me.

    Does anyone know of a more direct workaround to copy a user's incoming mail to an outside server and then delete the local copy, without removing the account entirely? Or is this just wishful thinking?

    Read the article

  • DAS vs SAN storage for serving 2 to 4 nodes

    - by Luke404
    We currently have 4 Linux nodes with local storage, arranged in two active/passive pairs with storage mirrored using DRBD, running virtual machines (actually using the Xen hypervisor) for typical hosting workloads (mail, web, a couple of VPSes, etc.). We're approaching the (presumed) maximum IOPS of those servers, and we're planning to migrate to an external storage solution with two active nodes, with capacity for up to four active nodes.

    Since we're an all-Dell shop, I've done some research and found that the MD3200 / MD3200i products should be the ones we're looking for. We are pretty sure we won't be attaching more than 4 hosts to a single storage array, and I'm wondering if there is any clear advantage to one or the other.

    In theory I should be able to attach 4 SAS hosts to a single MD3200 (single links on a single-controller MD3200, or dual redundant SAS links from each host to a dual-controller MD3200), or 4 iSCSI hosts to a single MD3200i (directly on its 4 GigE ports without any switch, again with dual links for the dual-controller option). Both setups should let us implement live VM migration, since all hosts can access all the LUNs at the same time, together with some shared filesystem like GFS2 or OCFS2. Both setups should also allow full redundancy of the whole system (assuming dual controllers in the storage).

    One difference I can see is that the DAS solution is actually limited to 4 hosts, while the iSCSI one should be able to grow to more hosts (by adding two GigE switches to the mix). One point for the iSCSI solution is that it would allow us to start out with our current nodes and upgrade them at a later time (we can't add other SAS controllers, but they already have 4 GigE ports each). With the right (iSCSI|SAS) controllers I should be able to connect diskless nodes and boot them off the external storage, which I think is a good thing (getting rid of any local storage). On the other hand, I would have thought the SAS option would be cheaper, but it seems like an MD3200 actually costs a little less than an MD3200i (?).

    (Please note: I've used Dell gear in my examples since that's what we're looking for, but I assume the same goes for other vendors.) I would like to know if my assumptions above are correct, and if I'm missing any important difference between the two setups.

    Read the article

  • Active Directory Child Domain Replication Problems

    - by MikeR
    Hi. I've recently inherited an Active Directory (all DCs Windows 2003) which has been configured with several child domains that are used as test environments for our CRM software. Two of these child domains have been used for testing with dates in the future (2015), throwing them well outside of the Kerberos tolerance for time skew, and they're flooding my event logs with replication errors such as the following:

        Description: The attempt to establish a replication link for the following writable directory partition failed.
        Directory partition: CN=Schema,CN=Configuration,DC=ad,DC=xxxxxxx,DC=com
        Source domain controller: CN=NTDS Settings,CN=TESTDC001,CN=Servers,CN=SiteName,CN=Sites,CN=Configuration,DC=ad,DC=xxxxxxx,DC=com
        Source domain controller address: 38e95b2a-35af-4174-84ba-9ab039528cce._msdcs.ad.xxxxxxx.com
        Intersite transport (if any):
        This domain controller will be unable to replicate with the source domain controller until this problem is corrected.
        User Action: Verify if the source domain controller is accessible or network connectivity is available.
        Additional Data: Error value: 5 Access is denied.

    I'd also like to upgrade to Windows 2008 at some point, but I wouldn't want to attempt any schema updates while I'm not 100% confident in the replication. I'm guessing my only real solution will be to get rid of these child domains. The child domains are operating as stand-alone domains; each DC is up and running and authenticating test users fine. I'm guessing the best solution would be to delete the domains (although I'd happily be told otherwise). The clock-forwarding appears to have been happening for several years, so I'm assuming I can't just put the clocks right (I'm guessing the scope for this would be 180 days, the same as the tombstone lifetime).

    With the replication errors, would I be able to dcpromo each child domain's DC, select it as the last domain controller in the domain, and have the child domain deleted? Or would I be better off treating each domain as an orphaned domain and using Microsoft's instructions to clean up as such? Any advice would be much appreciated.
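
    Should demotion fail because replication is already broken, the orphaned-domain route usually comes down to a forced demotion on the test DC followed by metadata cleanup on a healthy DC in the parent. A hedged sketch of the interactive session (the server name is a placeholder; Microsoft's orphaned-domain cleanup documentation covers the full procedure):

        dcpromo /forceremoval          (on the test-domain DC, if it still boots)

        ntdsutil                       (on a healthy DC in the parent domain)
        metadata cleanup
        connections
        connect to server healthydc.ad.xxxxxxx.com
        quit
        select operation target
        list domains
        select domain <number>
        quit
        remove selected domain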

    Read the article

  • Pair programming with tmux and Vagrant

    - by neezer
    Does anyone have a clear step-by-step guide for setting up a shared tmux session on a Vagrant vbox that my coworkers (on our local office LAN) could SSH into? The articles I've found online only seem to cover setting this up from machine to machine (no VirtualBox setups), and I'm not very good at networking, so I haven't been able to extrapolate a solution... We're all running the latest Macs in our office, btw. Here's one article I've found but haven't been able to get working with Vagrant:

        http://blog.voxdolo.me/remote-pairing-with-vim-and-tmux.html

    EDIT: To clarify, I don't really know how I should be setting up Vagrant to allow me to SSH into it from a machine outside the one hosting the VM. The article above suggests that I add the tunnels host on my physical machine running the VM (hereon referred to as the MBP), so I did that. Next is the ProxyCommand host declaration, which I have also assumed should live on the MBP. So next I try SSHing into the MBP from a guest machine (another separate physical machine on my network), and that seems to work... but that only gets me into the MBP, not the Vagrant image running on the MBP.

    I normally log into the Vagrant image on the MBP via vagrant ssh (per the docs), and I know how to forward ports on the Vagrant VM to the MBP, but it's unclear to me how I could forward ports/SSH from the MBP to the Vagrant VM, which I assume I would need to do so that my guest machine could SSH in (through the MBP) to my Vagrant image. That, in a nutshell, is what I'm trying to accomplish.

    I do my development work in Vagrant VMs, which keeps my MBP nice and clean of any dev-related cruft and also keeps my dev environments totally isolated from one another, yet I would like to start pair programming with my coworkers via tmux; thus this question. I would like to accomplish all of this without setting up an additional user account on the MBP, or giving my coworkers access to my local user account on the MBP to get to my Vagrant VM, if that's at all possible.
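
    One hedged way to skip the double hop entirely: forward the VM's SSH port to a port on the MBP that listens on all interfaces, so coworkers connect straight to the MBP's LAN address. A Vagrantfile sketch (box name and port are assumptions):

        # Vagrantfile: expose the guest's SSH port on the office LAN (a sketch)
        Vagrant.configure("2") do |config|
          config.vm.box = "precise64"
          config.vm.network :forwarded_port, guest: 22, host: 2222, host_ip: "0.0.0.0"
        end

    Coworkers on the LAN could then join with ssh -p 2222 vagrant@<the MBP's LAN IP> (the vagrant account and its well-known insecure key are Vagrant defaults) and run tmux attach to share the session, with no extra account on the MBP itself.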

    Read the article

  • ADFS v.2.0 transitive trust in a federation scenario

    - by masi
    Currently I'm working with ADFS to establish a federated trust between two separate domains. My question is simple: does ADFS v2.0 support transitive trust across federated identity providers? I know that ADFS v1.0 does not, as stated in this document on page 9. But looking at the claims rules that come with ADFS 2.0, it seems to be possible, as a Microsoft partner confirmed. However, the documentation on this topic is a mess! Simply no ADFS v2.0-related statements on this topic that I was able to find (if you have any documentation on this, PLEASE help me out, guys!).

    To be more clear, let's assume this scenario: federation provider (A) trusts federation provider (B), which trusts identity provider (C). So, does (A) trust identities coming from (C) across (B)?

    Also, if it is possible, there are some things that I'm especially interested in:

    - Is it possible to restrict the transitive trust in ADFS in any way? If so, how?
    - How does the transitive trust affect the Issuer and OriginalIssuer properties of the claims?
    - If transitive trust is used together with claims transformations, and provider (B) transforms incoming claims from (C) in a way that they become (new) claims of the same type and value, how does this affect the Issuer and OriginalIssuer properties?
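
    For readers who have not seen the ADFS 2.0 claim rule language referenced above, the kind of rule in question on (B) looks like the sketch below; a simple pass-through keeps type and value, and whether Issuer/OriginalIssuer survive the (C)-to-(A) hop is precisely what the asker wants documented. The claim type URI is the standard e-mail address claim:

        c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"]
            => issue(claim = c);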

    Read the article

  • Recommendations for hosting large videos

    - by Clinton Blackmore
    I recently created and put a 45-minute, 300 MB video file on my website and told a mailing list about it. Checking my site stats, I see that I've used 20% of my "unlimited" bandwidth for the month. As I want to be able to have several videos like this, clearly I need to consider other options.

    The appeal of hosting the files on my own site (aside from the supposedly unlimited disk space and bandwidth) is being able to control the format, resolution, and quality of the videos, as well as to make it clear that I'm the copyright holder (although the videos will be under a Creative Commons license). I find that for the screencasts I'm making, a high resolution (say 3/4 of 1024 x 768) really makes it easier to see what is going on on the screen. It is also always a plus not to have the experience marred by advertisements. One more wrench to throw in is that while the videos are non-commercial, they do promote a club, and it seems that that falls afoul of some terms of service (especially for free services; while free is very nice, I will certainly consider putting up some money).

    What recommendations do you have for (fairly) long, high-resolution videos? Should I look in depth at sites like YouTube and Vimeo, should I be considering a file-sharing site [I have no qualms with someone downloading the entire video first; I wouldn't want to watch 45 minutes in my browser!], hosting files with BitTorrent (ugh, I think that'd reduce my audience), or should I be looking into other web hosts (and if so, who)?

    Read the article

  • Problem: IIS 7.0 locking files during upload

    - by viscious
    I am running Server 2008 with IIS 7.0 and the FTP add-on for IIS 7.0. I have the FTP site configured and mostly working, except that about 70% of the time when transferring a file, the upload will hang forever. If I disconnect the FTP client, reconnect, and try to upload the same file, I get an error on the client saying the file is locked, and I have to restart the FTP service to clear the lock. I fired up Process Explorer and did a search on the file in question, and sure enough the FTP service has a lock on the file; it takes around 20 minutes to release the lock on its own (and sometimes longer). This lock stays around even after I disconnect the client. Like I said, this only happens about 70% of the time; the other 30% of the time the transfer goes through just fine.

    Things I have verified:

    - Not a firewall issue. The server is using passive port range 8000-9000, which is allowed on the firewall.
    - Not a NAT issue. The server has a globally routable IP address.
    - All recommended/required updates are installed.

    I have 5 other servers in a very similar configuration, and this is the only one I have problems with.
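
    For anyone chasing the same lock from the command line: Sysinternals handle.exe gives the same view as Process Explorer, and the restart can be narrowed to the FTP service alone; ftpsvc is the service name used by the IIS 7 FTP add-on, and the file path below is a placeholder:

        rem show which process holds the lock (Sysinternals handle.exe)
        handle.exe \inetpub\ftproot\stuck-file.dat

        rem clear it by bouncing just the FTP service, not the whole of IIS
        net stop ftpsvc
        net start ftpsvc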

    Read the article

  • How can I tell my dd-wrt router to use someone's Amazon Affiliates link when I point my browser to amazon.com?

    - by Michael Paul
    Here's what I'd like to do. Instead of making a one-time donation to one of my favorite free tools (junecloud.com), I'd like to do what they suggest here and use their Amazon Affiliates link for all my Amazon shopping. I shop at Amazon once or twice a week, so this is a great way to let them earn lots of long-term cash without me dropping a dime. My thought was to go into my dd-wrt enabled router and tell it: "any time I go to amazon.com on any computer in the house, please go to http://www.amazon.com/gp/redirect.html?link_code=ur2&tag=junecloud-20&camp=1789&creative=9325&location=%2F instead". (That URL simply redirects me to amazon.com, but every purchase I make during that session is credited to JuneCloud.) Once logged into dd-wrt, I went to Services -> Services -> DNSMasq, but I'm not really sure how to get it to work from there, or if it's even possible. I know I can redirect IP addresses, but I'm looking to redirect someone on my network from amazon.com to the special Amazon affiliate link. Hope that's clear. Thanks for any replies!
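
    A note on what DNSMasq can and cannot do here: DNS only maps names to addresses and never sees URL paths, so dnsmasq cannot substitute the affiliate link by itself. What it can do, sketched below with assumed addresses, is steer amazon.com to a small web server on the LAN that answers everything with an HTTP redirect. One wrinkle: the affiliate URL itself lives on www.amazon.com, so that more specific name has to keep resolving normally (dnsmasq honours the most specific domain match), and anyone who types www.amazon.com directly bypasses the trick:

        # dd-wrt -> Services -> DNSMasq, "Additional DNSMasq options" (a sketch)
        address=/amazon.com/192.168.1.1
        server=/www.amazon.com/8.8.8.8

        # something on 192.168.1.1:80 must then answer with a redirect, e.g. a
        # busybox netcat loop (assumed to run on the router or a LAN box):
        while true; do
          printf 'HTTP/1.1 302 Found\r\nLocation: http://www.amazon.com/gp/redirect.html?link_code=ur2&tag=junecloud-20&camp=1789&creative=9325&location=%%2F\r\n\r\n' | nc -l -p 80
        done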

    Read the article

  • How to diagnose occasional sudden resets?

    - by steve314
    I have a Windows XP system, and I have recently upgraded it by adding 2 x 1 GB sticks of RAM to the 2 x 0.5 GB already present. Since then, about once per day (the system is used 8+ hours per day), the system has suddenly and unexpectedly reset. On a couple of occasions the system has frozen completely, only responding to the power button being held in for several seconds to force power off. Nothing at all ever appears in the system event log that might indicate a possible cause; everything seems to suggest business as usual.

    Sounds like faulty memory, but memtest86+ says otherwise: a full test, taking over an hour, found no issues. The next likely suspicion, then, is that I've knocked something while installing the RAM. Trouble is, everything I can think of to test seems fine. I've opened up the case and prodded a few things around, hoping to get better contact on connections etc., but there's no sign yet as to whether that has made a difference or not. I thought about a malware-related timing fluke, but again, so far as I can tell I'm all clear.

    All I can think of to add to my checklist (mainly of things that I can't easily check) is:

    - The power supply is (1) only 350W, (2) not necessarily the best quality, and (3) powering a Prescott P4 640 3.2GHz. Could that be borderline overloaded or about to die? How do I check?
    - Is it possible that the CPU isn't getting cooled properly? I haven't had the fan past normal tickover even doing video encoding, and the only sane temperature reading from SpeedFan is pretty steady at 36 Celsius, so probably not.

    Any thoughts? Is there a standard procedure for diagnosing this kind of fault?

    Read the article
